Internet Protocol (IP) Page

Internet Protocol (IP) is the technology that allows data to cross networks, using a destination address (IP address) to make sure each packet reaches the right place. IP provides no guarantee that a datagram successfully reaches the receiver: the datagram either arrives or it does not. If packet loss is a problem, it is up to the upper-level protocols and/or applications running on top of IP to detect lost packets and retransmit them if necessary. Transmission Control Protocol (TCP) ensures the correct delivery of data or its retransmission if it gets lost: TCP automatically retransmits packets that have not been acknowledged by the receiver within defined time bounds. Together they form a powerful networking system called TCP/IP. TCP/IP is the protocol suite that runs the Internet.

TCP/IP is a layered set of protocols that forms the basis of all communications in the Internet. The most important protocols of the suite are Internet Protocol (IP), Transmission Control Protocol (TCP) and User Datagram Protocol (UDP); most of the other protocols in the TCP/IP suite are built on top of those three. The IP version in wide use is IP version 4, known as IPv4. The newest version is IP version 6 (IPv6), which is not yet in wide use outside research laboratories and test networks. The rest of this document refers in its technical details to the currently used IPv4 protocol (some details like addressing are different in IPv6, but most of the basics are the same).

IP datagrams have a variable size, from 20 bytes up to 64 kilobytes. The first 20 bytes of an IP datagram contain the IP packet header, which is protected by a header checksum included in the header. The header may be followed by optional IP header options, which are quite rarely used. After that comes the actual data carried in the IP packet. Because of the maximum packet size limitations of different networks, IP datagrams are seldom bigger than a few kilobytes; a typical IP packet travelling in the network is between 40 and 1500 bytes. The 1500-byte maximum comes from the maximum packet size on Ethernet networks, where most Internet traffic originates.

The IP protocol is designed so that it can be adapted to run over almost any imaginable networking technology. IP is often transported over Ethernet, serial data lines like traditional modem lines (PPP), ATM networks and many other networking technologies. Different networking technologies have different limitations, for example the maximum packet size they can properly handle. If a packet sent to the network is larger than the underlying network can handle, there are provisions to split datagrams up into pieces; this process is referred to as fragmentation. The IP header contains fields indicating that a datagram has been split, and enough information to let the receiver put the pieces back together. This is transparent to users and normal applications. To optimize network use, some upper protocols like TCP have an option to negotiate the optimum packet size so that packets can pass through the network without fragmentation. In practice, in well-operating networks, fragmented IP packets are very rare.
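The fixed 20-byte header layout described above can be illustrated with a short sketch that decodes the header fields and verifies the header checksum. This is an illustrative Python snippet, not any stack's real code; field names follow the IPv4 header format (RFC 791).

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Standard Internet one's-complement sum over 16-bit words."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header)//2}H", header))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def parse_ipv4_header(packet: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header at the start of a datagram."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBHII", packet[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl_bytes": (ver_ihl & 0x0F) * 4,          # header length in bytes
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                           # 6 = TCP, 17 = UDP, 1 = ICMP
        "fragment_offset": (flags_frag & 0x1FFF) * 8,
        "more_fragments": bool(flags_frag & 0x2000),  # set on all but last fragment
        "src": ".".join(str((src >> s) & 0xFF) for s in (24, 16, 8, 0)),
        "dst": ".".join(str((dst >> s) & 0xFF) for s in (24, 16, 8, 0)),
    }
```

A header whose checksum field is correctly filled in sums (with the one's-complement fold) to zero, which is exactly the check a receiver performs.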

The development of TCP/IP protocol suite began in the late sixties as a research project funded by the US government and military. Nowadays the Internet Society (ISOC), the Internet Architecture Board (IAB), the Internet Engineering Task Force (IETF) and the Internet Research Task Force (IRTF) control its development.

All the official standards related to the TCP/IP protocol suite are published as RFCs. The Requests for Comments (RFCs) form a series of published notes, started in 1969, about the Internet (originally the ARPANET). The notes discuss many aspects of computer communication, focusing on networking protocols, procedures, programs, and concepts, but also including meeting notes, opinions, and sometimes even jokes. RFCs are identified by unique numbers, with higher numbers for newer RFCs. RFC publications do not have the status of official standards, but the protocols published in them can be considered de facto Internet standards.

The Internet is a gigantic collection of millions of computers, all linked. It is a collection of very many individual networks that are interconnected and run the TCP/IP protocol. The Internet consists of 250 000-plus networks, all using the same technical standards (TCP/IP).

The network allows all of the computers to communicate with each other and allows over a billion people to get online. The Internet is an internetwork, which means that it is a collection of individual networks, connected by intermediate networking devices, that functions as a single large network. The network contains many different types of computers and network devices, but they can all communicate together because they use the same TCP/IP protocol suite and the network is sensibly structured into sub-networks.

The Internet is based on the TCP/IP protocol suite and the "catenet model". This model assumes that there are a large number of independent networks connected together by gateways. The user should be able to access computers or other resources on any of these networks. Datagrams will often pass through a dozen different networks before getting to their final destination.

Different networks consist of routers connected to each other using suitable data connections. A router takes packets from its incoming network links and forwards them to the outgoing links in the direction they should go to reach their destination. The task of finding how to get an IP datagram to its destination is referred to as 'routing'. Routing Internet traffic consists of determining suitable packet forwarding tables (routing tables) and then forwarding IP packets between different network interfaces within the router based on the instructions in the forwarding table. Normal IP packet forwarding is based entirely upon the network number of the destination address. In more complex cases the routing and forwarding processes may also take into consideration other fields in the IP header, such as the source IP address (for source routing), the protocol contained in the packet, the type of service (TOS) field, or other information in the packet. The most complex packet forwarding devices, like firewalls and layer 4 switches, may even look inside the data contained in the IP packet when making the forwarding decision.

When a computer wants to send a datagram, it first checks to see if the destination address is on the system's own local network. If so, the datagram can be sent directly. Otherwise, the system expects to find a routing table entry for the network that the destination address is on, and the datagram is sent to the gateway listed in that entry. In large networks like the Internet this routing table can become quite big, so various strategies have been developed to reduce its size. The most often used strategy is to depend upon "default routes": often there is only one gateway out of a network, so the computers inside the network need only know that one gateway address.
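The lookup described above, including the default route, can be sketched as a longest-prefix match over a forwarding table. This is a toy illustration in Python; the networks and next-hop addresses are made-up documentation values.

```python
import ipaddress

# Toy forwarding table: (network, next hop). The 0.0.0.0/0 entry is the
# "default route" used when no more specific network matches.
ROUTES = [
    (ipaddress.ip_network("192.0.2.0/24"),    "direct"),    # the local network
    (ipaddress.ip_network("198.51.100.0/24"), "10.0.0.2"),
    (ipaddress.ip_network("0.0.0.0/0"),       "10.0.0.1"),  # default gateway
]

def next_hop(dst: str) -> str:
    """Pick the most specific (longest-prefix) route matching dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop
```

A host with only a default route keeps just the first and last entries: anything not on the local network goes to the one gateway.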

The gateways/routers which connect multiple networks cannot depend upon this strategy; they have to have fairly complete routing tables. A routing protocol is simply a technique for the gateways to find each other and keep up to date about the best way to get to every network. There are multiple network routing protocols in use, including RIP, EIGRP and OSPF.

In general, all of the machines on the Internet can be categorized as two types: servers and clients. Computers that provide services are servers; computers that are used to connect to those services are clients. A server computer may provide one or more services on the Internet. The most common services provided by Internet servers are name service (DNS), e-mail services (SMTP), web services (HTTP) and file downloading (FTP) services. A computer running a multitasking operating system like Windows, Linux or any UNIX version can act as client and server at the same time.

Each computer connected to the Internet is assigned a unique address called an IP address. IP addresses are 32-bit numbers, normally expressed as 4 "octets" in "dotted decimal" notation. A typical IP address looks like this: 203.0.113.24

The four numbers in an IP address are called octets because they each consist of eight bits, giving values between 0 and 255 (2^8 possibilities per octet). The Internet is made up of millions of computers and routers, each with a unique IP address. A server typically has a static IP address that does not change very often. A home machine that is dialing up through a modem often has an IP address that is assigned by the ISP, and this ISP-assigned address can be different every time the user dials up. Modern broadband connections like ADSL, cable modem, etc. can have addressing systems that allocate fixed addresses, allocate a new address every time, or something in between (usually certain addresses are allocated to users for some time and can change after that if needed).
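The relationship between the dotted-decimal form and the underlying 32-bit number can be shown in a few lines. The function names below are illustrative, not from any standard library API.

```python
def ip_to_int(dotted: str) -> int:
    """Pack four octets (each 0-255) into one 32-bit number."""
    octets = [int(o) for o in dotted.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    return (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]

def int_to_ip(n: int) -> str:
    """Unpack a 32-bit number back into dotted-decimal notation."""
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))
```

Because each octet holds exactly eight bits, the full address space is 2^32 (about 4.3 billion) addresses.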

Because most people have trouble remembering the strings of numbers that make up IP addresses, and because IP addresses sometimes need to change, all servers on the Internet also have human-readable names, which consist of computer name and domain name.

The DNS (Domain Name System) is the address book of the internet, matching numeric IP addresses to human-readable names. People find names easier to use and remember than numeric addresses. Using names also has the benefit that the computers operating different services can change their IP addresses, yet people will still easily find them under the same name once the name-to-address mapping is updated. There is no single central list of everyone's internet addresses, because that would be far too big for any single source to handle. Instead, the DNS splits addresses into their constituent parts - called domains - and gives each machine in the network enough information to know where to locate the next machine down the line. This is known as a distributed database.
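The way a name decomposes into domains can be sketched by listing the zones a resolver walks from the root downwards. This is a simplified illustration only; real resolvers also follow delegations, use caches, and so on.

```python
def resolution_path(name: str) -> list:
    """List the zones walked for a name, most general first:
    root -> top-level domain -> domain -> host."""
    labels = name.rstrip(".").split(".")
    path = ["."]                               # the root zone
    for i in range(len(labels)):
        # each step appends one more label from the right-hand side
        path.append(".".join(labels[-(i + 1):]) + ".")
    return path
```

Each zone in the list only needs to know where to find the servers for the next, more specific zone, which is what makes the database distributed.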

Although the DNS is a distributed database, it needs a starting point: a list of where to go for the first part of an internet address to start a search for a particular machine. This list of where to start is called the root zone file. It is a list of 248 country code top-level domains (ccTLDs) - such as .uk and .fr - as well as 14 generic top-level domains (gTLDs), which are subject-based, such as .com, .net and .org. The list is held on 13 machines across the world.

The Internet Corporation for Assigned Names and Numbers (ICANN) is a not-for-profit organisation that manages the DNS. It decides who gets to operate the most basic domains, the top-level domains such as .com and .org, as well as all the world's country codes, and it is responsible for allocating address space on the internet. ICANN was set up in California under contract to the Department of Commerce, so in practice the US government more or less oversees the internet's address structure, the domain name system (DNS).

IP traffic is nowadays most often transported using Ethernet networks. Ethernet systems (comprising interface controllers, bridges, routers, management systems and other devices) represent the most widely deployed networking technology in history. Many of the current and proposed next-generation residential broadband access technologies advocate the use of Ethernet as the universal service interface technology, so it is useful to understand how IP over Ethernet works. The Ethernet system has its own addressing, which uses 48-bit addresses. Each Ethernet network interface card comes with a unique address built in at the factory, so users can just plug Ethernet components together without needing to worry about Ethernet addressing. To be able to send a packet from one computer to another in an Ethernet network, the computers need to know the Ethernet address of the other end. For this, each computer connected to the network has a table which maps IP addresses to Ethernet addresses; this table is called the ARP table. To figure out what Ethernet address to use for the first time, a separate protocol called ARP (Address Resolution Protocol) is used. When a computer needs to know the Ethernet address of a certain other computer, it sends an ARP request to the network as a broadcast message. Every computer in the network listens to ARP requests and responds with an ARP reply if it gets an ARP request meant for it. The computer that sent the ARP request saves the information from the ARP reply in its ARP table for future use. Most systems treat the ARP table as a cache and clear entries if they have not been used for a certain period of time.
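The cache behaviour described above can be sketched as a small table with entry expiry. This is a toy model, not a real stack: real implementations also queue outgoing packets while the ARP request is outstanding, and the timeout value here is arbitrary.

```python
import time

class ArpTable:
    """Toy ARP cache: maps IP address -> (MAC address, learn time).
    Entries older than the timeout are treated as stale and dropped."""

    def __init__(self, timeout: float = 60.0):
        self.timeout = timeout
        self.entries = {}

    def learn(self, ip: str, mac: str, now: float = None):
        """Record a mapping, e.g. from a received ARP reply."""
        self.entries[ip] = (mac, now if now is not None else time.time())

    def lookup(self, ip: str, now: float = None):
        """Return the cached MAC, or None if missing/expired
        (in which case a real stack would broadcast an ARP request)."""
        now = now if now is not None else time.time()
        entry = self.entries.get(ip)
        if entry is None or now - entry[1] > self.timeout:
            self.entries.pop(ip, None)
            return None
        return entry[0]
```

Passing explicit `now` values makes the expiry behaviour easy to exercise without waiting for wall-clock time.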

The Ethernet layer takes the IP packet data as it is and adds its own header before the actual IP packet and a packet checksum after it. The checksum (CRC) is used on the receiving end to check that the packet was received from the network correctly; if the data is not received correctly, the receiving computer discards the packet. Because an Ethernet frame must carry a minimum amount of payload (46 bytes) and an IP packet can be shorter than that, padding bytes sometimes need to be added after the IP packet to reach the minimum frame size. This padding is only added in the Ethernet layer and is not seen by upper-level protocols.
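Frame assembly with that padding can be sketched as follows. Ethernet's minimum payload is 46 bytes (a 64-byte minimum frame including the CRC); the 4-byte CRC, normally computed by the network hardware, is left out of this illustrative snippet.

```python
import struct

ETH_MIN_PAYLOAD = 46   # shorter payloads are zero-padded up to this length

def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int,
                payload: bytes) -> bytes:
    """Assemble destination MAC + source MAC + EtherType + payload.
    EtherType 0x0800 marks an IPv4 packet. The trailing CRC that the
    NIC appends is omitted here."""
    if len(payload) < ETH_MIN_PAYLOAD:
        payload += b"\x00" * (ETH_MIN_PAYLOAD - len(payload))
    return dst_mac + src_mac + struct.pack("!H", ethertype) + payload
```

The receiver uses the total length field in the IP header, not the frame length, to know where the real data ends, which is why the padding stays invisible to upper layers.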

It is often necessary to also understand some details of the TCP, UDP and ICMP protocols to understand how Internet networking works.

The Transmission Control Protocol (TCP) provides a reliable, connection-oriented, in-order transport service for today's Internet applications. Given the simple best-effort service provided by typical IP networks, TCP must cope with the different transmission media crossed by Internet traffic. Most Internet traffic originates from TCP sources. TCP is a reliable connection-oriented byte-stream transport-level protocol built on top of IP, which means that TCP datagrams are always carried inside IP packets. TCP is responsible for breaking up a message into datagrams, reassembling them at the other end, re-sending anything that gets lost, and putting things back in the right order. To provide this reliable operation, TCP is designed as a window-based, acknowledgement-clocked flow control protocol. Basically this means that TCP connection end points acknowledge received packets to the sender, and the sender retransmits if it does not get an acknowledgement from the receiver. Connection-oriented means that before the end points are able to communicate using TCP, a connection has to be established. In addition to the IP addresses used by IP, TCP uses 16-bit integers called port numbers to identify the communication end points. TCP port numbers are needed because one computer can have many communication end points used by different application programs and services. Some of the port numbers are standardized for certain services (for example, port 80 for the HTTP protocol used to run the WWW system).

A TCP connection appears to be a two-way data stream, with no datagram structure evident. The IP addresses and TCP port numbers of the end points identify a TCP connection; IP-address/TCP-port pairs are commonly called sockets. The data that an application sends using TCP may be sent in varying-sized datagrams, depending on how TCP decides to send it. Good TCP implementations use automatic MTU discovery, based on the ICMP protocol, to decide the optimum size of datagrams to send to the network (to avoid packet fragmentation on the way). TCP is designed to automatically adapt its sending rate to the available network capacity. To do this, TCP has a flow-control mechanism which dynamically adapts to the available network transport capacity and to the capacity of the receiving end to consume data. The basic idea of TCP operation is that it first probes the network speed and starts using it: if the network is congested, TCP reduces its transmission rate, and if the network has free capacity, TCP increases its sending rate to make use of it. The TCP flow-control mechanism provides both efficient adaptation to the underlying network capabilities and a fair share of the network bandwidth for the TCP connections sharing the network. Because TCP operates like this, a connection is always slow at the start but quite quickly accelerates to use the available network speed, and once it has reached the available speed, the transmission rate tends to oscillate around the optimum. In practice the TCP protocol works amazingly well with variable network speeds and capabilities. Because of this good operation, it is utilised in most of the application-level protocols used to implement Internet services: common protocols like HTTP, TELNET, FTP, SNMP, NNTP and SSH are all built on top of TCP.
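The probe-and-back-off behaviour described above is essentially the additive-increase/multiplicative-decrease (AIMD) rule applied to TCP's congestion window. A toy sketch of just that rule (not a full TCP implementation; slow start, timeouts and acknowledgement clocking are deliberately omitted):

```python
def aimd(events, cwnd: float = 1.0) -> list:
    """Trace a congestion window under the AIMD rule: grow by one
    segment per acknowledged round trip, halve on a loss event.
    `events` is a sequence of "ack" / "loss" strings."""
    trace = [cwnd]
    for e in events:
        if e == "ack":
            cwnd = cwnd + 1.0          # additive increase
        else:
            cwnd = max(1.0, cwnd / 2.0)  # multiplicative decrease
        trace.append(cwnd)
    return trace
```

Fed a steady alternation of growth and occasional loss, the trace shows exactly the sawtooth oscillation around the available capacity that the paragraph above describes.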

This chart shows some common TCP ports:

TCP port   Daemon   Use
21         FTP      File Transfer Protocol
22         SSH      Secure Shell
23         Telnet   Terminal emulation
25         SMTP     Outgoing mail
80         HTTP     Web server
110        POP3     Incoming mail
123        NTP      Network Time Protocol
143        IMAP4    Incoming mail
194        IRC      Internet Relay Chat
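The port-and-socket model behind this table can be exercised directly with the standard socket API. A minimal sketch over the loopback interface (the OS picks a free ephemeral port here rather than one of the well-known numbers above):

```python
import socket
import threading

def echo_server(sock):
    """Accept one connection and echo whatever bytes arrive."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Server end point: bind to port 0 so the OS chooses a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Client end point: the (IP address, port) pair identifies the connection.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
```

Both ends are sockets in the sense used above: an IP address paired with a 16-bit port number.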

Another popular protocol used in the Internet is UDP (User Datagram Protocol). It simply packs datagrams inside IP packets. The UDP protocol does not contain any checking of whether the transmitted packet reaches its destination, and hence avoids the overhead of retransmission in the case of errors or lost packets. This kind of protocol is suitable for applications like IP telephony and video streaming, where it is necessary to get packets to the other end with as little delay as possible, but a few lost packets do not cause too much trouble. UDP does not include transport-level congestion control, which implies that the amount of traffic transmitted to the network is limited entirely by application-level congestion control. With a UDP application that has no application-level congestion control, it is possible to fill a congested network completely with UDP traffic, which does not stop until the application stops transmitting. Bandwidth-hungry applications which use UDP should therefore be used with caution.
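UDP's fire-and-forget nature is visible directly in the socket API: there is no connection setup and no acknowledgement. A minimal sketch over the loopback interface (where delivery is in practice reliable, unlike on a congested real network):

```python
import socket

# UDP needs no connection: each sendto() is an independent datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # OS picks a free port
receiver.settimeout(2.0)             # don't block forever if a datagram is lost
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"voice sample 1", addr)   # fire and forget: no ack, no retry

data, _ = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

If the datagram were lost on a real network, the sender would never know; any recovery would have to come from the application itself.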

ICMP (Internet Control Message Protocol) is a protocol for control messages. ICMP is used for error and other messages intended for the TCP/IP software itself rather than for any particular user program. For example, if you attempt to connect to a host, your system may get back an ICMP message saying "host unreachable". ICMP packets are used for network routing control, for testing network operation (MTU discovery) and to indicate failure to other protocols (such as TCP and UDP). The widely used network-analyzing program "ping" uses the ICMP protocol's mandatory ECHO_REQUEST datagram to elicit an ICMP ECHO_RESPONSE from a host or gateway, probing the "distance" to the target machine. The program "traceroute" is a network-analyzing program that uses the ICMP protocol to analyze the route IP packets take in the network. Both ping and traceroute are part of the toolkit (usually included within modern operating systems) used by the network administrators who operate IP networks.
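An ICMP ECHO_REQUEST like the one ping sends is a small, simple packet. The sketch below only constructs it; actually sending it would require a raw socket and administrator rights, so that part is left out. The checksum is the standard Internet one's-complement sum (RFC 792's ECHO_REQUEST is type 8, code 0).

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Standard Internet one's-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Build an ICMP ECHO_REQUEST: type 8, code 0, checksum, id, sequence.
    The id/seq fields let the sender match replies to requests."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

As with the IP header, a correctly checksummed ICMP packet sums to zero, which is how the receiving end validates it.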

The Internet today offers a wide variety of services. The applications which use the TCP protocol consume most of the network capacity. Most of that TCP traffic is generated by the HTTP protocol used in the World Wide Web and by peer-to-peer networks. Other popular TCP-based protocols in use are NNTP for transporting Usenet news discussions and SMTP for delivering Internet mail. Besides those traditional applications, new bandwidth-hungry applications such as internet telephony (video telephony), audio/video streaming and various peer-to-peer networking technologies are taking an increasing amount of network bandwidth.

    System and network administration and security

    Internet security is the practice of protecting and preserving private resources and information on the Internet. The internet has become more dangerous over the last few years. The amount of traffic is increasing and more important transactions are taking place. With this the risk from people trying to damage, intercept or alter your data grows. Network security threats:

    • Faults in servers (OS bugs, installation mistakes): most common holes utilized by hackers
    • Weak authentication
    • Hijacking of connections (especially with unsecure protocols)
    • Interference: jamming and crashing the servers using for example Denial of Service (DoS) attacks
    • Viruses with wide range of effects
    • Active content with trojans
    • Internal threats
    Computer and network security are challenging topics for the executives and managers of computer corporations. Enterprise management teams are often not aware of the many advances and innovations in Internet and intranet security technology. Without this knowledge, corporations are not able to take full advantage of the benefits and capabilities of the network. Key elements of working data security:
    • Psychological data security
    • Administrative data security
    • Technical data security
    • Physical security
    Together, network security and a well-implemented security policy can provide a highly secure solution. Employees can then confidently use secure data transmission channels and reduce or eliminate less secure methods (unsecure networking practices, photocopying proprietary information, sending sensitive information by fax, placing orders by phone, etc.).

      Internet security protocols

      Security protocols are essential to keep your secret information (like passwords) secret. Examples of unsecure protocols: Telnet passes passwords over the network in clear-text (i.e. unencrypted) form, so a packet sniffer could extract your system's root password. The same applies to the FTP protocol as well. On the web, in normal HTTP sessions the user names and passwords (both those written into forms and those entered in user authentication boxes) are transferred practically in plain-text format over the network. To make your system more secure, the unsecure protocols need to be replaced with more secure ones. Telnet sessions should not be used for system administration anywhere outside a trusted network; use SSH instead of Telnet and you are safe. For file transfers, the use of SCP or SFTP is preferred over unsecure FTP. In web applications where information should be secure, secure web protocols should be used (HTTPS, SSL etc.). In applications where you cannot avoid the use of unsecure protocols, minimize the risk by changing the passwords often (and preferably do the password changes over some secure connection).
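How weak "practically plain text" credentials are is easy to demonstrate with HTTP Basic authentication, which merely base64-encodes the user name and password rather than encrypting them. The credentials below are made up for illustration.

```python
import base64

# HTTP Basic authentication only base64-encodes "user:password".
# Anyone sniffing the wire can reverse this instantly: encoding is
# not encryption, which is why such traffic needs HTTPS/TLS underneath.
header_value = "Basic " + base64.b64encode(b"alice:s3cret").decode()

# An eavesdropper's "attack" is one library call:
recovered = base64.b64decode(header_value.split()[1])
```

The same reasoning applies to Telnet and FTP, where the password bytes are not even encoded, just sent as-is.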

      • How SSL Works - This document explains how Netscape uses RSA public key cryptography for Internet security. Netscape's implementation of the Secure Sockets Layer (SSL) protocol employs the techniques discussed in this document.
      • HTTP Over TLS - SSL and its successor TLS [RFC2246] were designed to provide channel-oriented security. This document describes how to use HTTP over TLS.
      • IP Security Protocol (ipsec) - IPsec is designed to provide cryptographic security services that will flexibly support combinations of authentication, integrity, access control, and confidentiality.
      • Secure Shell (secsh) - The goal of the working group is to update and standardize the popular SSH protocol. SSH provides support for secure remote login, secure file transfer, and secure TCP/IP and X11 forwardings. It can automatically encrypt, authenticate, and compress transmitted data.
      • The SSH Protocol - SSH provides support for secure remote login, secure file transfer, and secure TCP/IP and X11 forwardings. It can automatically encrypt, authenticate, and compress transmitted data. This page has lots of information on the SSH 1.5 and SSH 2 protocols.
      • The SSL Protocol Version 3.0 - This document specifies Version 3.0 of the Secure Sockets Layer (SSL V3.0) protocol, a security protocol that provides communications privacy over the Internet. The protocol allows client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery. SSL is widely used for securing web page access.
      • The TLS Protocol Version 1.0 - This document specifies Version 1.0 of the Transport Layer Security (TLS) protocol. The TLS protocol provides communications privacy over the Internet. The protocol allows client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery. TLS runs on top of TCP/IP: it implements strong cryptographic protection by running an extra protocol on top of a TCP/IP stream, and therefore protects a single TCP/IP session.

      Virtual Private Networks (VPNs)

      A Virtual Private Network, VPN, is a secure "network" built on top of a public/unsecure network. It is called virtual because no new physical connection lines are required. A VPN runs over a network transport protocol such as TCP/IP. Devices sending packets over the VPN do not know it is a virtual network, but they do see it as a different network.

      With the use of cryptography, hosts communicate with each other in a secure manner by exchanging information that is ciphered. All other computers connected to the public network are not able to "interpret" the packets exchanged among VPN servers, although they may actually receive those ciphered packets.

      Secure VPNs (SVPN) use cryptographic tunneling protocols to provide the necessary confidentiality (preventing snooping), sender authentication (preventing identity spoofing), and message integrity (preventing message alteration) to achieve the privacy intended. When properly chosen, implemented, and used, such techniques can provide secure communications over unsecured networks. Such a virtual private network (VPN) is a tool that enables the secure transmission of data over untrusted networks such as the Internet. VPNs are commonly used to connect local area networks (LANs) into wide area networks (WANs) using the Internet. A VPN is "a private data network that makes use of the public telecommunication infrastructure, maintaining privacy through the use of a tunneling protocol and security procedures".

      An Internet-based virtual private network (VPN) uses the open, distributed infrastructure of the Internet to transmit data between corporate sites. There are also technologies (e.g. GRE from Cisco) that are similar to VPN, except that they are 100% transparent (at least in theory). So instead of a new network for the tunnel, the tunnel would be part of an existing network.

      There are three major families of VPN implementations in wide usage today: SSL, IPSec, and PPTP.

      IPsec (IP security) is a standardized framework for securing Internet Protocol (IP) communications by encrypting and/or authenticating each IP packet in a data stream. There are two modes of IPsec operation: transport mode and tunnel mode. In transport mode only the payload (message) of the IP packet is encrypted. It is fully routable, since the IP header is sent as plain text (however, it cannot cross NAT interfaces that change the contents of the IP header). In tunnel mode, the entire IP packet is encrypted. It must then be encapsulated into a new IP packet for routing to work. Tunnel mode is used for network-to-network communications (secure tunnels between routers) or host-to-network and host-to-host communications over the Internet. The IPsec protocol is designed to be implemented as a modification to the IP stack in kernel space. Historically, one of IPsec's advantages has been multi-vendor support. IPsec has very many possible configurations, some of which produce insecure architectures (complexity is the enemy of security).
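The shape of tunnel mode can be illustrated with a toy encapsulation sketch. The XOR "cipher" below is only a placeholder for ESP's real cryptography and provides no security whatsoever, and the outer header bytes are likewise just a stand-in for a real outer IP header; the point is the structure: the entire original packet, header included, travels hidden inside a new outer packet.

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Placeholder 'cipher' (repeating-key XOR). Illustrative only:
    stands in for ESP's real encryption, provides NO security."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def tunnel_encapsulate(inner_packet: bytes, key: bytes,
                       outer_header: bytes) -> bytes:
    """Tunnel mode: encrypt the ENTIRE original IP packet (header and
    all) and prepend a new outer header for routing between gateways."""
    return outer_header + xor_bytes(inner_packet, key)

def tunnel_decapsulate(wire: bytes, key: bytes, outer_len: int) -> bytes:
    """Receiving gateway strips the outer header and decrypts."""
    return xor_bytes(wire[outer_len:], key)
```

Intermediate routers only ever see the outer header, which is why tunnel mode hides the original source and destination addresses, while transport mode (which encrypts only the payload) leaves the real IP header visible.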

      SSL is used either for tunneling the entire network stack, such as in OpenVPN, or for securing what is essentially a web proxy. Although the latter is often called an "SSL VPN" by VPN vendors, many such implementations are not really fully-fledged VPNs. The main architectural advantage of SSL VPNs is that they shed the complexity of IPsec in exchange for the simple, well-tested SSL/TLS structure for their cryptographic layer. SSL VPNs allow users to connect to the central VPN from any machine they happen to find.

      The Point-to-Point Tunneling Protocol (PPTP) is a method for implementing virtual private networks. The PPTP protocol works by sending a regular PPP session to the peer with the Generic Routing Encapsulation (GRE) protocol. A second session on TCP port 1723 is used to initiate and manage the GRE session. PPTP is difficult to forward past a network firewall because it requires two network sessions. While the PPTP protocol has the advantage of a pre-installed client base on Windows platforms, analysis by cryptography experts has revealed security vulnerabilities.

      There are also other ways to build VPN than encryption. Trusted VPN do not use cryptographic tunneling, and instead rely on the security of a single provider's network to protect the traffic. Multi-protocol label switching (MPLS) is commonly used to build trusted VPN. Other protocols for trusted VPN include L2F (Layer 2 Forwarding) developed by Cisco.

      Some large ISPs now offer "managed" VPN service for business customers who want the security and convenience of a VPN but prefer not to undertake administering a VPN server themselves. In addition to providing remote workers with secure access to their employer's internal network, sometimes other security and management services are included as part of the package, such as keeping anti-virus and anti-spyware programs updated on each client's computer.

      • Setting up a VPN Gateway - The VPN firewall discussed in this article will run on just about any 486-or-better PC that has 16MB or more main memory and two Linux-compatible Ethernet network cards. This article shows you how to set up, at minimal expense, a working VPN gateway that uses the IETF's (Internet Engineering Task Force) IPSec (internet protocol security) specification.
      • Virtual Private Networks (VPNs) - This tutorial addresses the basic architecture and enabling technologies of a VPN. The benefits and applications of VPNs are also explored. Finally, this tutorial discusses strategies for the deployment and implementation of VPNs.
      • Why TCP Over TCP Is A Bad Idea - A frequently occurring idea for IP tunneling applications is to run a protocol like PPP, which encapsulates IP packets in a format suited for a stream transport (like a modem line), over a TCP-based connection. This would be an easy solution for encrypting tunnels by running PPP over SSH, for which several recommendations already exist (one in the Linux HOWTO base, one on my own website, and surely several others). It would also be an easy way to compress arbitrary IP traffic, while datagram based compression has hard to overcome efficiency limits. Unfortunately, it doesn't work well. Long delays and frequent connection aborts are to be expected. Here is why.
      • Point-to-point tunneling protocol
      • Virtual private network - description from Wikipedia, the free encyclopedia
      • IPsec
      • OpenVPN - OpenVPN is a full-featured SSL VPN solution which can accommodate a wide range of configurations, including remote access, site-to-site VPNs, WiFi security, and enterprise-scale remote access solutions with load balancing, failover, and fine-grained access-controls.


      IPSEC is Internet Protocol SECurity. It uses strong cryptography to provide both authentication and encryption services. Authentication ensures that packets are from the right sender and have not been altered in transit. Encryption prevents unauthorised reading of packet contents. These services allow you to build secure tunnels through untrusted networks. Everything passing through the untrusted net is encrypted by the IPSEC gateway machine and decrypted by the gateway at the other end. The result is a Virtual Private Network, or VPN. This is a network which is effectively private even though it includes machines at several different sites connected by the insecure Internet.

      VPN Software

      • CIPE - Crypto IP Encapsulation - This is an ongoing project to build encrypting IP routers. It works by tunneling IP packets in encrypted UDP packets. The protocol is designed to be lightweight and simple. CIPE is designed for passing encrypted packets between prearranged routers in the form of UDP packets. This is not as flexible as IPSEC but it is enough for the original intended purpose: securely connecting subnets over an insecure transit network.    Rate this link
      • CIPE-Win32 - Crypto IP Encapsulation for Windows NT/2000 - CIPE-Win32 is a port of Olaf Titz's CIPE package from Linux to Windows NT. It is protocol compatible with versions 1.3.0 and greater of the Linux implementation. It is compatible with Windows NT4.0 SP3 - SP6 and Windows 2000.    Rate this link
      • Kame - KAME Project is a joint effort of seven companies in Japan to provide a free IPv6 and IPsec (for both IPv4 and IPv6) stack for BSD variants to the world.    Rate this link
      • Linux FreeS/WAN - Linux FreeS/WAN is a free implementation of IPSEC & IKE for Linux. FreeS/WAN project's primary objective is to help make IPSEC widespread by providing source code which is freely available, runs on a range of machines including ubiquitous cheap PCs, and is not subject to US or other nations' export restrictions.    Rate this link
      • OpenVPN - OpenVPN is a full-featured SSL VPN solution which can accommodate a wide range of configurations, including remote access, site-to-site VPNs, WiFi security, and enterprise-scale remote access solutions with load balancing, failover, and fine-grained access-controls.    Rate this link

      Building VPN

      • Encrypted Tunnels using SSH and MindTerm HOWTO - MindTerm is an implementation of a secure shell client in pure Java supporting both the ssh1 and the ssh2 protocols. SSH and MindTerm will work together to use a technique called port forwarding, which can be used to implement a Virtual Private Network (VPN). This port-forwarding can only be done with TCP services.    Rate this link

      Network naming and address allocation

      The Dynamic Host Configuration Protocol (DHCP) is an Internet protocol for automating the configuration of computers that use TCP/IP. DHCP can be used to automatically assign IP addresses, to deliver TCP/IP stack configuration parameters such as the subnet mask and default router, and to provide other configuration information such as the addresses for printer, time and news servers. With DHCP the IP address space is efficiently used because IP addresses are "leased" to clients for a limited time and after that "recycled". DHCP eliminates the need for a system administrator to keep a manual log of all IP addresses. The standards on DHCP are RFCs 1541, 1542, 2131 and 2132. The Domain Name System (DNS) is a distributed Internet directory service. DNS is used mostly to translate between domain names and IP addresses, and to control Internet email delivery. Most Internet services rely on DNS to work, and if DNS fails, web sites cannot be located and email delivery stalls.
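As a minimal illustration of the name-to-address translation that DNS performs, the operating system's resolver can be queried directly; this sketch uses Python's standard library purely as an example (the function name `resolve` is made up):

```python
import socket

def resolve(hostname):
    """Translate a domain name to an IPv4 address via the system resolver,
    the same lookup DNS clients perform for every web or mail connection."""
    return socket.gethostbyname(hostname)

# "localhost" is typically mapped to the loopback address 127.0.0.1
print(resolve("localhost"))
```

If the resolver (and hence DNS or the local hosts database) is unavailable, this call raises an error, which is exactly the failure mode described above: without name resolution, web sites cannot be located.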

      Network management

      The most often used network management protocol in modern computer networks is SNMP. SNMP lets TCP/IP-based network management clients exchange detailed information about their configuration and status.

      The Simple Network Management Protocol (SNMP) is an application layer protocol that facilitates the exchange of management information between network devices. It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP enables network administrators to manage network performance, find and solve network problems, and plan for network growth. The Simple Network Management Protocol (SNMP) is a request/response protocol that communicates management information between two types of SNMP software entities: applications and agents.

      An SNMP-managed network consists of three key components: managed devices, agents, and network-management systems (NMSs).

      A managed device is a network node that contains an SNMP agent and that resides on a managed network. Managed devices collect and store management information and make this information available to NMSs using SNMP. Managed devices, sometimes called network elements, can be routers and access servers, switches and bridges, hubs, computer hosts, or printers.

      An agent is a network-management software module that resides in a managed device. An agent has local knowledge of management information and translates that information into a form compatible with SNMP.

      An NMS executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs must exist on any managed network.

      An SNMP community is a group of managed devices and network management systems within the same administrative domain. The SNMP community table enables you to control SNMP access to the device. For security reasons, the SNMP agent validates each request from an application before responding to the request. The validation procedure consists of verifying that the application entity belongs to an SNMP community with access privileges to the agent. When a device receives an SNMP request packet, it compares the SNMP community name in the packet with those in its SNMP community table. If the name is not found, the request is denied and an error is returned. If the name is found, the associated access level is checked and, if the access level allows the request, the request is performed. All message exchanges have an SNMP community name and a data field. The data field contains the SNMP operation and its associated operands.
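The validation procedure described above can be sketched roughly as follows; the community names, access levels, and return strings here are invented for illustration and do not come from any particular agent implementation:

```python
# Hypothetical community table: community name -> access level.
COMMUNITY_TABLE = {
    "public": "read-only",
    "private": "read-write",
}

def validate_request(community, operation):
    """Mimic an agent checking a request's community name, then its access level."""
    access = COMMUNITY_TABLE.get(community)
    if access is None:
        return "error: unknown community"   # name not in table -> request denied
    if operation == "set" and access != "read-write":
        return "error: access denied"       # level does not allow the request
    return "ok"                             # request would be performed

print(validate_request("public", "get"))    # ok
print(validate_request("public", "set"))    # error: access denied
print(validate_request("guest", "get"))     # error: unknown community
```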

      SNMP uses trap messages to provide automatic event notification. Rather than waiting for the SNMP application to query the agent about the event, the agent automatically sends reports (traps) to the application when certain events occur. The network administrator should be aware that trap messages are not guaranteed to be reliable and are not intended to replace polling, only to supplement it. The trap messages require a certain amount of the SNMP agent's resources. Occasionally, trap messages are not delivered due to a lack of these SNMP agent resources. Traps should only be considered hints to the management application that a significant event has occurred in the device. The network administrator should then run polling to get more information on the event.

      The information that SNMP can attain from a network is defined as a MIB (Management Information Base). A Management Information Base (MIB) is a collection of information that is organized hierarchically. MIBs are accessed using a network-management protocol such as SNMP. They are comprised of managed objects and are identified by object identifiers. MIBs are structured like trees. At the top of the tree is the most general information available about a network. Each branch provides more details about specific network areas. The leaves, or end nodes, provide the most detailed information about the network and/or device. A managed object (sometimes called a MIB object, an object, or a MIB) is one of any number of specific characteristics of a managed device. Managed objects are comprised of one or more object instances, which are essentially variables. Two types of managed objects exist: scalar and tabular. Scalar objects define a single object instance. Tabular objects define multiple related object instances that are grouped in MIB tables. An object identifier (or object ID) uniquely identifies a managed object in the MIB hierarchy. The MIB hierarchy can be depicted as a tree with a nameless root, the levels of which are assigned by different organizations. For example, a managed object such as atInput can be uniquely identified either by its textual object name, which ends in variables.AppleTalk.atInput, or by the equivalent numeric object identifier.
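The tree structure and object identifiers described above can be illustrated with a small sketch. The helper names are made up; the OIDs themselves are standard (1.3.6.1.2.1 is the MIB-II subtree, 1.3.6.1.2.1.1.1.0 is the sysDescr.0 instance):

```python
def parse_oid(text):
    """An OID is the path from the nameless root: turn the dotted
    notation into a tuple of integers, one per tree level."""
    return tuple(int(part) for part in text.split("."))

def is_under(oid, subtree):
    """True if 'oid' lies in the subtree rooted at 'subtree' --
    i.e. the subtree's path is a prefix of the object's path."""
    return oid[:len(subtree)] == subtree

mib2 = parse_oid("1.3.6.1.2.1")             # the standard MIB-II subtree
sys_descr = parse_oid("1.3.6.1.2.1.1.1.0")  # sysDescr.0, a leaf instance

print(is_under(sys_descr, mib2))            # True: sysDescr sits under MIB-II
```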

      A MIB is often referred to as a database. A MIB is not a database. A MIB is a file, written in a specific language that lists variables. It assigns each variable a name, a number, and a set of permissions. It may also provide a description of what the variable is supposed to represent. Since everything in SNMP is an action on a variable, this is very important.

      Two versions of SNMP exist: SNMP version 1 (SNMPv1) and SNMP version 2 (SNMPv2). Both versions have a number of features in common, but SNMPv2 offers enhancements, such as additional protocol operations. Standardization of yet another version of SNMP, SNMP Version 3 (SNMPv3), is pending.

      SNMP version 1 (SNMPv1) is the initial implementation of the SNMP protocol. It is described in Request For Comments (RFC) 1157 and functions within the specifications of the Structure of Management Information (SMI). SNMPv1 operates over protocols such as User Datagram Protocol (UDP), Internet Protocol (IP), OSI Connectionless Network Service (CLNS), AppleTalk Datagram-Delivery Protocol (DDP), and Novell Internet Packet Exchange (IPX). SNMPv1 is widely used and is the de facto network-management protocol in the Internet community.

      An SNMP operation takes the form of a Protocol Data Unit (PDU), basically a fancy word for packet. Version 1 SNMP supports five possible PDUs:

      • GetRequest / SetRequest supplies a list of objects and, in the case of SetRequest, the values they are to be set to. In either case, the agent returns a GetResponse.
      • GetResponse informs the management station of the results of a GetRequest or SetRequest by returning an error indication and a list of variable/value bindings.
      • GetNextRequest is used to perform table traversal, and in other cases where the management station does not know the exact MIB name of the object it desires. GetNextRequest does not require an exact name to be specified; if no object exists of the specified name, the next object in the MIB is returned. Note that to support this, MIBs must be strictly ordered sets (and are).
      • Trap is the only PDU sent by an agent on its own initiative. It is used to notify the management station of an unusual event that may demand further attention (like a link going down). In version 2, traps are named in MIB space. Newer MIBs specify management objects that control how traps are sent.
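The PDU semantics above can be sketched with a toy in-memory agent. This performs no real ASN.1 encoding or UDP transport, and the MIB contents are invented for illustration; it only shows how exact-name Get and ordered GetNext traversal behave:

```python
# Invented MIB contents for a toy agent; keys are standard-looking OIDs.
MIB = {
    "1.3.6.1.2.1.1.1.0": "Example router",  # sysDescr-style entry (value made up)
    "1.3.6.1.2.1.1.3.0": 12345,             # sysUpTime-style entry (value made up)
}

def oid_key(oid):
    """Compare OIDs level by level, as the MIB's strict ordering requires."""
    return tuple(int(p) for p in oid.split("."))

def get(oid):
    """GetRequest: an exact name is required; otherwise the agent errors."""
    return MIB.get(oid, "noSuchName")

def get_next(oid):
    """GetNextRequest: return the first object after 'oid' in MIB order,
    so a manager can walk a table without knowing exact names."""
    for candidate in sorted(MIB, key=oid_key):
        if oid_key(candidate) > oid_key(oid):
            return candidate
    return "endOfMib"

print(get("1.3.6.1.2.1.1.1.0"))       # Example router
print(get_next("1.3.6.1.2.1.1.1.0"))  # 1.3.6.1.2.1.1.3.0
```

Repeatedly feeding each returned name back into `get_next` is exactly the table traversal described above; the walk ends when the agent signals the end of the MIB.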

      SNMP is a simple request/response protocol. The network-management system issues a request, and managed devices return responses. This behavior is implemented by using one of four protocol operations: Get, GetNext, Set, and Trap. SNMP must account for and adjust to incompatibilities between managed devices. Different computers use different data representation techniques, which can compromise the capability of SNMP to exchange information between managed devices. SNMP uses a subset of Abstract Syntax Notation One (ASN.1) to accommodate communication between diverse systems.

      The SNMPv1 SMI specifies the use of a number of SMI-specific data types, which are divided into two categories: simple data types and application-wide data types. Three simple data types are defined in the SNMPv1 SMI, all of which are unique values: integers, octet strings, and object IDs. The integer data type is a signed integer in the range of -2,147,483,648 to 2,147,483,647. Octet strings are ordered sequences of 0 to 65,535 octets. Object IDs come from the set of all object identifiers allocated according to the rules specified in ASN.1. Seven application-wide data types exist in the SNMPv1 SMI: network addresses, counters, gauges, time ticks, opaques, integers, and unsigned integers. Network addresses represent an address from a particular protocol family. SNMPv1 supports only 32-bit IP addresses.
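The ranges quoted above are easy to check mechanically; this small sketch (the function names are made up) encodes the SNMPv1 SMI limits for the Integer and Octet String simple types:

```python
# SNMPv1 SMI limits as quoted above.
INT_MIN, INT_MAX = -2_147_483_648, 2_147_483_647  # signed 32-bit integer
OCTETS_MAX = 65_535                               # octet string length limit

def valid_integer(value):
    """True if 'value' fits the SMI Integer type."""
    return INT_MIN <= value <= INT_MAX

def valid_octet_string(data: bytes):
    """True if 'data' is a legal SMI octet string (0 to 65,535 octets)."""
    return 0 <= len(data) <= OCTETS_MAX

print(valid_integer(2_147_483_647))   # True: the largest legal Integer
print(valid_integer(2_147_483_648))   # False: overflows the SMI Integer
print(valid_octet_string(b""))        # True: zero-length strings are allowed
```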

      SNMP version 2 (SNMPv2) is an evolution of the initial version, SNMPv1. Originally, SNMPv2 was published as a set of proposed Internet standards in 1993; currently, it is a draft standard. As with SNMPv1, SNMPv2 functions within the specifications of the Structure of Management Information (SMI). In theory, SNMPv2 offers a number of improvements to SNMPv1, including additional protocol operations. The Structure of Management Information (SMI) defines the rules for describing management information, using ASN.1.

      Network management software

      • Active SNMP - network management software with a Web-browser interface implemented in Java    Rate this link
      • CMU SNMP - SNMP agent and applications to Linux, with enhancements to the MIB-2 group    Rate this link
      • HP OpenView - probably the most well known commercial network management software    Rate this link
      • Linux CMU SNMP Project    Rate this link
      • SNMP module for Apache web server - The idea behind this module is that ISPs, webhosting sites, colocation sites, etc. can not only monitor Apache, but also control in real time some important values via Simple Network Management Protocol (SNMP). SNMP is the well-known management framework for the Internet letting hardware (such as routers, bridges and modems) and software (such as operating systems, network layers and applications) provide their status. This SNMP module might enable someone not only to detect when a service is hanging, spinning out of control, etc., but also to make it possible to make some reconfiguration changes so that the server as such, and any other services it renders for perhaps other customers, are not affected that much. Another much needed use is switching "ON" extensive logging dynamically for a short period of time to investigate a problem.    Rate this link
      • Network Management Using Free Software - Meta-Resources and Individual Software Packages    Rate this link
      • The Simpleweb - Freely available SNMP / Network Management Software    Rate this link
      • The NET-SNMP Home Page - extensible agent, SNMP library, SNMP tools, netstat using SNMP, graphical Perl/Tk/SNMP based mib browser, previously known with name UCD-SNMP    Rate this link
      • WebSNMP - WebSNMP is a perl module for Apache Web Servers that allows web developer to easily insert snmp functionality into web pages with simple html tags.    Rate this link

      Networking security

      Computer security takes on more importance as commerce becomes e-commerce and business embraces the Net. Popular press coverage of computer security orbits around basic technology issues such as what firewalls are, when to use the DES encryption algorithm, which anti-virus product is best, or how the latest email-based attack works. The problem is, many security practitioners don't know what the problem is. What does security mean in the Internet context? Quite simply: ensuring the availability of service, authenticating users and their data, and protecting the confidentiality and integrity of data. Securing a network requires many different pieces of a very large puzzle. Some security is provided at the firewall. Some security is provided by user authentication at the server. Some security is done by Intrusion Detection Systems. Still other security is created on the network hardware itself. One large part of the problem puzzle is the software running in computers connected to the network. Internet-enabled software applications, especially custom applications, present the most common security risk encountered today, and are the target of choice for hackers. A sniffer is a program and/or device that monitors data traveling over a network. Sniffers can be used both for legitimate network management functions and for stealing information off a network. Unauthorized sniffers can be extremely dangerous to a network's security because they are virtually impossible to detect and can be inserted almost anywhere. This makes them a favorite weapon in the hacker's arsenal. On TCP/IP networks, where they sniff packets, they're often called packet sniffers. For safety reasons any system which can be directly accessed from untrusted networks (and the Internet is an excellent example of one of these "untrusted networks") should be placed on an isolated network segment.
This way you can implement special filtering rules which will prevent the system from being used to attack your internal hosts in the event it is compromised. The preceding is true no matter what operating system you choose to run on your hosts/routers/firewalls/etc. A large percentage of malicious traffic is focused on a small number of vulnerabilities and their associated ports. The fast spread of network worms and other malware has forced Internet Service Providers (ISPs) into implementing packet filtering. Blocking some of these ports will isolate infected machines and slow the spread of malicious, autonomous code such as worms. In some cases, this is the only way to keep the network operating, but it has become common to block certain ports permanently even after the threat diminishes. However, the vulnerable services used by these worms do have legitimate uses. If secured properly, they can be used without the risk of infection. It is generally a good idea to block ports commonly used for Microsoft File Sharing and related services; specifically, ports 135, 137, 139, and 445. These ports and, in particular, Microsoft File Sharing, draw a lot of attention from malware authors. Microsoft does not recommend use of these services across a public network, and in fact, Microsoft advocates blocking traffic on these ports as a best practice. Filtering ports 135, 137, 139, and 445 will reduce malicious traffic. Port filters are not perfect for all security problems. In particular, the limited filters listed above leave plenty of room for other vulnerabilities. However, these ports account for a large percentage of malicious activity. There are three disparate levels of security you need to consider, and it is advisable to take the following approach to the common network problem in most companies:
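As a rough sketch of the port filtering described above, the four Microsoft file-sharing ports can be dropped with Linux iptables. The Internet-facing interface name eth0 is an assumption for this sketch; adapt the rules to your own ruleset before using them:

```shell
#!/bin/sh
# Drop inbound Microsoft file-sharing traffic (ports 135, 137, 139, 445)
# arriving on the external interface, both to this host and through it.
for port in 135 137 139 445; do
    iptables -A INPUT   -i eth0 -p tcp --dport "$port" -j DROP
    iptables -A INPUT   -i eth0 -p udp --dport "$port" -j DROP
    iptables -A FORWARD -i eth0 -p tcp --dport "$port" -j DROP
    iptables -A FORWARD -i eth0 -p udp --dport "$port" -j DROP
done
```

These are the "limited filters" discussed above: they cut a large share of worm traffic but are no substitute for patching and the other layers of defense.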

      • Block the company internal network so that nobody outside can access the computers inside the network directly. Access to the company internal network is by default only possible from devices that are inside the same physical network (usually inside one building or a larger company network).
      • For employees and others who have trusted access to your network, the answer is not to poke holes in your firewall. Rather, the answer is VPN. By setting up a secure, encrypted, authenticating channel, you bring your trusted users into your network. From your point of view and theirs, it is as if their machines were physically located on the other side of your firewall--just like having the machines right in your building. Using a VPN in this case prevents random hackers from entering your network on these levels.
      • For business partners and contractors who need limited access to a subset of services, but whom you do not trust fully, the answer is quite likely also a VPN, but not directly into your company internal network. For services provided to these people, you want everything from your end first going through application-level firewalls, and then through the VPN, over the Internet, to them. Using a VPN in these cases prevents random hackers from entering your network. The application-level firewall will ensure those partners can access only the limited part of your network you want them to have access to and nothing else.
      • Many companies also need some publicly available services. For the general public who simply need access to your web site, the ideal situation is to simply host the web site on a network entirely separate from yours (possibly at some service provider premises). Use an application-level firewall to help prevent things like buffer overflows. If your web server needs to retrieve information from other systems on your network, have it communicate over a VPN, just like business partners.
      By following this approach, you expose nothing more than is necessary to the world, and greatly mitigate the risk of intrusion. In the VPN approach from an outside office directly to the office network there is one thing to consider if people have access to the company network from their home computers: you cannot trust employees' computers at home, even if you can trust the employees - if they are running Windows, they are potential virus and worm vectors, and need to be shielded off, so a simple VPN solution is no solution. Whatever you do, keep it simple. Do not trust a too complicated system. Closed systems are more secure than ones that everybody can access. So keeping the system as closed as you can is good advice. And keep your software patched for the latest bugs - keep an eye on the security-update service for your distro/OS and bugtraq. Remember software updates. The boring part, but the most critical. The beauty of the traditional firewall is its simplicity. It's reliable and secure, and easy to understand/debug. One commonly used traditional firewall software in Linux systems is the built-in iptables functionality. It's a sharp tool, so be careful - but correctly applied, it kicks the pants off most application or appliance firewalls. Invest the time to learn the sharp tool, and you'll realize that most of what you pay for on big expensive firewalls is manageability (i.e. Java GUIs, wizards, databases, multiple systems preconfigured - IDS, firewall, proxy, etc). Application layer firewalls are another layer above port filtering. They can increase security and could, in theory, make it worthwhile to share a service hosted on a machine that is inside your network. Application firewalls and filters are complex systems. To the user this means more can go wrong, more holes can be found. The problem here is that application-level firewalling is fraught with problems.
The lack of intuitive management for this type of firewalling is a problem that quite a few companies are trying to solve -- with limited success, so far. The field of firewalling is getting more complicated every day. Originally, the rules were dead simple. One port == one protocol. Some protocols used multiple ports, but even then it was kept nice and simple. But no, not everybody liked this situation. Nowadays there are many other protocols people see fit to encapsulate in HTTP (RDP / Terminal Services, instant messaging, etc). Application layer firewalls are becoming must-have items in this kind of environment. A two-or-more-firewalls approach is becoming common. Traditional packet filtering firewalls are absolutely necessary, but they must become much more widely distributed inside large networks in order to be effective. The same applies to application filtering technologies and all the other stuff people think of as perimeter defenses. Any attempt to set up large networks as controlled domains with known security characteristics is a losing battle. The world seems to be going to endpoint-driven security. A lot of companies are working on making this manageable and cost-effective. As long as you have machines on your network that can hit external web sites or have floppy drives or unauthorized wireless access points, your internal network is insecure and open. Most network designs assume that once you get in to the "internal network" there is no more security and all your deepest company secrets are available to anyone browsing around. If this is true, you've probably made some bad decisions somewhere along the way and you should address those before you open any holes. Anyone can decently secure a network that doesn't interact with anything; the real trick is allowing business to flow as usual and still have an acceptable level of security.
You can mitigate the danger by using a set of consistent criteria for each of your requirements, like a checklist. For example:
      • Is the service mission-critical?
      • Can the service be offered through a less-vulnerable channel?
      • Is there a way to move the service into a perimeter network (or outside entirely)? Even if this means synchronizing a set of data to an outside machine via cron, if the data on the machine is less important than the internal network security, this can help.
      • Once the user is connected, authenticated, and has access, what can go wrong? What could they do maliciously? What could they do accidentally?
      When thinking of security, remember that you won't be able to think of everything. No security model is complete without behind-the-wall systems, be they basic monitoring systems up through more sophisticated custom snort or proprietary IDS. It all depends on your paranoia level.
      • An Analysis of the RADIUS Authentication Protocol - RADIUS is currently the de-facto standard for remote authentication. It is commonly used for embedded network devices such as routers, modem servers, switches, etc. RADIUS facilitates centralized user administration. RADIUS consistently provides some level of protection against a sniffing, active attacker. RADIUS is uniformly supported. RADIUS's primary competition for remote authentication is TACACS+ and LDAP.    Rate this link
      • An Architectural Overview of UNIX Network Security - UNIX network security architecture based on the Internet connectivity model and Firewall approach to implementing security    Rate this link
      • Center for Internet Security - The Center for Internet Security is a not-for-profit cooperative enterprise assisting network users and operators, and their insurers and auditors, to reduce the risk of significant disruptions of electronic commerce and business operations due to technical failures or deliberate attacks.    Rate this link
      • CERT - The CERT Coordination Center (CERT/CC) is a center of Internet security expertise. It is located at the Software Engineering Institute, a federally funded research and development center operated by Carnegie Mellon University.    Rate this link
      • CISSP.COM - web portal for the certified information systems security professionals, intended to promote the CISSP Certification, share knowledge and communication amongst certified information system security professionals and to help information security professionals who are seeking to become CISSPs    Rate this link
      • Denial of Service Attacks - This document provides a general overview of attacks in which the primary goal of the attack is to deny the victim(s) access to a particular resource. Included is information that may help you respond to such an attack.    Rate this link
      • eSecurityOnline - security information portal    Rate this link
      • Exposing hackers    Rate this link
      • FAQ: Firewall Forensics (What am I seeing?) - document explains what you see in firewall logs, especially what port numbers means, document intended for both security-experts maintaining corporate firewalls as well as home users of personal firewalls    Rate this link
      • FAQ: Network Intrusion Detection Systems    Rate this link
      • Firewalls Complete - full book on-line    Rate this link
      • Firewall Security Case Study    Rate this link
      • Fraud Analysis in IP and Next-Generation Networks - Fraud management systems (FMSs) are designed to detect, manage, and assist in the investigation of fraudulent events. This tutorial discusses the crucial role played by FMSs in the containment of next-generation fraud, the vulnerability of Internet protocol (IP)-based technologies, effective data analysis and algorithms, and solution methodologies. The open and distributed nature of convergent and next-generation network (NGN) architecture enables easy access to services, information, and resources, together with the constant abuse of hackers, curious individuals, fraudsters, and organized crime units.    Rate this link
      • Guard your embedded secrets - As manufacturers eagerly roll out limited-resource network appliances, the embedded world is poised to duplicate desktop-security nightmares but with even greater consequences.    Rate this link
      • How Firewalls Works    Rate this link
      • Internet Security Glossary    Rate this link
      • Internet Security Issues - DSL offers consumers many benefits such as high-speed connections from 10 to 100 times faster than dial-up, simultaneous voice and data over the same phone line and choice of ISP. DSL also provides consumers with an "always-on" connection, which means consumers can maintain their DSL Internet connections 24 hours a day, seven days a week. Anybody who establishes a dial-up or "always-on" Internet connection incurs some security risk stemming from the duration of the network connection rather than the access method. A number of standard measures are available that users can apply to protect themselves.    Rate this link
      • IP-spoofing Demystified - The purpose of this paper is to explain IP-spoofing to the masses. It assumes little more than a working knowledge of Unix and TCP/IP.    Rate this link
      • Payback time! How to catch a hacker    Rate this link
      • RADIUS Authentication - RADIUS stands for Remote Authentication Dial In User Service. There are two specifications that make up the RADIUS protocol suite: Authentication and Accounting. These specifications aim to centralize authentication, configuration, and accounting for dial-in services to an independent server. You probably used RADIUS to get online to surf the web if you obtain access through a dialup account. Your communications software sent your username and password to a terminal server. The terminal server in turn sent this information to a RADIUS server.    Rate this link
      • RFC2504: Users' Security Handbook - Document intended to provide users with the information they need to help keep their networks and systems secure.    Rate this link
      • Securityfocus - discussion on security related topics, create security awareness, and to provide the Internet's largest and most comprehensive database of security knowledge and resources to the public    Rate this link
      • Sniffing (network wiretap, sniffer) FAQ - This document answers questions about eavesdropping on computer networks (a.k.a. "sniffing").    Rate this link
      • Snort: Planning IDS for Your Enterprise - Snort is a free, small, highly configurable and portable network-based IDS or NIDS. Additionally, Snort can be used as a packet sniffer and a packet logger. This article tells how to use it.    Rate this link
      • The CISSP and SSCP Open Study Guides Web Site - a site dedicated to helping people in achieving their goal of becoming a CISSP or SSCP, a vast container of resources that can assist you in mastering the domains of the specific Common Body of Knowledge related to each of the above certifications    Rate this link
      • The Security Writers Guild - contemporary computer-related news items, latest exploits & bugs, voting polls, links, and the general buzz    Rate this link
      • The Secure Shell Frequently Asked Questions - Secure Shell is a program to log into another computer over a network, to execute commands in a remote machine, and to move files from one machine to another. It provides strong authentication and secure communications over unsecure channels. It is intended as a replacement for telnet, rlogin, rsh, and rcp. For SSH2, there is a replacement for FTP: sftp.    Rate this link
      • Using IPSec (IP Security Protocol)    Rate this link

    Software related to information security


      • The Wild, Wild, Net - Not unlike the Wild, Wild West of yore, today's Internet is a place rife with opportunities for the wily cyber outlaw. The new sheriffs in town, security protocols such as IPSec, are ready for a high noon showdown with criminals armed not with muskets and rifles, but with viruses and a basic knowledge of computer technology.    Rate this link

      On-line security tests

      Other security tests

      • PatchWork Tool - Program to check if Windows NT system is vulnerable to the attack and whether it has the files that indicate it has already been compromised. PatchWork checks for the vulnerabilities listed by the FBI, and if any are found, points you directly to the Microsoft patches.    Rate this link

      Intrusion detections

      • Snort - Snort is a lightweight network intrusion detection system, capable of performing real-time traffic analysis and packet logging on IP networks. It can perform protocol analysis, content searching/matching and can be used to detect a variety of attacks and probes, such as buffer overflows, stealth port scans, CGI attacks, SMB probes, OS fingerprinting attempts, and much more.    Rate this link

      Secure Shell

      Secure Shell, SSH, is a cryptographic security tool used to make reliable use of public channels such as the Internet when establishing interactive connections with a server. SSH is a secure replacement for remote login and file transfer programs like telnet, rsh, rlogin, rexec, rcp and ftp, which transmit data and passwords in clear, human-readable text. SSH uses public-key authentication to establish an encrypted and secure connection from the user's machine to the remote machine, and provides various levels of authentication depending on your needs. Main features of Secure Shell include secure remote logins, file copying, and tunneling of TCP and X11 traffic. SSH uses TCP for its transport.

    Measuring IP networks

    • Measuring IP Network Performance - If you are involved in the operation of an IP network, a question you may hear is: "How good is your network?" Or, to put it another way, how can you measure and monitor the quality of the service that you are offering to your customers? And how can your customers monitor the quality of the service you provide to them?    Rate this link

    Routing and packet forwarding technologies

    Routing is the technique by which data finds its way from one host computer to another.

    The role of the router is to move packets. The basic functions of the router are done through the following steps:

    • 1. Receive the packet.
    • 2. Perform additional services to the packet. For example, tagging the ToS field, or changing the source or destination IP address and so on.
    • 3. Determine how to get to the destination of the packet.
    • 4. Determine the next hop toward the destination and which interface to use.
    • 5. Rewrite the Media Access Control (MAC) header so that the packet can reach its next hop.
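
    The route lookup in steps 3 and 4 boils down to a longest-prefix match against the routing table. A minimal sketch in Python using the standard ipaddress module (the routing table entries, next-hop addresses and interface names are made-up examples, not any real router's API):

```python
import ipaddress

# Hypothetical routing table: (prefix, next hop, outgoing interface)
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"),   "192.0.2.1",  "eth0"),  # default route
    (ipaddress.ip_network("10.0.0.0/8"),  "10.255.0.1", "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.255.1", "eth2"),
]

def lookup(destination: str):
    """Return (next_hop, interface) for the most specific matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [r for r in ROUTES if dest in r[0]]
    if not matches:
        return None
    best = max(matches, key=lambda r: r[0].prefixlen)  # longest prefix wins
    return best[1], best[2]

print(lookup("10.1.2.3"))   # matched by the most specific /16 route
print(lookup("8.8.8.8"))    # falls back to the default route
```

    Real routers keep the table in a trie or hardware TCAM rather than scanning a list, but the selection rule is the same: of all matching prefixes, the longest one decides the next hop.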

    Process switching, sometimes referred to as punting, is the slowest and oldest IP routing technique. During process switching, the forwarding decision is based on the Routing Information Base (RIB), and the information necessary for the MAC rewrite is taken from the ARP cache. Depending on the configuration, additional services might also be performed. Process switching normally runs as a normal process in the CPU and competes for system resources with the rest of the processes. This is how routing normally works on UNIX computers, Linux systems and Windows PCs that support routing.

    Over the years, various switching methods were devised to overcome the performance limitation of process switching.

    Layer 3 switching

    Switching is defined as the ability to forward packets on the fly through a cross point matrix, a high speed bus, or a shared memory arrangement. As a packet enters the switch, either the source and destination addresses or just the destination address is examined. This examination determines the switching action to be taken for the packet. Since the address fields are the only fields examined, there is minimal delay, and the packet is switched to the destination address segment (port) before it is received in its entirety. IP service switches are devices generally deployed at the edge of the network to perform a range of functions, such as aggregating IP flows, provisioning, firewalls and security. Service switches are used to implement functionality like virtual private networks (VPNs), firewalls, security and quality of service (QoS) traffic management. IP core switches are used in the core network to handle very high speed IP traffic.

    IP tunneling

    IP tunneling is a process to carry an IP packet inside another IP packet. In this process an IP datagram is encapsulated (carried as payload) within another IP datagram (or some other protocol). Encapsulation is suggested as a means to alter the normal IP routing for datagrams, by delivering them to an intermediate destination that would otherwise not be selected by the (network part of the) IP Destination Address field in the original IP header. The process of encapsulation and decapsulation of a datagram is frequently referred to as "tunneling" the datagram, and the encapsulator and decapsulator are then considered to be the "endpoints" of the tunnel; the encapsulator node is referred to as the "entry point" of the tunnel, and the decapsulator node is referred to as the "exit point" of the tunnel. Tunneling is typically needed for special applications where packets need to be routed on a special route (for example in Mobile IP) and when building Virtual Private Networks (VPNs). Sometimes tunneling can be used to pass data through firewalls. There are many different ways to tunnel IP traffic.
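
    The encapsulation step can be sketched concretely: the tunnel entry point prepends a fresh outer IPv4 header, with the protocol field set to 4 (IP-in-IP, as in RFC 2003) and the outer source/destination set to the tunnel endpoints. The sketch below builds such a header with Python's standard struct and ipaddress modules; the endpoint addresses are made-up documentation addresses, and a real implementation would also handle options, fragmentation and TTL policy:

```python
import struct
import ipaddress

def ipv4_checksum(header: bytes) -> int:
    """Internet checksum: one's-complement sum of 16-bit big-endian words."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encapsulate(inner: bytes, entry_point: str, exit_point: str) -> bytes:
    """Wrap an IP datagram in an outer IPv4 header with protocol 4 (IP-in-IP)."""
    src = ipaddress.ip_address(entry_point).packed   # tunnel entry (encapsulator)
    dst = ipaddress.ip_address(exit_point).packed    # tunnel exit (decapsulator)
    # version/IHL, ToS, total length, ID, flags+fragment offset, TTL,
    # protocol (4 = IP-in-IP), checksum placeholder, source, destination
    header = struct.pack("!BBHHHBBH4s4s",
                         0x45, 0, 20 + len(inner), 0, 0, 64, 4, 0, src, dst)
    header = header[:10] + struct.pack("!H", ipv4_checksum(header)) + header[12:]
    return header + inner

# Encapsulate a dummy 20-byte inner datagram between two made-up endpoints.
outer = encapsulate(b"\x45" + b"\x00" * 19, "198.51.100.1", "203.0.113.9")
print(len(outer), outer[9])   # 40 bytes total; outer protocol field is 4
```

    The exit point simply strips the first 20 bytes and forwards the inner datagram, which still carries the original destination address.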

      Using and setting up tunnels

      Other related links

      • Access VPNs and IP Security Protocol Tunneling Technology Overview - document from Cisco Systems    Rate this link
      • Why TCP Over TCP Is A Bad Idea - A frequently occurring idea for IP tunneling applications is to run a protocol like PPP, which encapsulates IP packets in a format suited for a stream transport (like a modem line), over a TCP-based connection. This would be an easy solution for encrypting tunnels by running PPP over SSH, for which several recommendations already exist (one in the Linux HOWTO base, one on my own website, and surely several others). It would also be an easy way to compress arbitrary IP traffic, while datagram based compression has hard to overcome efficiency limits. Unfortunately, it doesn't work well. Long delays and frequent connection aborts are to be expected. Here is why.    Rate this link

    Protocols and services information

      General information

      • Internet Protocol - good introduction to IP itself    Rate this link
      • Protocol Numbers - In the Internet Protocol version 4 (IPv4) [RFC791] there is a field, called "Protocol", to identify the next level protocol. This is an 8 bit field. In Internet Protocol version 6 (IPv6) [RFC1883] this field is called the "Next Header" field. IANA has standardized the use of this field.    Rate this link

      IP over Ethernet

      • Address Resolution Protocol (ARP) - In an Ethernet or Token Ring network, any station wishing to communicate with another station must know the DLC address of the Network Interface Card that will next receive the data frames. This presents a unique problem to higher layer protocol stacks because they must find some means of relating their Network Layer Addresses to the actual DLC address of the destination NIC. This essay discusses the purpose and functionality of TCP/IP's answer to this problem, Address Resolution Protocol (ARP)    Rate this link
      • ARP: Address Resolution Protocol - A good introduction to ARP protocol.    Rate this link
      • ARP, Address Resolution Protocol - Technical information on ARP protocol.    Rate this link
      • Behavior of Gratuitous ARP in Windows NT 4.0 - Gratuitous ARP, also called a courtesy ARP, is a mechanism used by TCP/IP computers to "announce" their IP address to the local network and, therefore, avoid duplicate IP addresses on the network. Routers and other network hardware may use cache information gained from gratuitous ARPs.    Rate this link


      E-mail is a very widely used Internet service and an integral part of business and everyday life today. The history of e-mail is quite long: the first e-mail program was developed in the early 1970s, but for two decades the technology was hardly used except by computer scientists, researchers and hobbyists. The growing popularity of personal computers, converging with easy access to the Internet in the mid-1990s, made e-mail a truly pervasive way to communicate at work, with family and with friends. E-mail's popularity has produced one very troubling side effect: spam. Unsolicited commercial e-mail is a spreading plague, because it is a cheap way to send a message to millions of people. Generally unwanted (and often pornographic or fraudulent in intent), spam is a nuisance and a distraction. Spam is sent out cheaply by using someone else's computer and network resources. The companies that market tools or services for sending spam say that you can literally contact millions of people for pennies on the dollar compared to traditional marketing. Unsolicited Commercial Email (UCE) is not illegal in the United States today (this could change in the future), but there are details you need to understand before you venture down that path: there are many current and pending state and federal laws that affect you. Even though you get your message to a large number of people, nothing guarantees that even a very small fraction of them will buy your product. One thing that is guaranteed is that a large number of people will be angry because you are filling their mailboxes with messages they do not want.

      Unsolicited commercial e-mail (spam) is a serious ethical, moral, and legal issue. Spam is a drain on productivity, an increasingly costly waste of time and resources for Internet service providers and for businesses. Many companies nowadays use spam filters to get rid of most spam, but some still gets through. The real economic burden is the 10 to 15 percent of spam that still gets through, which easily costs the company about $1 in lost productivity per message. You can't just hit the delete button: you have to figure out that it's spam first, which involves viewing the message and reading at least part of it. The work time used for this can easily cost $1 per minute. As everyone struggles to sift spam out of their inboxes, a lot of time is lost and valid messages are sometimes overlooked or deleted. The volume of spam is increasing exponentially, and some people fear that it will reach a point where it starts choking up e-mail entirely. There are efforts to get rid of the spam problem. Every major Internet service provider (ISP) has rules against spamming, and ISPs are shutting down those who violate their anti-spam account policies. Government and industry working together can also put an end to spammers' deceptive practices. Users of e-mail programs nowadays often use spam filtering technologies that try to stop spam. Part of the challenge in curbing spam lies in accurately identifying legitimate commercial e-mail.

      There are several protocols that are used in modern Internet e-mail systems. The Internet e-mail infrastructure is based on SMTP (Simple Mail Transfer Protocol), which defines the communication between mail servers. SMTP is also often used by e-mail client programs to send out mail to the mail server. Reading mail can happen differently on different systems. On UNIX systems and similar, local mail programs generally access the mail storage in the computer directly. If you are reading mail from another server, there are special protocols for this; the most popular are IMAP and POP. In practice the difference between IMAP and POP is that POP downloads the mail messages onto the client computer's hard disk. POP is a good choice for home users, but not for public computers. IMAP acts like a terminal: messages are only viewed on the screen, nothing is stored locally. This is a great advantage if you use the same e-mail account both at home and at work. IMAP can use the SSL protocol (Secure Sockets Layer), which is quite secure.
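
      The SMTP exchange between client and mail server is a simple line-based dialogue. The sketch below generates the command sequence one mail transaction would send (the host and mailbox names are made-up examples; a real client, such as Python's smtplib, would also read the server's numeric reply after each command and perform dot-stuffing on body lines that start with "."):

```python
def smtp_commands(sender, recipients, body_lines, helo="client.example.com"):
    """Return the client-side SMTP command sequence for one mail transaction."""
    cmds = [f"HELO {helo}", f"MAIL FROM:<{sender}>"]
    cmds += [f"RCPT TO:<{rcpt}>" for rcpt in recipients]
    # After DATA, the message (headers, blank line, body) follows,
    # terminated by a line containing only ".".
    cmds += ["DATA"] + list(body_lines) + [".", "QUIT"]
    return cmds

seq = smtp_commands("alice@example.com", ["bob@example.org"],
                    ["Subject: hello", "", "Hi Bob!"])
for line in seq:
    print(line)
```

      POP and IMAP sessions follow the same request/reply style but in the other direction: the client fetches or views messages instead of submitting them.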

      Other protocols

    QoS on IP networks

    The Internet was originally designed to be a best-effort data network which tries its best to deliver the data packets, but does not guarantee any performance for anybody. Traditionally the network bandwidth has been shared more or less uniformly between users. This is a working approach for many applications, but does not satisfy all uses. By adding Quality of Service (QoS) features to the Internet infrastructure it will be possible to give guaranteed service levels to users that need them (and pay for them). This technology is coming, but is not yet ready and widely deployed at the Internet Protocol (IP) level. Before discussing QoS it is best to have some definition of the term. By one definition, QoS is: "A collective measure of the level of service delivered to the customer. QoS can be characterized by several basic performance criteria, including availability (low downtime), error performance, response time and throughput, lost calls or transmissions due to network congestion, connection set-up time, and speed of fault detection and correction." A key element of this definition is that QoS can be determined by any one of a number of parameters, or any combination of those parameters. Equally important, QoS will not be defined in the same way for all services, because different services have different needs. For the transmission of time sensitive information (voice, audio, video, etc.) over a network, Quality of Service is often defined in terms of the following parameters:

    • Bandwidth: The maximum data rate supported by a networking technology. Bandwidth indicates the theoretical maximum capacity of a connection.
    • Latency: Delay in a transmission path or in a device within a transmission path
    • Jitter: The distortion of the timing of data as it is propagated through the network, where the signal varies from its original reference timing and packets do not arrive at their destination in consecutive order or on a timely basis, i.e. they vary in latency. Also referred to as delay variance. In packet-switched networks, jitter is a distortion of the interpacket arrival times compared to the interpacket times of the original transmission.
    • Packet Error Rate: The rate at which the end user application receives a packet that differs from the packet as it was originally sent.
    These parameters usually have some connection to each other. As the theoretical bandwidth is approached in data transfer, negative factors such as transmission delay or increased jitter can cause deterioration in quality. Packet error rate may differ from the rate at which the medium causes packet errors, because mechanisms such as packet retry and error correction can be used to reduce this basic packet error rate. When packet retries are used, this usually has an effect on available bandwidth, delay and jitter. Jitter is particularly damaging to real-time multimedia traffic, like voice or video telephony. Long latency can also be problematic in real-time communications. Applications like audio or video streaming are not as sensitive to delay and jitter as real-time communications, because streaming applications can usually quite easily buffer data and the user can easily accept some delay from the broadcast source to the viewing screen. Within the realm of video and audio streaming, bandwidth requirements for the transport of data streams can vary by as much as 3 orders of magnitude. For example, compressed voice audio can take less than 10 kbps while a full HDTV signal can take a bandwidth of over 20 Mbps.
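
    Jitter as defined above can be estimated from packet timestamps. A small sketch in the style of the RTP (RFC 3550) interarrival jitter estimator, which keeps a running smoothed average of the variation in per-packet transit time (the timestamps below are made-up example values, in milliseconds):

```python
def interarrival_jitter(send_times, recv_times):
    """Smoothed interarrival jitter estimate in the style of RFC 3550:
    J(i) = J(i-1) + (|D(i)| - J(i-1)) / 16,
    where D(i) is the change in transit time between consecutive packets."""
    jitter = 0.0
    for i in range(1, len(send_times)):
        transit_prev = recv_times[i - 1] - send_times[i - 1]
        transit_now = recv_times[i] - send_times[i]
        d = abs(transit_now - transit_prev)
        jitter += (d - jitter) / 16.0        # exponential smoothing, gain 1/16
    return jitter

# Packets sent every 20 ms; arrival times wobble by a few ms.
send = [0, 20, 40, 60, 80]
recv = [5, 27, 44, 68, 85]
print(round(interarrival_jitter(send, recv), 3))
```

    A perfectly regular stream (constant transit time) yields zero jitter regardless of how large the fixed delay is, which matches the distinction drawn above between latency and jitter.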

    IP Telephony

    Voice over IP is a communication technology that lets you make telephone calls over the Internet at almost no cost. It is expected that IP-based networks will carry 15% of the world's voice traffic by 2002. Driving the convergence of traditionally separate voice and data networks, VoIP promises to deliver many benefits including cost savings and applications such as videoconferencing and global, toll-free calling. Voice-over-IP (VoIP) networks differ from conventional telephone networks in that voice quality on VoIP is affected by a wider variety of network impairments and can vary from call to call and over the course of a call.

    There are many different Voice over IP related protocols in use. There are both proprietary systems and open standards. The most important standard protocols are H.323 and SIP.

    Compression of voice data is necessary to conserve network bandwidth. VoIP consumes from 18 to 88 kilobits per second per active voice call, depending on the level of compression. With video the figure is from 128 kbit/s to around 4 Mbit/s, typically in the 384 kbit/s to 1 Mbit/s range.

    For interoperability reasons a typical VoIP product supports a few common ITU codecs, such as G.711, G.723.1, G.726, and G.729A. These codecs offer trade-offs among bit-rate, implementation complexity, and voice quality. For example, the toll-quality codec G.711 is a simple codec that uses less than 1 MIPS of DSP, but takes up 64 kbit/s of network bandwidth, not including the overhead for RTP, UDP and IP headers. G.723.1 uses only 5.3 or 6.3 kbit/s of network bandwidth plus overhead and delivers near toll-quality voice, but consumes significantly more DSP resources (both MIPS and memory). G.726 supports multiple bit-rates (16-40 kbps), and G.729A supports 8 kbit/s. Both codecs deliver near toll-quality voice, and are not as demanding on DSP resources.
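
    The per-call figures above follow directly from the codec bit-rate plus the 40 bytes of IP (20) + UDP (8) + RTP (12) headers carried on every packet. A sketch of the arithmetic, assuming a typical 20 ms packetization interval and ignoring link-layer overhead (the function name and defaults are illustrative, not from any standard API):

```python
def voip_bandwidth_kbps(codec_kbps, packet_interval_ms=20.0, header_bytes=40):
    """Per-call IP bandwidth: codec payload plus IP(20)+UDP(8)+RTP(12) headers."""
    payload_bytes = codec_kbps * 1000 / 8 * packet_interval_ms / 1000
    packets_per_sec = 1000 / packet_interval_ms
    total_bits_per_sec = (payload_bytes + header_bytes) * 8 * packets_per_sec
    return total_bits_per_sec / 1000

print(voip_bandwidth_kbps(64))   # G.711: 80 kbit/s on the wire
print(voip_bandwidth_kbps(8))    # G.729A: 24 kbit/s on the wire
```

    Note how the fixed header cost dominates for low-rate codecs: G.729A's 8 kbit/s of speech triples to 24 kbit/s on the wire, which is why shorter packet intervals (more packets per second) cost proportionally more bandwidth.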

    Voice over IP (VoIP) is susceptible to network behaviors, referred to as delay and jitter, which can degrade the voice application to the point of being unacceptable to the average user. Delay is the time taken from point-to-point in a network. VoIP typically tolerates delays up to 150 ms before the quality of the call becomes unacceptable. Jitter is the variation in delay over time from point-to-point. If the delay of transmissions varies too widely in a VoIP call, the call quality is greatly degraded. Packet loss is losing packets along the data path, which severely degrades the voice application. Prior to deploying VoIP applications, it is important to assess the delay, jitter, and packet loss on the data network in order to determine if the voice applications will work. The delay, jitter, and packet loss measurements can then aid in the correct design and configuration of traffic prioritization, as well as buffering parameters in the data network equipment. VoIP technology has made significant progress since its introduction in the mid 1990s. With the widespread adoption of the Internet in both the private and public sectors, the IP-based networking infrastructure on which VoIP technology relies has improved dramatically, making the technology more cost-effective and reliable than ever before.

    VoIP is not just for wires anymore. When paired with Wi-Fi, the technology can potentially provide 802.11-capable PDAs with the ability to place phone calls wirelessly without requiring the additional circuitry of a cellular phone. VoIP voice quality is on the rise, and the potential cost-savings alone provide a good reason for implementing VoIP platforms in emerging enterprise applications. The opportunity to implement PBX style systems using software-driven packages running on computers is proving to be a major attraction when companies are rebuilding their telephone infrastructure. Of course, adopting VoIP means different things to different companies. Some find that bridging their existing PBX systems via IP trunking is a cost-effective way to leverage their IP network and preserve their investment in traditional circuit-switched infrastructure. Others are extending IP telephony throughout the enterprise with software-based call servers and IP phones on the desktop. Additionally, many enterprises are also taking advantage of IP-based audio conferencing and call centers. Smaller companies, eager to offload voice infrastructure maintenance, are turning to provider-hosted solutions, such as IP Centrex or unified messaging services. VoIP is also affecting the telephone companies. Competing new operators are establishing national VoIP networks that are beginning to offer both domestic and international connections at a fraction of what their land-line counterparts charge, as well as familiar features of traditional telephone systems. Customers who already subscribe to a DSL- or cable-modem-based service can add VoIP capabilities to their home or office PCs by means of software downloads over the Internet.

    • Meeting the Challenges of VoIP ATA Designs - For VoIP services to continue to grow, carriers need lower-cost phone adapters that are robust and easier to install. But, to make that happen, designers must make some difficult hardware and software decisions. Here's a look at some of the tough choices that must be made.    Rate this link


      • Direct Image - internet telephony information, info on software for doing it    Rate this link
      • Internet Protocol (IP)Telephony Clearinghouses Tutorial - Clearinghouses provide Internet service providers (ISPs), Internet telephony service providers (ITSPs), and telecommunications companies with a complete solution, enabling them to offer Internet telephony, fax, and a range of value-added services. Clearinghouses act as intermediaries for the financial settlement of Internet telephony and fax traffic and guarantee payment to all members.    Rate this link
      • Internet Telephony: An Introduction - What are the benefits of this new technology, and how will it be regulated?    Rate this link
      • Internet Telephony Tutorial - Internet telephony refers to communications services (voice, facsimile, and/or voice-messaging applications) that are transported via the Internet, rather than the public switched telephone network (PSTN). The basic steps involved in originating an Internet telephone call are conversion of the analog voice signal to digital format and compression/translation of the signal into Internet protocol (IP) packets for transmission over the Internet; the process is reversed at the receiving end.    Rate this link
      • Make The Move To VoIP Now! - IP telephony strategies from VoIP Gateways to IP PBXs and IP Phones    Rate this link
      • Network Prep and QoS Assurance - Yes, you should look at your network before installing an IP PBX. But the news is probably good. If not, here's a make-ready recipe, and some products to help.    Rate this link
      • - information on voice over IP    Rate this link
      • SIP Forum - SIP Forum is a non profit association whose mission is to promote awareness and provide information about the benefits and capabilities that are enabled by SIP. SIP (Session Initiation Protocol) is emerging as the protocol of choice for setting up conferencing, telephony, multimedia and other types of communication sessions on the Internet. SIP may also be used for new types of communications, such as instant messaging and application level mobility across various networks, including wireless, and across user devices.    Rate this link
      • Using IP Telephony to Bypass Local Phone Monopolies - While Internet telephony has received a great deal of attention as a means for reducing long distance telephone tolls, there is another perhaps even more compelling use for IP telephony as a means to bypass local phone monopolies. This article explains how competitive carriers (in particular ISPs) can get into the local dialtone business, and in the process offer customers a much better deal than the Baby Bells can.    Rate this link
      • Telephone moves to broadband - Voice-over-broadband (VoB) represents the next incremental step in the evolution of the global voice/data network from a circuit- to packet-switching architecture.    Rate this link
      • Voice And Data Converge In VoIP - Internet telephony is a complicated communication technology that provides cost and bandwidth savings to both the consumer and the enterprise.    Rate this link
      • Voice-Data Consolidation Tutorial - information on the transmission of both voice and data over a single packetized communications network    Rate this link
      • Voice over packet: putting it all together - building a voice-over-packet system used to be a complex process of integrating several components and perhaps designing one or two of them yourself    Rate this link
      • VoIP--do it right - VoIP can help your company save on telephone costs, leverage its existing network infrastructure, and add communications features that enhance productivity--assuming, of course, that it's done right. If you're planning to take the plunge and swap out your old PBX for a VoIP system, you need to keep your eye on what's critical--and know the pitfalls to avoid.    Rate this link


    Multimedia communications

    Internet multimedia communications is an example of convergence (merging digital technologies like digital signal processing, computer hardware and Internet data communications with analogue technologies like traditional audio, video and the telephone network). In Internet multimedia communications, data from the audio/video source is modified with the help of an encoding/converting program that converts the data to IP (Internet protocol) packets. This is required because only bytes (a combination of 1's and 0's) can be transmitted through the Internet. At the receiving end, these packets are converted back to audio/video signals with the aid of your computer and the receiving program running on it. Voice over IP enables making telephone calls through the Internet. Video streaming technologies allow you to watch videos and movies over the Internet. Three things are happening in the video conferencing industry:

    • Users of video conferencing equipment are in the process of converting their existing ISDN infrastructure over to Internet Protocol, or IP.
    • Many corporate Local Area Network, or LAN, infrastructures are being upgraded to support Voice-over-IP services, using existing Ethernet LAN equipment.
    • Many corporations will use their private networks, corporate intranets, and Virtual Private Networks, or VPNs, to carry an increasing amount of voice and video call traffic.
    The Internet was designed to move basic data, but nowadays it is also used for applications like real-time audio and video services. The Internet is a good network to carry all kinds of data, but it can fail badly when tasked with carrying streaming media. Streaming media requires each packet of the stream to arrive at the end location without a significant time lag before the next. Unfortunately, as there are many paths for each packet to take across the Internet and there are many varying delays on each, packets can arrive at highly staggered times at the end user. This "jitter", as well as outright packet loss, results in the choppy pictures and broken audio that most of us experience while watching streaming media on the Internet. On good connections, when the network is not very heavily loaded, audio and video generally work well, but on congested or slow network connections the problems are common. Fortunately there are several techniques that can improve the quality of audio and video on the Internet, but to be able to use them you must understand how multimedia communications on the Internet works. Streaming media was born to deliver audio, then audio and video, over the public Internet. As with many new technologies, it started with single-vendor proprietary techniques. Internet users do more than just surf the Web, and popular Internet activities include watching video clips. Streaming media can only thrive if there is one standard. It is now time to put an end to the "click here to view with this, that, or the other player" -- not just to eliminate viewer confusion but also to reduce costs by eliminating the need for each streaming media content provider to host multiple systems. Happily, robust multivendor standards and interoperability now exist, thanks to the Moving Picture Experts Group (MPEG), the Internet Streaming Media Alliance (ISMA) and the MPEG Industry Forum (MPEGIF).


      • Internet Streaming Media Alliance - Just as the adoption of standard mark-up languages has fueled innovation and the explosive use of today's World Wide Web, the goal of Internet Streaming Media Alliance is to accomplish the same for the next wave of rich Internet content, streaming video and audio. The Alliance believes that in creating an interoperable approach for transporting and viewing streaming media, content creators, product developers and service providers will have easier access to the expanding commercial and consumer markets for streaming media services.    Rate this link

      Networked video and video streaming

      Ideally, video and audio are streamed across the Internet from the server to the client in response to a client request for a Web page containing embedded videos. The client plays the incoming multimedia stream in real time as the data is received. Current transport protocol, codec and scalability research will eventually make high quality video on the Web a practical reality for masses.Small video and sound files can be usually streamed from an normalHTTP Server with satisfactory results under ideal conditions. But, such playback can suffer significantly from congested network conditions. To stream large files well, streaming should be done usually done from a dedicated streaming server using a suitablestraming protocol.The processing involved in streaming video applications can be divided into roughly two types of functions: control and transport (CT) and media decode (MD).The CT and MD functions have different processing requirements. CT is not computationally intense and mainly involves string parsing, data packet manipulation, and finite state machine implementation (suitable for normal microprocessors). The CTR functionality usually used protocols like real-time streaming protocol (RTSP) session control and real-time transport protocol (RTP) media transport. The MD functionality is much more computationally intense because of the sophisticated signal processing required by audio and video coding algorithms (suitable for DSPs or microprocessor with special multimedia instructions). Media dcoding includes besides normal audio/video signal decoding also error concealment, and other ancillary signal processing steps such as echo cancellation and others.The streaming technologies are generally divided to two classes:real-time streaming and progressive streaming.True or real-time streaming refers to technologies that keep the bandwidth of the media signal matched to that of the viewer's connection so that the media is always seen in real-time. 
Dedicated streaming media servers and streaming protocols are required to enable real-time streaming. Real-time streaming delivery always happens in real-time, so it is well suited to live events. It also allows viewers to fast forward to different parts of the movie, a useful function for presentations and lectures. In theory, real-time streaming movies should never pause once they start playing, but in reality, periodic pauses may occur. Progressive streaming, also known as progressive download, refers to online media that users can watch as the files are downloaded. The user can see the part of the file that has downloaded at a given time, but can't jump ahead to portions that haven't been transferred yet. Progressive streaming files don't adjust during transmission to match the bandwidth of the user's connection like a real-time streaming format. Progressive streaming is often called HTTP streaming because standard HTTP servers can deliver files in this fashion and no special protocols are needed.This method guarantees the quality of the final movie because the viewed portion of the file is downloaded before it is played. This means users may experience a delay before the movie starts, especially with slower connections. Progressive streaming is especially useful for delivery of short pieces. Progressive streaming is not a good solution for long movies or material the user may want to skip through, such as lectures, speeches or presentations. Progressive streaming technologies also don'twork for material that must be broadcast live.So which method should you choose? Many sites are opting to offer both. 
      When you stream on your web site, you have a choice of embedding a streaming player in the web page or running the player as a stand-alone application. It is easier to run the streaming player as a stand-alone application. Embedding the player is a bit more involved, and the Netscape vs. Internet Explorer rivalry has made the situation even more complicated. You may also discover problems running embedded streaming players in operating system environments other than Windows: Mac, Linux and Unix players do not have as much functionality available as their Windows counterparts.
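      The control-and-transport (CT) work mentioned earlier is largely string parsing. As a rough illustration, RTSP (RFC 2326) is text-based much like HTTP, and a minimal parser for an RTSP request could look like the sketch below (the sample message is invented):

```python
# Minimal sketch of the string-parsing side of the CT (control and
# transport) function: splitting an RTSP request into its request
# line and headers. RTSP is text-based, much like HTTP.

def parse_rtsp_request(message):
    """Return (method, uri, version, headers) for an RTSP request."""
    head, _, _ = message.partition("\r\n\r\n")
    lines = head.split("\r\n")
    method, uri, version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return method, uri, version, headers

# Invented sample SETUP request for illustration:
msg = ("SETUP rtsp://example.com/media/track1 RTSP/1.0\r\n"
       "CSeq: 3\r\n"
       "Transport: RTP/AVP;unicast;client_port=4588-4589\r\n\r\n")
method, uri, version, headers = parse_rtsp_request(msg)
print(method, headers["CSeq"])   # SETUP 3
```

      Work like this runs comfortably on an ordinary microprocessor, while the media decode (MD) side needs a DSP or multimedia instructions.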

        General information

        • E-mail Discussion Lists @ - discussion lists on streaming media topics
        • Streaming Video - This is a general article on streaming video and how it changed the audio/video industry.
        • Video and Streaming Media - When building web sites, remember that most users still do not have sufficient bandwidth to receive streaming video in an acceptable quality, and they won't get broadband for another four years.
        • Wireless Video--Get The Picture? - Today, wireless video is in the transitional phase between the "advanced prototype" and the "functional solution with practical application"--what Geoffrey Moore might call "crossing the chasm." All of the enabling technologies currently exist and are either in the market, beginning to be deployed, or will become available over the next 18 months. There is little doubt that these technologies will make widespread, global wireless video a reality.


      • H.323 Tutorial - H.323 is a standard that specifies the components, protocols and procedures that provide multimedia communication services, like real-time audio, video, and data communications, over packet networks, including Internet Protocol (IP) based networks.
      • Media Gateway Control Protocol (MGCP) - an application programming interface and a corresponding protocol (MGCP) for controlling Voice over IP (VoIP) gateways from external call control elements.

    Embedded systems and IP protocol

    We live in an increasingly networked world and embedded systems are no exception. Network connections are becoming common for purposes such as downloading information or revised code, uploading acquired data or operating statistics, and diagnosing problems from remote locations. TCP/IP has become the network protocol of choice.

      Project pages

      • A $25 Web Server - article from
      • IPic - A Match Head Sized Web-Server - This is probably the world's tiniest implementation of a TCP/IP stack and an HTTP web server, based on a PIC 12C509A in a tiny 8-pin SO8 package. The iPic web-server is connected directly to a router running SLIP at 115200 bps.
      • Web51 Project page - Free web server and Telnet-RS232 converter in 8 kB Flash and 256 B RAM. A fully documented project of connecting an Intel x51-compatible CPU to an RTL8019AS network controller. WWW server, "TELNET" and SMTP are implemented on a 256 B RAM and 8 kB Flash solution. You can use the GPL version or buy the HW kit + SW + licence & MAC address space for your own project.

      Embedded TCP/IP software

      • lwIP - A Lightweight TCP/IP Stack - lwIP is a small independent implementation of the TCP/IP protocol suite that has been developed by Adam Dunkels at the Computer and Networks Architectures lab at the Swedish Institute of Computer Science as part of the Connected project. The focus of the lwIP TCP/IP implementation is to reduce RAM usage while still having a full-scale TCP. This makes lwIP suitable for use in embedded systems with tens of kilobytes of free RAM and room for around 40 kilobytes of code ROM.
      • PIC IP: Internet Protocol I/O - PIC TCP/IP and web server links
      • uIP - uIP is a free small TCP/IP implementation for 8- and 16-bit microcontrollers. uIP is written in the C programming language and the source code is free to distribute and use for both non-commercial and commercial use (BSD license). The main philosophy behind uIP is that by integrating the TCP/IP stack and the application, the code size of both the TCP/IP stack and the application can be kept down to a minimum. The uIP code footprint is on the order of a few kilobytes and RAM usage is on the order of a few hundred bytes. uIP has been compiled and tested on the x86 and the 6502 CPUs.
      • Project uC/IP - uC/IP (pronounced mew-kip) is a project to develop a TCP/IP protocol stack for microcontrollers. The code is based on BSD code and therefore carries the BSD licence.

      Silicon based routing and switching

      When the power of normal network cards and CPUs is not enough to carry all the traffic, specialized hardware is needed. A typical high-end IP (layer 3) router is based on network processor-based line cards that use CAM and SRAM based lookup tables. A typical lookup table is used to perform the layer 3 longest prefix match on an IP packet header, where an incoming IP packet needs to be directed to the appropriate address. This task is accomplished by the network processor, which locates the correct coordinates through the lookup table. In a CAM-based lookup system, addresses are searched and matched against the appropriate stored addresses. Once a match is found, the network processor can guide the incoming packet to its end location through the switch fabric. With IPv4 packets, 32 bits of header need to be searched and matched. With IPv6 packets, 128 bits need to be processed, plus the added burden of providing the required classless inter-domain routing (CIDR), which previously only a ternary CAM (TCAM) could perform. The "unknown" or "don't care" state allowed in a TCAM is used to prevent the address space waste due to class organization, as required in CIDR address programming. It has been estimated (C. Bernard Shung, Network Processing ICs, ISSCC 2001 Tutorial, San Francisco, February 4, 2001) that in IPv4, 150 Mbits of CAM are required to properly handle 1024k IP prefix numbers (address locations). The memory size increases dramatically to 2400 Mbits of TCAM for IPv6 with the same quantity of IP prefix numbers. As one can see, a very large amount of memory is required for a CAM lookup table. And memory size is not the only issue; the ability to perform the search quickly is another. Traditional networking applications use CAM or SRAM devices. In today's designs, the most each of these devices can store is approximately 18 Mbits for a CAM and 32 Mbits for an SRAM.
Packet buffer memory stores the actual packets until the network processor has come up with the correct location. Traditionally, SRAMs were employed because of their short row cycle time (tRC) and quick bus turn-around, which allows storing and releasing data quickly.
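      The longest-prefix-match operation that a (T)CAM performs in a single hardware step can be illustrated in software. The sketch below (routing prefixes and interface names are invented) simply tests the destination against every prefix and keeps the longest match, using Python's standard ipaddress module:

```python
# Longest-prefix-match (LPM) lookup for IPv4, done in software for
# illustration. A hardware router resolves this in one (T)CAM cycle;
# here we linearly scan an invented routing table and keep the most
# specific (longest) matching prefix.
import ipaddress

routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "if0"),
    (ipaddress.ip_network("10.1.0.0/16"), "if1"),
    (ipaddress.ip_network("10.1.2.0/24"), "if2"),
    (ipaddress.ip_network("0.0.0.0/0"), "default"),
]

def lookup(dst):
    """Return the next hop for destination address dst."""
    addr = ipaddress.ip_address(dst)
    best = None
    for net, hop in routing_table:
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1] if best else None

print(lookup("10.1.2.3"))    # if2 - the /24 wins over the /16 and /8
print(lookup("10.1.9.9"))    # if1
print(lookup("192.0.2.1"))   # default
```

      The "don't care" bits of a TCAM let it store exactly such variable-length prefixes and return the longest match in one search.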

    Future of IP networks

      Home networking

      A silent revolution is happening: the revolution of home networking. And while this revolution should give us no cause for concern, it does hold the promise of forever changing and improving our lives. It is clear that computer technology is here to stay and will continue to grow and evolve. Motivation for home networking has come primarily from the desire to share expensive resources such as Internet access, printers, and scanners among multiple PCs, and this has set the stage for the revolution. The cost benefits from sharing resources may serve as a catalyst for consumers to build home networks, but in the future their use will extend to entertainment purposes (on-line gaming, sharing music/video etc.). Nowadays a typical home network in a modern home uses Ethernet cabling and runs TCP/IP plus some other protocols (IPX, Windows networking) on it. There are also alternatives to Ethernet cabling; those include using telephone wiring (HomePNA technology), mains cabling (some proprietary slow solutions) and doing it wirelessly (IEEE 802.11b WLAN, possibly Bluetooth some day).

      Most home networks are connected to the Internet through some Internet connection. It is quite typical that the home network computers do not connect to the Internet directly, but through a communication device that provides the connection. Many Internet gateways nowadays do network address translation (NAT), which allows, for example, many computers at home to communicate using only one IP address visible to the outside world. There are various types of NAT:

      • Full Cone: A full cone NAT is one where all requests from the same internal IP address and port are mapped to the same external IP address and port. Furthermore, any external host can send a packet to the internal host, by sending a packet to the mapped external address.
      • Restricted Cone: A restricted cone NAT is one where all requests from the same internal IP address and port are mapped to the same external IP address and port. Unlike a full cone NAT, an external host (with IP address X) can send a packet to the internal host only if the internal host had previously sent a packet to IP address X.
      • Port Restricted Cone: A port restricted cone NAT is like a restricted cone NAT, but the restriction includes port numbers. Specifically, an external host can send a packet, with source IP address X and source port P, to the internal host only if the internal host had previously sent a packet to IP address X and port P.
      • Symmetric: A symmetric NAT is one where all requests from the same internal IP address and port, to a specific destination IP address and port, are mapped to the same external IP address and port. If the same host sends a packet with the same source address and port, but to a different destination, a different mapping is used. Furthermore, only the external host that receives a packet can send a UDP packet back to the internal host.
      The use of NAT means that not every computer at home needs to have a publicly visible IP address. The downside of NAT is that it can limit the use of some services; for example, it can limit your ability to run publicly visible Internet servers and to use IP telephony. There are also solutions to the problems created by NATs. STUN (Simple Traversal of UDP through NATs (Network Address Translation)) is a protocol for assisting devices behind a NAT firewall or router with their packet routing.
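      The practical difference between the cone and symmetric mappings listed above can be sketched with a toy model (internal addresses and external port numbers below are invented): a cone NAT reuses one external port per internal endpoint, while a symmetric NAT allocates a fresh mapping per destination, which is what breaks many NAT traversal tricks.

```python
# Toy model contrasting "cone" and "symmetric" NAT port mappings.
# Addresses and port numbers are invented for illustration.

class ConeNat:
    """Same internal (ip, port) always maps to the same external port."""
    def __init__(self):
        self.map = {}
        self.next_port = 40000
    def translate(self, src, dst):
        # dst is ignored: the mapping depends on the source only.
        if src not in self.map:
            self.map[src] = self.next_port
            self.next_port += 1
        return self.map[src]

class SymmetricNat:
    """A new external port is allocated per distinct (src, dst) pair."""
    def __init__(self):
        self.map = {}
        self.next_port = 40000
    def translate(self, src, dst):
        key = (src, dst)
        if key not in self.map:
            self.map[key] = self.next_port
            self.next_port += 1
        return self.map[key]

src = ("192.168.1.10", 5060)
cone, sym = ConeNat(), SymmetricNat()
# The cone NAT reuses one mapping for both destinations:
print(cone.translate(src, ("198.51.100.1", 80)),
      cone.translate(src, ("203.0.113.5", 80)))    # 40000 40000
# The symmetric NAT allocates a fresh port per destination:
print(sym.translate(src, ("198.51.100.1", 80)),
      sym.translate(src, ("203.0.113.5", 80)))     # 40000 40001
```

      STUN works by letting a host discover which of these behaviours its NAT exhibits; symmetric NATs are the hardest case to traverse.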
      • Attaining Fast, Scaleable Home Networks - High-speed home phoneline networks are being enabled as improvements to the HomePNA specification accelerate the architecture's speed from 16 to 32 Mbps. Meanwhile provisions are being added for QoS and other features to bring these deployments up to par.
      • SOHO gateways: provisioning the last 10m - Broadband access, voice over data, and simplified LAN management are hot technologies in their own right. Put them together, however, and you just might have next year's most promising killer application.

      Traffic management

      • Internet Model for Control of Converged Networks Tutorial - Convergence technologies are changing the way telecommunications companies will provide voice and data traffic. Telecommunications convergence is the merger of legacy-based time division multiplexing (TDM) architecture with today's packet-switching technology and call-control intelligence, which allows commercial carriers and service providers to consolidate voice and data networks to provide integrated communications services.
      • Peering into the Broadband WAN Future - Speed is nice, but it's not everything. Tomorrow's WAN designs must combine speed with traffic management, classification, voice, and other service offerings.


      It is well realized that the lifetime of the IPv4 address space is limited. The day when no more 32-bit IP network addresses are left may, and most certainly will, arrive. The new IPv6/IPng architecture solves the address space problem in an effective way by increasing the IP address length to 128 bits. IPv4 and IPv6 are not compatible with each other. The transition process from the current IP version 4 to the future IP version 6 is probably one of the hottest subjects discussed among the people working on the IPng concept.
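      The scale of the change is easy to work out from the address lengths alone:

```python
# Address space sizes implied by 32-bit (IPv4) and 128-bit (IPv6)
# addresses. These are upper bounds; reserved ranges and allocation
# overheads make the usable counts somewhat smaller in practice.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)   # 4294967296 (about 4.3 billion)
print(ipv6_addresses)   # 340282366920938463463374607431768211456 (about 3.4e38)
```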

    Other interesting

    • FreeWeb - A web which allows you to publish and view any type of information with complete privacy and anonymity.
    • Internet Optimization Tweaks - Next to inadequate hardware, the biggest frustration among internet gamers is lag. Sometimes the problem is unavoidable, such as excessive traffic at peak times, but often something can be done to improve the situation. And, just in case you are unable to find the cure for your dial-up here, links to Cable and DSL services are also provided.
    • Residential Internet-Ready Buildings (IRBs) Tutorial - Internet service in residential multidwelling units (MDUs) is about to become the next utility after gas, water, and electricity. An Internet-ready building (IRB) is defined as making an MDU ready for high-speed broadband services using Internet protocol (IP) with the entire required infrastructure, network, operation, and service functions.
    • NetGeo - NetGeo is a database and collection of Perl scripts used to map IP addresses, domain names and AS numbers to geographical locations. CAIDA is developing this system for use in network visualization tools.
    • W. Richard Stevens' Home Page - many UNIX IP networking and UNIX network programming related papers
