Saturday, April 5, 2008

Accessing the Internet

Common methods of home access include dial-up, landline broadband (over coaxial cable, fiber-optic or copper wires), Wi-Fi, satellite, and 3G cell-phone technology.

Public places to use the Internet include libraries and Internet cafés, where computers with Internet connections are available. There are also Internet access points in many public places such as airport halls and coffee shops, in some cases just for brief use while standing. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels now also have public terminals, though these are usually fee-based. Such terminals are widely used for tasks like ticket booking, bank deposits, and online payments.

Wi-Fi provides wireless access to computer networks, and therefore to the Internet itself. Hotspots providing such access include Wi-Fi cafés, where a would-be user needs to bring their own wireless-enabled device such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location: a whole campus or park, or even an entire city, can be enabled. Grassroots efforts have led to wireless community networks, and commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from such places as a park bench.

Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services.

High-end mobile phones such as smartphones generally come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software. More mobile phones have Internet access than PCs, though mobile access is not as widely used. The methods used to get online can thus be organized into a matrix of Internet access providers and access protocols.

An Internet service provider (abbr. ISP, also called Internet access provider or IAP) is a business or organization that provides consumers or businesses access to the Internet and related services. In the past, most ISPs were run by the phone companies. Now, ISPs can be started by just about any individual or group with sufficient money and expertise. In addition to Internet access via various technologies such as dial-up and DSL, they may provide a combination of services including Internet transit, domain name registration and hosting, web hosting, and colocation.

Just as its customers pay an ISP for Internet access, each ISP itself pays an upstream ISP for Internet access. In the simplest case, a single connection is established to an upstream ISP using one of the technologies described above, and the ISP uses this connection to send or receive any data to or from parts of the Internet beyond its own network; in turn, the upstream ISP uses its own upstream connection, or connections to its other customers (usually other ISPs), to allow the data to travel from source to destination.

In reality, the situation is often more complicated. For example, ISPs with more than one point of presence (PoP) may have separate connections to an upstream ISP at multiple PoPs, or they may be customers of multiple upstream ISPs and have connections to each one at one or more of their PoPs. ISPs may engage in peering, where multiple ISPs interconnect with one another at a peering point or Internet exchange point (IX), allowing the routing of data between their networks, without charging one another for that data - data that would otherwise have passed through their upstream ISPs, incurring charges from the upstream ISP. ISPs that require no upstream and have only customers and/or peers are called Tier 1 ISPs, indicating their status as ISPs at the top of the Internet hierarchy. Routers, switches, Internet routing protocols, and the expertise of network administrators all have a role to play in ensuring that data follows the best available route and that ISPs can "see" one another on the Internet.

A Virtual ISP (vISP) purchases services from another ISP (sometimes called a wholesale ISP in this context) that allow the vISP's customers to access the Internet via one or more points of presence (PoPs) owned and operated by the wholesale ISP. There are various models for the delivery of this type of service; for example, the wholesale ISP could provide network access to end users via its dial-up modem PoPs or DSLAMs installed in telephone exchanges, and route, switch, and/or tunnel the end-user traffic to the vISP's network, whereupon the vISP may route the traffic toward its destination. In another model, the vISP does not route any end-user traffic, and needs only provide AAA (Authentication, Authorization and Accounting) functions, as well as any "value-add" services like e-mail or web hosting. Any given ISP may use its own PoPs to deliver one service and a vISP model to deliver another, or use a combination to deliver a service in different areas. The service provided by a wholesale ISP in a vISP model is distinct from that of an upstream ISP, even though in some cases the two may be one and the same company: the former provides connectivity from the end user's premises to the Internet or to the end user's ISP, while the latter provides connectivity from the end user's ISP to all or parts of the rest of the Internet.

A vISP can also refer to a completely automated white-label service offered to anyone at no cost or for a minimal set-up fee. The actual ISP providing the service generates revenue from the calls and may also share a percentage of that revenue with the owner of the vISP. All technical aspects are dealt with by the wholesale provider, leaving the owner of the vISP with the task of promoting the service. This sort of service is, however, declining due to the popularity of unmetered Internet access, also known as flat rate.

Wednesday, April 2, 2008

Mailing List

A mailing list is a collection of names and addresses used by an individual or an organization to send material to multiple recipients. The term is often extended to include the people subscribed to such a list, so the group of subscribers is referred to as "the mailing list", or simply "the list".

At least two quite different types of mailing lists can be defined: the first one is closer to the literal sense, where a "mailing list" of people is used as a recipient for newsletters, periodicals or advertising. Traditionally this was done through the postal system, but with the rise of e-mail, the electronic mailing list became popular. The second type allows members to post their own items which are broadcast to all of the other mailing list members.

When similar or identical material is sent out to all subscribers on a mailing-list, it is often referred to as a mailshot or blast. A list for such use can also be referred to as a distribution list.

In legitimate (non-spam) mailing lists, individuals can subscribe or unsubscribe themselves.

Mailing lists are often rented or sold. If rented, the renter agrees to use the mailing list for only contractually agreed-upon times. The mailing list owner typically enforces this by "salting" the mailing list with fake addresses and creating new salts for each time the list is rented. Unscrupulous renters may attempt to bypass salts by renting several lists and merging them to find the common, valid addresses.
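
To illustrate, here is a minimal Python sketch of how a list owner might salt a rented list; the trap-address format, owner domain, and renter identifiers are all hypothetical.

import secrets

def salt_list(addresses, renter_id, owner_domain="example.com", count=3):
    # Mix unique trap addresses into a copy of the list. Mail arriving
    # at one of them reveals which renter (mis)used the list.
    salts = ["trap-%s-%s@%s" % (renter_id, secrets.token_hex(4), owner_domain)
             for _ in range(count)]
    return addresses + salts

# Each rental gets its own salts.
base = ["alice@example.org", "bob@example.net"]
print(salt_list(base, "renter42"))
print(salt_list(base, "renter43"))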

Mailing list brokers exist to help organizations rent their lists. For some organizations, such as specialized niche publications or charitable groups, their lists may be some of their most valuable assets, and mailing list brokers help them maximize the value of their lists.

A mailing list is simply a list of e-mail addresses of people who are interested in the same subject, are members of the same work group, or are taking a class together. When a member of the list sends a note to the group's special address, the e-mail is broadcast to all members of the list. The key advantage of a mailing list over other media such as web-based discussion boards is that new messages are delivered immediately to the participants' mailboxes.

Listwashing is the process through which individual entries in mailing lists are removed. These mailing lists typically contain e-mail addresses or phone numbers of those that have not voluntarily subscribed. An entry is removed from the list after a complaint is received.

Only complainers are removed via this process. It is widely believed that only a small fraction of those inconvenienced by unsolicited e-mail end up sending a proper complaint. Because most of those who have not voluntarily subscribed stay on the list, while the few who complain are removed and so stop complaining, listwashing helps spammers maintain a "complaint-free" list of spammable e-mail addresses. Internet service providers who forward complaints to the spamming party are often seen as assisting the spammer in listwashing, or, in short, as helping spammers.

Thursday, March 20, 2008

VoIP

VoIP stands for Voice over IP, where IP refers to the Internet Protocol that underlies all Internet communication. This phenomenon began as an optional two-way voice extension to some of the Instant Messaging systems that took off around the year 2000. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the actual voice traffic, VoIP can be free or cost much less than a normal telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL.

Thus VoIP is maturing into a viable alternative to traditional telephones. Interoperability between different providers has improved and the ability to call or receive a call from a traditional telephone is available. Simple inexpensive VoIP modems are now available that eliminate the need for a PC.

Voice quality can still vary from call to call, but it is often equal to, and can even exceed, that of traditional calls.

Remaining problems for VoIP include emergency telephone number dialling and reliability. Currently a few VoIP providers offer an emergency service, but it is not universally available. Traditional phones are line-powered and operate during a power failure; VoIP does not do so without a backup power source for the electronics.

Most VoIP providers offer unlimited national calling, but the direction in VoIP is clearly toward global coverage with unlimited minutes for a low monthly fee. VoIP has also become increasingly popular within the gaming world as a form of communication between players. Popular gaming VoIP clients include Ventrilo and TeamSpeak, and there are others available as well.

Monday, March 10, 2008

File Transfer Protocol

FTP, or File Transfer Protocol, is used to transfer data from one computer to another over the Internet or through a network. Specifically, FTP is a commonly used protocol for exchanging files over any TCP/IP-based network and for manipulating files on another computer on that network, regardless of which operating systems are involved (provided the computers permit FTP access). There are many existing FTP client and server programs, and FTP servers can be set up on almost any kind of machine, from game and voice servers to Internet hosts and other physical servers.

FTP runs exclusively over TCP. FTP servers by default listen on port 21 for incoming connections from FTP clients. A connection to this port from the FTP Client forms the control stream on which commands are passed to the FTP server from the FTP client and on occasion from the FTP server to the FTP client. FTP uses out-of-band control, which means it uses a separate connection for control and data. Thus, for the actual file transfer to take place, a different connection is required which is called the data stream. Depending on the transfer mode, the process of setting up the data stream is different.
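
As a concrete illustration, here is a minimal Python sketch of such a session using the standard library's ftplib; the hostname ftp.example.com is a placeholder, and the server is assumed to allow anonymous login.

from ftplib import FTP

# Open the control connection on port 21 and log in anonymously.
# "ftp.example.com" is a placeholder; substitute a real server.
ftp = FTP("ftp.example.com")     # connects to port 21
ftp.login()                      # anonymous login: sends USER and PASS

# LIST travels over a separate data connection; the control
# connection carries only the command and the server's replies.
ftp.retrlines("LIST")

ftp.quit()                       # sends QUIT over the control stream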

In active mode, the FTP client opens a random port (> 1023), sends the FTP server the random port number on which it is listening over the control stream and waits for a connection from the FTP server. When the FTP server initiates the data connection to the FTP client it binds the source port to port 20 on the FTP server.

In order to use active mode, the client sends a PORT command with the IP address and port as its argument. The format for the IP address and port is "h1,h2,h3,h4,p1,p2". Each of the first four fields is a decimal representation of 8 bits of the host IP address, followed by the chosen data port. For example, a client with an IP address of 192.168.0.1, listening on port 1025 for the data connection, will send the command "PORT 192,168,0,1,4,1". The port fields should be interpreted as p1×256 + p2 = port, or, in this example, 4×256 + 1 = 1025.
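
The encoding is easy to reproduce. A small Python sketch (the helper name is our own):

def port_argument(ip, port):
    # Encode an IP address and data port as the PORT command argument.
    p1, p2 = divmod(port, 256)               # port = p1*256 + p2
    return ",".join(ip.split(".") + [str(p1), str(p2)])

# Reproduces the example from the text.
print("PORT " + port_argument("192.168.0.1", 1025))   # PORT 192,168,0,1,4,1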

In passive mode, the FTP server opens a random port (> 1023), sends the FTP client the server's IP address to connect to and the port on which it is listening (a 16-bit value broken into a high and a low byte, as explained above) over the control stream, and waits for a connection from the FTP client. In this case the FTP client binds the source port of the connection to a random port greater than 1023.

To use passive mode, the client sends the PASV command, to which the server replies with something similar to "227 Entering Passive Mode (127,0,0,1,78,52)". The syntax of the IP address and port is the same as for the argument to the PORT command.
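
Going the other way, a client must parse the 227 reply to learn where to connect for the data transfer. A minimal Python sketch (again, the helper name is our own):

import re

def parse_pasv(reply):
    # Extract (host, port) from a 227 reply to the PASV command.
    fields = re.search(r"\(([\d,]+)\)", reply).group(1).split(",")
    h1, h2, h3, h4, p1, p2 = map(int, fields)
    return "%d.%d.%d.%d" % (h1, h2, h3, h4), p1 * 256 + p2

print(parse_pasv("227 Entering Passive Mode (127,0,0,1,78,52)"))
# ('127.0.0.1', 20020)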

In extended passive mode, the FTP server operates exactly as in passive mode, but it transmits only the port number (not broken into high and low bytes), and the client is to assume that it connects to the same IP address to which it was originally connected. Extended passive mode was added by RFC 2428 in September 1998.

While data is being transferred via the data stream, the control stream sits idle. This can cause problems with large data transfers through firewalls which time out sessions after lengthy periods of idleness. While the file may well be successfully transferred, the control session can be disconnected by the firewall, causing an error to be generated.

The FTP protocol supports resuming interrupted downloads using the REST command. The client passes the number of bytes it has already received as the argument to the REST command and then restarts the transfer. Some command-line clients, for example, offer an often-overlooked but valuable command, "reget" (meaning "get again"), which causes an interrupted "get" command to be continued, hopefully to completion, after a communications interruption.

Resuming uploads is not as easy. Although the FTP protocol supports the APPE command for appending data to a file on the server, the client does not know the exact position at which the transfer was interrupted. It has to obtain the size of the file some other way, for example from a directory listing or by using the SIZE command.
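
Putting these pieces together, here is a hedged Python sketch of resuming an interrupted download with the standard library's ftplib; the rest argument of retrbinary() causes ftplib to send REST before RETR, and the host and file names are placeholders. Resuming an upload would follow the same pattern, using SIZE to find the restart point.

import os
from ftplib import FTP

def resume_download(host, remote_path, local_path):
    # Resume an interrupted download; host and paths are placeholders.
    ftp = FTP(host)
    ftp.login()                              # anonymous login
    ftp.voidcmd("TYPE I")                    # binary mode; many servers require it for SIZE
    offset = os.path.getsize(local_path) if os.path.exists(local_path) else 0
    remote_size = ftp.size(remote_path)      # issues the SIZE command
    if remote_size is not None and offset < remote_size:
        with open(local_path, "ab") as f:    # append from where we stopped
            # rest=offset makes ftplib send "REST <offset>" before RETR
            ftp.retrbinary("RETR " + remote_path, f.write, rest=offset)
    ftp.quit()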

Wednesday, March 5, 2008

Web Browser

A web browser is a software application that enables a user to display and interact with text, images, videos, music and other information typically located on a Web page at a website on the World Wide Web or a local area network. Text and images on a Web page can contain hyperlinks to other Web pages at the same or a different website. Web browsers allow a user to quickly and easily access information provided on many Web pages at many websites by traversing these links. Web browsers format HTML information for display, so the appearance of a Web page may differ between browsers.

Some of the Web browsers available for personal computers include Internet Explorer, Mozilla Firefox, Safari, and Opera in order of descending popularity (in November 2007). Web browsers are the most commonly used type of HTTP user agent. Although browsers are typically used to access the World Wide Web, they can also be used to access information provided by Web servers in private networks or content in file systems.

Web browsers communicate with Web servers primarily using HTTP (hypertext transfer protocol) to fetch webpages. HTTP allows Web browsers to submit information to Web servers as well as fetch Web pages from them. The most commonly used version is HTTP/1.1, which is fully defined in RFC 2616. HTTP/1.1 has its own required standards that Internet Explorer does not fully support, but most other current-generation Web browsers do.

Pages are located by means of a URL (uniform resource locator, RFC 1738), which is treated as an address, beginning with http: for HTTP access. Many browsers also support a variety of other URL types and their corresponding protocols, such as gopher: for Gopher (a hierarchical hyperlinking protocol), ftp: for FTP (file transfer protocol), rtsp: for RTSP (real-time streaming protocol), and https: for HTTPS (an SSL-encrypted version of HTTP).

The file format for a Web page is usually HTML (hyper-text markup language) and is identified in the HTTP protocol using a MIME content type. Most browsers natively support a variety of formats in addition to HTML, such as the JPEG, PNG and GIF image formats, and can be extended to support more through the use of plugins. The combination of HTTP content type and URL protocol specification allows Web page designers to embed images, animations, video, sound, and streaming media into a Web page, or to make them accessible through the Web page.

Early Web browsers supported only a very simple version of HTML. The rapid development of proprietary Web browsers led to the development of non-standard dialects of HTML, leading to problems with Web interoperability. Modern Web browsers support a combination of standards-based and de facto HTML and XHTML, which should display in the same way across all browsers. No browser yet fully supports HTML 4.01, XHTML 1.x or CSS 2.1. Currently many sites are designed using WYSIWYG HTML-generation programs such as Macromedia Dreamweaver or Microsoft FrontPage. These often generate non-standard HTML by default, hindering the work of the W3C in developing standards, specifically with XHTML and CSS (cascading style sheets, used for page layout).

Some of the more popular browsers include additional components to support Usenet news, IRC (Internet relay chat), and e-mail. Protocols supported may include NNTP (network news transfer protocol), SMTP (simple mail transfer protocol), IMAP (Internet message access protocol), and POP (post office protocol). These browsers are often referred to as Internet suites or application suites rather than merely Web browsers.

Friday, February 15, 2008

HTTP, an Application Protocol

Now let's get back to our example. Web browsers and servers speak an application protocol that runs on top of TCP/IP, using it simply as a way to pass strings of bytes back and forth. This protocol is called HTTP (Hyper-Text Transfer Protocol), and we've already seen one command in it — the GET in our example.

When the GET command goes to the www.google.com web server with service number 80, it will be dispatched to a server daemon listening on port 80. Most Internet services are implemented by server daemons that do nothing but wait on ports, watching for and executing incoming commands.

If the design of the Internet has one overall rule, it's that all the parts should be as simple and human-accessible as possible. HTTP and its relatives (like the Simple Mail Transfer Protocol, SMTP, which is used to move electronic mail between hosts) tend to use simple printable-text commands that end with a carriage-return/line-feed.

This is marginally inefficient; in some circumstances you could get more speed by using a tightly-coded binary protocol. But experience has shown that the benefits of having commands be easy for human beings to describe and understand outweigh any marginal gain in efficiency that you might get at the cost of making things tricky and opaque.

Therefore, what the server daemon ships back to you via TCP/IP is also text. The beginning of the response will look something like this (a few headers have been suppressed):

HTTP/1.1 200 OK
Date: Sat, 10 Oct 1998 18:43:35 GMT
Server: Apache/1.2.6 Red Hat
Last-Modified: Thu, 27 Aug 1998 17:55:15 GMT
Content-Length: 2982
Content-Type: text/html

These headers will be followed by a blank line and the text of the web page (after which the connection is dropped). Your browser just displays that page. The headers tell it how (in particular, the Content-Type header tells it the returned data is really HTML).
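
To make the whole exchange concrete, here is a minimal Python sketch that performs the same conversation by hand over a raw TCP socket. We use HTTP/1.0 with a Host header so the server closes the connection once the page has been sent, as described above; any reachable web server would do.

import socket

# Connect to the web server's port 80 and speak HTTP by hand.
sock = socket.create_connection(("www.google.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: www.google.com\r\n\r\n")

# HTTP/1.0: read until the server closes the connection.
response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()

# The headers end at the first blank line; the page body follows.
headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("latin-1"))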

Sunday, February 10, 2008

TCP and IP

To understand how multiple-packet transmissions are handled, we need to know that the Internet actually uses two protocols, stacked one on top of the other.

The lower level, IP (Internet Protocol), is responsible for labeling individual packets with the source address and destination address of the two computers exchanging information over a network. For example, when you access http://www.google.com, the packets you send will carry your computer's IP address, such as 192.168.1.101, and the IP address of the www.google.com server, such as 64.233.167.99. These addresses work in much the same way that your home address works when someone sends you a letter. The post office can read the address and determine where you are and how best to route the letter to you, much like a router does for Internet traffic.

The upper level, TCP (Transmission Control Protocol), gives you reliability. When two machines negotiate a TCP connection (which they do using IP), the receiver knows to send acknowledgements of the packets it sees back to the sender. If the sender doesn't see an acknowledgement for a packet within some timeout period, it resends that packet. Furthermore, the sender gives each TCP packet a sequence number, which the receiver can use to reassemble packets in case they show up out of order. (This can easily happen if network links go up or down during a connection.)

TCP/IP packets also contain a checksum to enable detection of data corrupted by bad links. (The checksum is computed from the rest of the packet in such a way that if either the rest of the packet or the checksum is corrupted, redoing the computation and comparing is very likely to indicate an error.) So, from the point of view of anyone using TCP/IP and nameservers, it looks like a reliable way to pass streams of bytes between hostname/service-number pairs. People who write network protocols almost never have to think about all the packetizing, packet reassembly, error checking, checksumming, and retransmission that goes on below that level.
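
To make the checksum idea concrete, here is a minimal Python sketch of the 16-bit one's-complement checksum defined in RFC 1071 and used by IP and TCP; the payload is made up, and real packets checksum their headers as well.

def internet_checksum(data):
    # RFC 1071: one's-complement sum of 16-bit words, complemented.
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

packet = b"example data"                     # made-up even-length payload
cksum = internet_checksum(packet)
# Re-summing the data together with its checksum yields zero,
# which is how a receiver detects corruption.
print(hex(cksum), internet_checksum(packet + cksum.to_bytes(2, "big")) == 0)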

Packets and Routers

What the browser wants to do is send a command to the Web server on www.google.com that looks like this:

GET /LDP/HOWTO/Fundamentals.html HTTP/1.0

Here's how that happens. The command is made into a packet, a block of bits like a telegram that is wrapped with three important things: the source address (the IP address of your machine), the destination address (such as 64.233.167.99), and a service number or port number (80, in this case) that indicates that it is a World Wide Web request.

Our machine then ships the packet down the wire (our connection to our ISP or local network) until it gets to a specialized machine called a router. The router has a map of the Internet in its memory — not always a complete one, but one that completely describes your network neighborhood and knows how to get to the routers for other neighborhoods on the Internet.

The packet may pass through several routers on the way to its destination. Routers are smart. They watch how long it takes for other routers to acknowledge having received a packet, and they use that information to direct traffic over fast links. They also use it to notice when another router (or a cable) has dropped off the network, and compensate if possible by finding another route.

There's an urban legend that the Internet was designed to survive nuclear war. This is not true, but the Internet's design is extremely good at getting reliable performance out of flaky hardware in an uncertain world. This is directly due to the fact that its intelligence is distributed through thousands of routers rather than concentrated in a few massive and vulnerable switches (like the phone network). This means that failures tend to be well localized and the network can route around them.

Once the packet gets to its destination machine, that machine uses the service number to feed the packet to the web server. The web server can tell where to reply to by looking at the command packet's source IP address. When the web server returns this document, it will be broken up into a number of packets. The size of the packets will vary according to the transmission media in the network and the type of service.

The Domain Name System

The whole network of programs and databases that cooperates to translate hostnames to IP addresses is called ‘DNS’ (Domain Name System). When you see references to a ‘DNS server’, that means what we just called a nameserver. Internet hostnames are composed of parts separated by dots. A domain is a collection of machines that share a common name suffix. Domains can live inside other domains. For example, the machine www.google.com lives in the .google.com subdomain of the .com domain.

Each domain is defined by an authoritative name server that knows the IP addresses of the other machines in the domain. The authoritative (or ‘primary’) name server may have backups in case it goes down; if you see references to a secondary name server (or ‘secondary DNS’), it's talking about one of those. These secondaries typically refresh their information from their primaries every few hours, so a change made to the hostname-to-IP mapping on the primary will automatically be propagated.

Now here's the important part. The nameservers for a domain do not have to know the locations of all the machines in other domains (including their own subdomains); they only have to know the location of the nameservers. In our example, the authoritative name server for the .com domain knows the IP address of the nameserver for .google.com but not the address of all the other machines in .google.com.

The domains in the DNS system are arranged like a big inverted tree. At the top are the root servers. Everybody knows the IP addresses of the root servers; they're wired into your DNS software. The root servers know the IP addresses of the nameservers for the top-level domains like .com and .org, but not the addresses of machines inside those domains. Each top-level domain server knows where the nameservers for the domains directly beneath it are, and so forth.

DNS is carefully designed so that each machine can get away with the minimum amount of knowledge it needs to have about the shape of the tree, and local changes to subtrees can be made simply by changing one authoritative server's database of name-to-IP-address mappings.

When we query for the IP address of www.google.com, what actually happens is this: first, your nameserver asks a root server to tell it where it can find a nameserver for .com. Once it knows that, it then asks the .com server to tell it the IP address of a .google.com nameserver. Once it has that, it asks the .google.com nameserver to tell it the address of the host www.google.com.

Most of the time, the nameserver doesn't actually have to work that hard. Nameservers do a lot of caching; when yours resolves a hostname, it keeps the association with the resulting IP address around in memory for a while. This is why, when you surf to a new website, you'll usually only see a message from your browser about "Looking up" the host for the first page you fetch. Eventually the name-to-address mapping expires and DNS has to re-query — this is important so we don't have invalid information hanging around forever when a hostname changes addresses. A cached IP address for a site is also thrown out if the host is unreachable.
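
All of that machinery hides behind a single resolver call in practice. A short Python sketch, using only the standard library; the printed address will vary as the name-to-address mapping changes:

import socket

# One call hands the whole recursive lookup to the local resolver,
# which walks root -> .com -> google.com (or answers from its cache).
print(socket.gethostbyname("www.google.com"))

# getaddrinfo returns every address a name maps to.
for *_, sockaddr in socket.getaddrinfo("www.google.com", 80, proto=socket.IPPROTO_TCP):
    print(sockaddr)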

Names and Locations

The first thing the browser has to do is to establish a network connection to the machine where the document lives. To do that, it first has to find the network location of the host www.google.com (‘host’ is short for ‘host machine’ or ‘network host’; www.google.com is a typical hostname). The corresponding location is actually a number called an IP address (we'll explain the ‘IP’ part of this term later).

To do this, the browser queries a program called a name server. The name server may live on your machine, but it's more likely to run on a service machine that yours talks to. When you sign up with an ISP, part of your setup procedure will almost certainly involve telling the Internet software the IP address of a nameserver on the ISP's network.

The name servers on different machines talk to each other, exchanging and keeping up to date all the information needed to resolve hostnames (map them to IP addresses). The nameserver may query three or four different sites across the network in the process of resolving www.google.com, but this usually happens very quickly (as in less than a second). The nameserver will tell the browser that www.google.com's IP address is 64.233.167.99; knowing this, the machine will be able to exchange bits with www.google.com directly.

Wednesday, February 6, 2008

Disadvantages of Extranet

  1. Extranets can be expensive to implement and maintain within an organization (e.g., hardware, software, and employee training costs) — if hosted internally instead of via an ASP.
  2. Security of extranets can be a big concern when dealing with valuable information. System access needs to be carefully controlled to avoid sensitive information falling into the wrong hands.
  3. Extranets can reduce personal contact (face-to-face meetings) with customers and business partners. This could cause a lack of connections made between people and a company, which hurts the business when it comes to loyalty of its business partners and customers.

Extranet: Industry Uses

During the late 1990s and early 2000s, several industries started to use the term "extranet" to describe central repositories of shared data made accessible via the web only to authorized members of particular work groups.

For example, in the construction industry, project teams could log in to and access a 'project extranet' to share drawings and documents, make comments, issue requests for information, and so on. In 2003 in the United Kingdom, several of the leading vendors formed the Network of Construction Collaboration Technology Providers, or NCCTP, to promote the technologies and to establish data exchange standards between the different systems. The same types of construction-focused technologies have also been developed in the United States, Australia, Scandinavia, Germany and Belgium, among others. Some applications are offered on a Software as a Service (SaaS) basis by vendors functioning as Application Service Providers (ASPs).

Specially secured extranets are used to provide virtual data room services to companies in several sectors (including law and accountancy).

There are a variety of commercial extranet applications, some of which are for pure file management, and others which include broader collaboration and project management tools. A variety of open-source extranet applications and modules also exist, and these can be integrated into other online collaborative applications such as Content Management Systems. Companies can use an extranet to:

  • Exchange large volumes of data using Electronic Data Interchange (EDI)
  • Share product catalogs exclusively with wholesalers or those "in the trade"
  • Collaborate with other companies on joint development efforts
  • Jointly develop and use training programs with other companies
  • Provide or access services provided by one company to a group of other companies, such as an online banking application managed by one company on behalf of affiliated banks
  • Share news of common interest exclusively with partner companies

Extranet

An extranet is a private network that uses Internet protocols, network connectivity, and possibly the public telecommunication system to securely share part of an organization's information or operations with suppliers, vendors, partners, customers or other businesses. An extranet can be viewed as the part of a company's intranet that is extended to users outside the company, normally over the Internet. It has also been described as a "state of mind" in which the Internet is perceived as a way to do business with a preapproved set of other companies, business-to-business (B2B), in isolation from all other Internet users. In contrast, business-to-consumer (B2C) involves known server(s) of one or more companies communicating with previously unknown consumer users.

Briefly, an extranet can be understood as a private intranet mapped onto the Internet or some other transmission system not accessible to the general public, but is managed by more than one company's administrator(s). For example, military networks of different security levels may map onto a common military radio transmission system that never connects to the Internet. Any private network mapped onto a public one is a virtual private network (VPN). In contrast, an intranet is a VPN under the control of a single company's administrator(s).

An argument has been made that "extranet" is just a buzzword for describing what institutions have been doing for decades, that is, interconnecting to each other to create private networks for sharing information. One of the differences that characterizes an extranet, however, is that its interconnections are over a shared network rather than through dedicated physical lines. With respect to Internet Protocol networks, RFC 2547 states: "If all the sites in a VPN are owned by the same enterprise, the VPN is a corporate intranet. If the various sites in a VPN are owned by different enterprises, the VPN is an extranet. A site can be in more than one VPN; e.g., in an intranet and several extranets. We regard both intranets and extranets as VPNs. In general, when we use the term VPN we will not be distinguishing between intranets and extranets." Even if this argument is valid, the term "extranet" is still applied and can be used to eliminate the use of the above description.

Another very common use of the term "extranet" is to designate the "private part" of a website, where "registered users" can navigate, enabled by authentication mechanisms on a "login page".

An extranet requires security and privacy. Measures can include firewalls, server management, the issuance and use of digital certificates or similar means of user authentication, encryption of messages, and the use of virtual private networks (VPNs) that tunnel through the public network.

Many technical specifications describe methods of implementing extranets, but rarely define an extranet explicitly. RFC 3547 presents requirements for remote access to extranets. RFC 2709 discusses extranet implementation using IPsec and advanced network address translation (NAT).

Sunday, February 3, 2008

Today's Internet

Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is essentially defined by its interconnections and routing policies.

As of September 30, 2007, 1.244 billion people use the Internet according to Internet World Stats. Writing in the Harvard International Review, philosopher N. J. Slabbert, a writer on policy issues for the Washington, D.C.-based Urban Land Institute, has asserted that the Internet is fast becoming a basic feature of global civilization, so that what has traditionally been called "civil society" is now becoming identical with information-technology society as defined by Internet use. Some suggest, however, that as little as 2% of the world's population regularly accesses the Internet.

Internet

The Internet is a worldwide, publicly accessible series of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a "network of networks" that consists of millions of smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked web pages and other resources of the World Wide Web (WWW).

History

The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency, known as ARPA, in February 1958 to regain a technological lead. ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO, and saw universal networking as a potential unifying human revolution.

Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.

At the IPTO, Licklider recruited Lawrence Roberts to head a project to implement a network. Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the U.S. Air Force recommending packet switching (as opposed to circuit switching) to make a network highly robust and survivable. After much work, the first two nodes of what would become the ARPANET were interconnected between UCLA and SRI International in Menlo Park, California, on October 29, 1969. The ARPANET was one of the "eve" networks of today's Internet.

Following the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service. In the UK, this was referred to as the International Packet Stream Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet-switching standard was developed in the CCITT (now called ITU-T) around 1976.

X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same time period. Vinton Cerf and Robert Kahn developed the first description of the TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP, written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems.

The first TCP/IP wide area network was made operational on January 1, 1983, when all hosts on the ARPANET were switched over from the older NCP protocols to TCP/IP. In 1985, the United States' National Science Foundation (NSF) commissioned the construction of a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the development of a higher-speed 1.5 megabit/second backbone that became the NSFNet. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF.

The opening of the network to commercial interests began in 1988. The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year, and the link was made in the summer of 1989. Other commercial electronic mail services were soon connected, including OnTyme, Telemail and CompuServe. In that same year, three commercial Internet Service Providers were created: UUNET, PSINet and CERFNET. Important separate networks that offered gateways into, and later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet, Tymnet, CompuServe and JANET, were interconnected with the growing Internet. Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network was eventually interconnected with the others in the 1980s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over virtually any pre-existing communication network allowed for great ease of growth, although the rapid growth of the Internet was due primarily to the availability of commercial routers from companies such as Cisco Systems, Proteon and Juniper, the availability of commercial Ethernet equipment for local area networking, and the widespread implementation of TCP/IP on the UNIX operating system.

Although the basic applications and guidelines that make the Internet possible had existed for almost a decade, the network did not gain a public face until the 1990s. On August 6, 1991, CERN, which straddles the border between France and Switzerland, publicized the new World Wide Web project. The Web was invented by English scientist Tim Berners-Lee in 1989.

An early popular web browser was ViolaWWW based upon HyperCard. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic/technical Internet. By 1996 usage of the word "Internet" had become commonplace, and consequently, so had its misuse as a reference to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100% per year, with a brief period of explosive growth in 1996 and 1997. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.

New findings in the field of communications during the 1960s, 1970s and 1980s were quickly adopted by universities across the United States.

Examples of early university Internet communities are Cleveland FreeNet, Blacksburg Electronic Village and NSTN in Nova Scotia. Students took up the opportunity of free communications and saw this new phenomenon as a tool of liberation. Personal computers and the Internet would free them from corporations and governments (Nelson, Jennings, Stallman).

Graduate students played a huge part in the creation of ARPANET. In the 1960s, the Network Working Group, which did most of the design of ARPANET's protocols, was composed mainly of graduate students.

Monday, January 28, 2008

Planning and Creating an Intranet

Most organizations devote considerable resources to the planning and implementation of their intranet, as it is of strategic importance to the organization's success. Planning would include topics such as:

  • What they hope to achieve from the intranet
  • Which person or department will "own" (take control of) the technology and the implementation
  • How and when existing systems will be phased out or replaced
  • How they intend to make the intranet secure
  • How they will keep it within legislative and other constraints
  • The level of interactivity (e.g. wikis, on-line forms) desired
  • Whether the input of new data and the updating of existing data will be centrally controlled or devolved

These considerations are in addition to the hardware and software decisions (like Content Management Systems), participation issues (like good taste, harassment, confidentiality), and the features to be supported.

The actual implementation would include steps such as:

  1. User involvement to identify users' information needs.
  2. Setting up a web server with the correct hardware and software.
  3. Setting up web server access using a TCP/IP network.
  4. Installing the user programs on all required computers.
  5. Creating a homepage for the content to be hosted.
  6. User involvement in testing and promoting use of the intranet.

Advantages of Intranet

  1. Workforce productivity: Intranets can help users locate and view information faster and use applications relevant to their roles and responsibilities. With the help of a web browser interface, users can access data held in any database the organization wants to make available, anytime and - subject to security provisions - from any workstation within the company, increasing employees' ability to perform their jobs faster, more accurately, and with confidence that they have the right information. It also helps to improve the services provided to users.
  2. Time: With intranets, organizations can make more information available to employees on a "pull" basis (i.e., employees can link to relevant information at a time that suits them) rather than being deluged indiscriminately by e-mails.
  3. Communication: Intranets can serve as powerful tools for communication within an organization, vertically and horizontally. From a communications standpoint, intranets are useful to communicate strategic initiatives that have a global reach throughout the organization. The type of information that can easily be conveyed is the purpose of the initiative and what the initiative is aiming to achieve, who is driving the initiative, results achieved to date, and who to speak to for more information. By providing this information on the intranet, staff have the opportunity to keep up-to-date with the strategic focus of the organization.
  4. Web publishing allows 'cumbersome' corporate knowledge to be maintained and easily accessed throughout the company using hypermedia and Web technologies. Employee manuals, benefits documents, company policies, business standards, news feeds, and even training materials can all be accessed using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is always available to employees using the intranet.
  5. Business operations and management: Intranets are also being used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise.
  6. Cost-effective: Users can view information and data via a web browser rather than maintaining physical documents such as procedure manuals, internal phone lists, and requisition forms.
  7. Promote common corporate culture: Every user is viewing the same information within the Intranet.
  8. Enhance Collaboration: With information easily accessible by all authorised users, teamwork is enabled.
  9. Cross-platform Capability: Standards-compliant web browsers are available for Windows, Mac, and UNIX.

About Intranets

Intranets differ from extranets in that the former are generally restricted to employees of the organization, while extranets can generally be accessed by customers, suppliers, or other approved parties. There does not necessarily have to be any access from the organization's internal network to the Internet itself. When such access is provided, it is usually through a gateway with a firewall, along with user authentication and encryption of messages, and often makes use of virtual private networks (VPNs). Through such devices and systems, off-site employees can access company information, computing resources and internal communications.

Increasingly, intranets are being used to deliver tools and applications, e.g., collaboration (to facilitate working in groups and teleconferencing) or sophisticated corporate directories, sales and CRM tools, project management etc., to advance productivity. Intranets are also being used as culture change platforms. For example, large numbers of employees discussing key issues in an online forum could lead to new ideas. Intranet traffic, like public-facing web site traffic, is better understood by using web metrics software to track overall activity, as well as through surveys of users.

Intranet "User Experience", "Editorial", and "Technology" teams work together to produce in-house sites. Most commonly, intranets are owned by the communications, HR or CIO areas of large organizations, or some combination of the three. Because of the scope and variety of content and the number of system interfaces, the intranets of many organisations are much more complex than their respective public websites. And intranets are growing rapidly. According to the Intranet design annual 2007 from Nielsen Norman Group the number of pages on participants' intranets averaged 200,000 over the years 2001 to 2003 and has grown to an average of 6 million pages over 2005–2007.

Intranet

An intranet is a private computer network that uses Internet protocols and network connectivity to securely share part of an organization's information or operations with its employees. Sometimes the term refers only to the most visible service, the internal website. The same concepts and technologies of the Internet such as clients and servers running on the Internet protocol suite are used to build an intranet. HTTP and other Internet protocols are commonly used as well, such as FTP. There is often an attempt to use Internet technologies to provide new interfaces with corporate "legacy" data and information systems. Briefly, an intranet can be understood as "a private version of an Internet," or as a version of the Internet confined to an organization.

Sunday, January 27, 2008

What is Telecommunication?

Telecommunication is the assisted transmission of signals over a distance for the purpose of communication. In earlier times, this may have involved the use of smoke signals, drums, semaphore, flags, or heliograph. In modern times, telecommunication typically involves the use of electronic transmitters such as the telephone, television, radio or computer. Early inventors in the field of telecommunication include Alexander Graham Bell, Guglielmo Marconi and John Logie Baird. Telecommunication is an important part of the world economy and the telecommunication industry's revenue has been placed at just under 3 percent of the gross world product.

A telecommunication system consists of three basic elements:

  • a transmitter that takes information and converts it to a signal;
  • a transmission medium that carries the signal; and,
  • a receiver that receives the signal and converts it back into usable information.

For example, in a radio broadcast the broadcast tower is the transmitter, free space is the transmission medium and the radio is the receiver. Often telecommunication systems are two-way with a single device acting as both a transmitter and receiver or transceiver. For example, a mobile phone is a transceiver.

Telecommunication over a phone line is called point-to-point communication because it is between one transmitter and one receiver. Telecommunication through radio broadcasts is called broadcast communication because it is between one powerful transmitter and numerous receivers.

Signals can be either analogue or digital. In an analogue signal, the signal is varied continuously with respect to the information. In a digital signal, the information is encoded as a set of discrete values (for example ones and zeros). During transmission the information contained in analogue signals will be degraded by noise. Conversely, unless the noise exceeds a certain threshold, the information contained in digital signals will remain intact. This noise resistance represents a key advantage of digital signals over analogue signals.

A collection of transmitters, receivers or transceivers that communicate with each other is known as a network. Digital networks may consist of one or more routers that route information to the correct user. An analogue network may consist of one or more switches that establish a connection between two or more users. For both types of network, repeaters may be necessary to amplify or recreate the signal when it is being transmitted over long distances. This is to combat attenuation that can render the signal indistinguishable from noise.

A channel is a division in a transmission medium so that it can be used to send multiple streams of information. For example, a radio station may broadcast at 96.1 MHz while another radio station may broadcast at 94.5 MHz. In this case, the medium has been divided by frequency and each channel has received a separate frequency to broadcast on. Alternatively, one could allocate each channel a recurring segment of time over which to broadcast — this is known as time-division multiplexing and is sometimes used in digital communication.
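
As a toy illustration of time-division multiplexing, the following Python sketch interleaves two made-up streams into repeating frames of one slot per stream; real multiplexers fill empty slots with an idle pattern, here represented by None.

from itertools import chain, zip_longest

def tdm_frames(*streams, idle=None):
    # Interleave streams into repeating frames, one slot per stream.
    # Slot i of every frame always belongs to stream i.
    return list(chain.from_iterable(zip_longest(*streams, fillvalue=idle)))

voice = ["v1", "v2", "v3"]
data = ["d1", "d2"]
print(tdm_frames(voice, data))
# ['v1', 'd1', 'v2', 'd2', 'v3', None]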

The shaping of a signal to convey information is known as modulation. Modulation can be used to represent a digital message as an analogue waveform. This is known as keying and several keying techniques exist (these include phase-shift keying, frequency-shift keying and amplitude-shift keying). Bluetooth, for example, uses phase-shift keying to exchange information between devices.

Modulation can also be used to transmit the information of analogue signals at higher frequencies. This is helpful because low-frequency analogue signals cannot be effectively transmitted over free space. Hence the information from a low-frequency analogue signal must be superimposed on a higher-frequency signal (known as a carrier wave) before transmission. There are several different modulation schemes available to achieve this (two of the most basic being amplitude modulation and frequency modulation). An example of this process is a DJ's voice being superimposed on a 96 MHz carrier wave using frequency modulation (the voice would then be received on a radio as the channel “96 FM”).
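
As a worked example, this Python sketch computes samples of an amplitude-modulated waveform, the simpler of the two basic schemes just mentioned; the 1 kHz tone standing in for the DJ's voice, the 96 MHz carrier, and the modulation depth are all illustrative choices.

import math

def am_sample(t, f_carrier=96e6, f_audio=1e3, depth=0.5):
    # One sample of an AM waveform: (1 + depth * x(t)) * cos(2*pi*fc*t).
    # A 1 kHz tone stands in for the low-frequency signal.
    audio = math.sin(2 * math.pi * f_audio * t)
    return (1 + depth * audio) * math.cos(2 * math.pi * f_carrier * t)

# The envelope of the high-frequency carrier traces the audio signal.
for n in range(4):
    t = n / 384e6                            # sample at 4x the carrier frequency
    print("t = %.2e s, s(t) = %+.3f" % (t, am_sample(t)))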

Telecommunication is an important part of modern society. In 2006, estimates placed the telecommunication industry's revenue at $1.2 trillion or just under 3% of the gross world product (official exchange rate).

On the microeconomic scale, companies have used telecommunication to help build global empires. This is self-evident in the case of online retailer Amazon.com but, according to academic Edward Lenert, even the conventional retailer Wal-Mart has benefited from better telecommunication infrastructure compared to its competitors. In cities throughout the world, home owners use their telephones to organize many home services ranging from pizza deliveries to electricians. Even relatively poor communities have been noted to use telecommunication to their advantage. In Bangladesh's Narshingdi district, isolated villagers use cell phones to speak directly to wholesalers and arrange a better price for their goods. In Côte d'Ivoire, coffee growers share mobile phones to follow hourly variations in coffee prices and sell at the best price.

On the macroeconomic scale, Lars-Hendrik Röller and Leonard Waverman suggested a causal link between good telecommunication infrastructure and economic growth. Few dispute the existence of a correlation although some argue it is wrong to view the relationship as causal.

Due to the economic benefits of good telecommunication infrastructure, there is increasing worry about the digital divide. This is because the world's population does not have equal access to telecommunication systems. A 2003 survey by the International Telecommunication Union (ITU) revealed that roughly one-third of countries have less than 1 mobile subscription for every 20 people and one-third of countries have less than 1 fixed line subscription for every 20 people. In terms of Internet access, roughly half of all countries have less than 1 in 20 people with Internet access. From this information, as well as educational data, the ITU was able to compile an index that measures the overall ability of citizens to access and use information and communication technologies. Using this measure, Sweden, Denmark and Iceland received the highest ranking while the African countries Niger, Burkina Faso and Mali received the lowest.

[Image: A replica of one of Chappe's semaphore towers.]

Early forms of telecommunication include smoke signals and drums. Drums were used by natives in Africa, New Guinea and South America whereas smoke signals were used by natives in North America and China. Contrary to what one might think, these systems were often used to do more than merely announce the presence of a camp.

In the Middle Ages, chains of beacons were commonly used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could only pass a single bit of information, so the meaning of the message such as "The enemy has been sighted" had to be agreed upon in advance. One notable instance of their use was during the Spanish Armada, when a beacon chain relayed a signal from Plymouth to London.

In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system (or semaphore line) between Lille and Paris. However, semaphore suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometres (six to nineteen miles). As a result of competition from the electrical telegraph, the last commercial line was abandoned in 1880.

The first commercial electrical telegraph was constructed by Sir Charles Wheatstone and Sir William Fothergill Cooke and opened on 9 April 1839. Both Wheatstone and Cooke viewed their device as "an improvement to the [existing] electromagnetic telegraph", not as a new device.

Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837. His code was an important advance over Wheatstone's signaling method. The first transatlantic telegraph cable was successfully completed on 27 July 1866, allowing transatlantic telecommunication for the first time.

The conventional telephone was invented independently by Alexander Bell and Elisha Gray in 1876.[23] Antonio Meucci had invented a device that allowed the electrical transmission of voice over a line as early as 1849. However, Meucci's device was of little practical value because it relied upon the electrophonic effect and thus required users to place the receiver in their mouth to “hear” what was being said.[24] The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven and London.

In 1832, James Lindsay gave a classroom demonstration of wireless telegraphy to his students. By 1854, he was able to demonstrate a transmission across the Firth of Tay from Dundee, Scotland to Woodhaven, a distance of two miles, using water as the transmission medium. In December 1901, Guglielmo Marconi established wireless communication between St. John's, Newfoundland (Canada) and Poldhu, Cornwall (England), earning him the 1909 Nobel Prize in Physics (which he shared with Karl Braun). However, small-scale radio communication had already been demonstrated in 1893 by Nikola Tesla in a presentation to the National Electric Light Association.

On March 25, 1925, John Logie Baird was able to demonstrate the transmission of moving pictures at the London department store Selfridges. Baird's device relied upon the Nipkow disk and thus became known as the mechanical television. It formed the basis of experimental broadcasts done by the British Broadcasting Corporation beginning September 30, 1929. However, for most of the twentieth century televisions depended upon the cathode ray tube invented by Karl Braun. The first version of such a television to show promise was produced by Philo Farnsworth and demonstrated to his family on September 7, 1927.

On September 11, 1940, George Stibitz was able to transmit problems using teletype to his Complex Number Calculator in New York and receive the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer or mainframe with remote dumb terminals remained popular throughout the 1950s. However, it was not until the 1960s that researchers started to investigate packet switching — a technology that would allow chunks of data to be sent to different computers without first passing through a centralized mainframe. A four-node network emerged on December 5, 1969; this network would become ARPANET, which by 1981 would consist of 213 nodes.

ARPANET's development centred on the Request for Comment process and on April 7, 1969, RFC 1 was published. This process is important because ARPANET would eventually merge with other networks to form the Internet and many of the protocols the Internet relies upon today were specified through the Request for Comment process. In September 1981, RFC 791 introduced the Internet Protocol v4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP), thus creating the TCP/IP protocol suite that much of the Internet relies upon today.

However, not all important developments were made through the Request for Comment process. Two popular link protocols for local area networks (LANs) also appeared in the 1970s. A patent for the token ring protocol was filed by Olof Soderblom on October 29, 1974 and a paper on the Ethernet protocol was published by Robert Metcalfe and David Boggs in the July 1976 issue of Communications of the ACM.

[Image: Optical fibre provides cheaper bandwidth for long-distance communication.]

In an analogue telephone network, the caller is connected to the person they want to talk to by switches at various telephone exchanges. The switches form an electrical connection between the two users and the setting of these switches is determined electronically when the caller dials the number. Once the connection is made, the caller's voice is transformed to an electrical signal using a small microphone in the caller's handset. This electrical signal is then sent through the network to the user at the other end where it is transformed back into sound by a small speaker in that person's handset. There is a separate electrical connection that works in reverse, allowing the users to converse.

The fixed-line telephones in most residential homes are analogue — that is, the speaker's voice directly determines the signal's voltage. Although short-distance calls may be handled from end-to-end as analogue signals, increasingly telephone service providers are transparently converting the signals to digital for transmission before converting them back to analogue for reception. The advantage of this is that digitized voice data can travel side-by-side with data from the Internet and can be perfectly reproduced in long distance communication (as opposed to analogue signals that are inevitably impacted by noise).
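
Real telephone networks digitize voice as 8-bit companded PCM at 8,000 samples per second; the sketch below skips the companding and shows plain uniform quantization, which is enough to see why the digits survive a long journey intact.

```python
import numpy as np

BITS = 8
LEVELS = 2 ** (BITS - 1) - 1    # 127 levels for 8-bit signed samples

def digitize(signal):
    """Quantize samples in [-1, 1] to integers (uniform PCM; real
    telephony also applies mu-law or A-law companding)."""
    return np.round(signal * LEVELS).astype(int)

def reconstruct(codes):
    return codes / LEVELS

t = np.linspace(0, 0.001, 8)             # eight samples of a voice snippet
voice = np.sin(2 * np.pi * 1000 * t)     # a 1 kHz test tone
codes = digitize(voice)
# The integer codes can cross any number of digital links unchanged, so
# the far end recovers the waveform exactly (up to the quantization step).
restored = reconstruct(codes)
assert np.max(np.abs(restored - voice)) <= 1 / LEVELS
```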

Mobile phones have had a significant impact on telephone networks. Mobile phone subscriptions now outnumber fixed-line subscriptions in many markets. Sales of mobile phones in 2005 totalled 816.6 million with that figure being almost equally shared amongst the markets of Asia/Pacific (204 m), Western Europe (164 m), CEMEA (Central Europe, the Middle East and Africa) (153.5 m), North America (148 m) and Latin America (102 m). In terms of new subscriptions over the five years from 1999, Africa has outpaced other markets with 58.2% growth. Increasingly these phones are being serviced by systems where the voice content is transmitted digitally, such as GSM or W-CDMA, with many markets choosing to deprecate analogue systems such as AMPS.

There have also been dramatic changes in telephone communication behind the scenes. Starting with the operation of TAT-8 in 1988, the 1990s saw the widespread adoption of systems based on optic fibres. The benefit of communicating with optic fibres is that they offer a drastic increase in data capacity. TAT-8 itself was able to carry 10 times as many telephone calls as the last copper cable laid at that time and today's optic fibre cables are able to carry 25 times as many telephone calls as TAT-8. This increase in data capacity is due to several factors: First, optic fibres are physically much smaller than competing technologies. Second, they do not suffer from crosstalk, which means several hundred of them can be easily bundled together in a single cable. Lastly, improvements in multiplexing have led to an exponential growth in the data capacity of a single fibre.

Assisting communication across many modern optic fibre networks is a protocol known as Asynchronous Transfer Mode (ATM). The ATM protocol allows for the side-by-side data transmission mentioned above. It is suitable for public telephone networks because it establishes a pathway for data through the network and associates a traffic contract with that pathway. The traffic contract is essentially an agreement between the client and the network about how the network is to handle the data; if the network cannot meet the conditions of the traffic contract it does not accept the connection. This is important because telephone calls can negotiate a contract so as to guarantee themselves a constant bit rate, ensuring that a caller's voice is not delayed in parts or cut off completely. There are competitors to ATM, such as Multiprotocol Label Switching (MPLS), that perform a similar task and are expected to supplant ATM in the future.
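
A toy model of the admission decision, assuming a single link with a fixed capacity (the class and figures are invented for illustration, and real ATM contracts specify far more than a constant bit rate):

```python
class Link:
    """Toy ATM-style admission control: a connection is accepted only
    if the network can still honour its requested traffic contract."""

    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.committed = 0

    def request(self, cbr_kbps):
        if self.committed + cbr_kbps > self.capacity:
            return False          # contract refused; no degraded service
        self.committed += cbr_kbps
        return True               # the requested bit rate is now guaranteed

link = Link(capacity_kbps=128)
print(link.request(64))   # True  -- e.g. one uncompressed voice call
print(link.request(64))   # True
print(link.request(64))   # False -- accepting it would break the contracts
```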

[Map: Digital television standards and their adoption worldwide.]

In a broadcast system, a central high-powered broadcast tower transmits a high-frequency electromagnetic wave to numerous low-powered receivers. The high-frequency wave sent by the tower is modulated with a signal containing visual or audio information. The antenna of the receiver is then tuned so as to pick up the high-frequency wave and a demodulator is used to retrieve the signal containing the visual or audio information. The broadcast signal can be either analogue (signal is varied continuously with respect to the information) or digital (information is encoded as a set of discrete values).

The broadcast media industry is at a critical turning point in its development, with many countries moving from analogue to digital broadcasts. This move is made possible by the production of cheaper, faster and more capable integrated circuits. The chief advantage of digital broadcasts is that they avoid a number of the problems associated with traditional analogue broadcasts. For television, this includes the elimination of problems such as snowy pictures, ghosting and other distortion. These occur because of the nature of analogue transmission, which means that perturbations due to noise will be evident in the final output. Digital transmission overcomes this problem because digital signals are reduced to discrete values upon reception and hence small perturbations do not affect the final output. In a simplified example, if a binary message 1011 were transmitted with signal amplitudes [1.0 0.0 1.0 1.0] and received with signal amplitudes [0.9 0.2 1.1 0.9] it would still decode to the binary message 1011, a perfect reproduction of what was sent. This example also reveals a problem with digital transmissions: if the noise is great enough it can significantly alter the decoded message. Using forward error correction a receiver can correct a handful of bit errors in the resulting message but too much noise will lead to incomprehensible output and hence a breakdown of the transmission.
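
The paragraph's worked example can be written as a few lines of Python, with an invented 0.5 decision threshold; the second message shows how excessive noise flips a bit:

```python
def decode(amplitudes, threshold=0.5):
    """Reduce each received amplitude to the nearest discrete value."""
    return [1 if a >= threshold else 0 for a in amplitudes]

print(decode([0.9, 0.2, 1.1, 0.9]))  # [1, 0, 1, 1] -- 1011 recovered exactly
print(decode([0.9, 0.6, 1.1, 0.9]))  # [1, 1, 1, 1] -- too much noise on bit 2
```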

In digital television broadcasting, there are three competing standards that are likely to be adopted worldwide. These are the ATSC, DVB and ISDB standards; the adoption of these standards thus far is presented in the map captioned above. All three standards use MPEG-2 for video compression. ATSC uses Dolby Digital AC-3 for audio compression, ISDB uses Advanced Audio Coding (MPEG-2 Part 7) and DVB has no standard for audio compression but typically uses MPEG-1 Part 3 Layer 2. The choice of modulation also varies between the schemes. In digital audio broadcasting, standards are much more unified, with practically all countries choosing to adopt the Digital Audio Broadcasting standard (also known as the Eureka 147 standard). The exception is the United States, which has chosen to adopt HD Radio. HD Radio, unlike Eureka 147, is based upon a transmission method known as in-band on-channel transmission that allows digital information to "piggyback" on normal AM or FM analogue transmissions.

However, despite the pending switch to digital, analogue receivers remain widespread. Analogue television is still transmitted in practically all countries. The United States had hoped to end analogue broadcasts on December 31, 2006; however, this was recently pushed back to February 17, 2009.[54] For analogue television, there are three standards in use: PAL, NTSC and SECAM. For analogue radio, the switch to digital is made more difficult by the fact that analogue receivers are a fraction of the cost of digital receivers.[55][56] The choice of modulation for analogue radio is typically between amplitude modulation (AM) or frequency modulation (FM). To achieve stereo playback, an amplitude-modulated subcarrier is used for stereo FM.

[Image: The OSI reference model.]

The Internet is a worldwide network of computers and computer networks that can communicate with each other using the Internet Protocol.[57] Any computer on the Internet has a unique IP address that can be used by other computers to route information to it. Hence, any computer on the Internet can send a message to any other computer using its IP address. These messages carry with them the originating computer's IP address allowing for two-way communication. In this way, the Internet can be seen as an exchange of messages between computers.

An estimated 16.9% of the world population has access to the Internet with the highest access rates (measured as a percentage of the population) in North America (69.7%), Oceania/Australia (53.5%) and Europe (38.9%). In terms of broadband access, Iceland (26.7%), South Korea (25.4%) and the Netherlands (25.3%) lead the world.

The Internet works in part because of protocols that govern how the computers and routers communicate with each other. The nature of computer network communication lends itself to a layered approach where individual protocols in the protocol stack run more-or-less independently of other protocols. This allows lower-level protocols to be customized for the network situation while not changing the way higher-level protocols operate. A practical example of why this is important: it allows an Internet browser to run the same code regardless of whether the computer it is running on is connected to the Internet through an Ethernet or Wi-Fi connection. Protocols are often talked about in terms of their place in the OSI reference model (pictured above), which emerged in 1983 as the first step in an unsuccessful attempt to build a universally adopted networking protocol suite.
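
A toy illustration of the layering idea, with invented header formats: each layer simply wraps whatever the layer above produced, so swapping the bottom layer (Wi-Fi for Ethernet) leaves the browser's output untouched.

```python
def http_request(path):                      # application layer
    return f"GET {path} HTTP/1.1\r\n\r\n"

def tcp_segment(payload, dst_port=80):       # transport layer adds a port
    return f"TCP|dst_port={dst_port}|{payload}"

def ip_packet(payload, dst_ip):              # network layer adds an address
    return f"IP|dst={dst_ip}|{payload}"

def ethernet_frame(payload, dst_mac):        # data link layer; a Wi-Fi
    return f"ETH|dst={dst_mac}|{payload}"    # framing could replace this

frame = ethernet_frame(
    ip_packet(tcp_segment(http_request("/")), "72.14.207.99"),
    "aa:bb:cc:dd:ee:ff",
)
print(frame)  # ETH|...|IP|...|TCP|...|GET / HTTP/1.1
```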

For the Internet, the physical medium and data link protocol can vary several times as packets traverse the globe. This is because the Internet places no constraints on what physical medium or data link protocol is used. This leads to the adoption of media and protocols that best suit the local network situation. In practice, most intercontinental communication will use the Asynchronous Transfer Mode (ATM) protocol (or a modern equivalent) on top of optic fibre. This is because for most intercontinental communication the Internet shares the same infrastructure as the public switched telephone network.

At the network layer, things become standardized with the Internet Protocol (IP) being adopted for logical addressing. For the world wide web, these “IP addresses” are derived from the human-readable form using the Domain Name System (e.g. 72.14.207.99 is derived from www.google.com). At the moment, the most widely used version of the Internet Protocol is version four but a move to version six is imminent.
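
Resolving a name to an address takes a single call to Python's standard library, as in this sketch; the address returned today will of course differ from the 72.14.207.99 of the example above.

```python
import socket

# Ask the Domain Name System to translate a host name into an IP address.
address = socket.gethostbyname("www.google.com")
print(address)  # e.g. '142.250.80.104' -- varies with time and location
```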

At the transport layer, most communication adopts either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). TCP is used when it is essential that every message sent is received by the other computer, whereas UDP is used when it is merely desirable. With TCP, packets are retransmitted if they are lost and placed in order before they are presented to higher layers. With UDP, packets are not ordered or retransmitted if lost. Both TCP and UDP packets carry port numbers with them to specify what application or process the packet should be handled by. Because certain application-level protocols use certain ports, network administrators can restrict Internet access by blocking the traffic destined for a particular port.
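
A minimal loopback sketch of UDP's behaviour using Python's socket module: the datagram carries a destination port (5005 here, arbitrarily chosen) so the operating system knows which process should receive it, but nothing is ordered or retransmitted.

```python
import socket

# A receiving process bound to port 5005 on the local machine.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 5005))

# A sending process: the port number in the address tells the OS
# which application the datagram is destined for.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 5005))

data, source = receiver.recvfrom(1024)  # fire-and-forget: no ordering,
print(data, source)                     # no retransmission if lost
```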

Above the transport layer, there are certain protocols that are sometimes used and loosely fit in the session and presentation layers, most notably the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. These protocols ensure that the data transferred between two parties remains confidential, and one or the other is in use when a padlock appears at the bottom of your web browser.[64] Finally, at the application layer are many of the protocols Internet users would be familiar with, such as HTTP (web browsing), POP3 (e-mail), FTP (file transfer), IRC (Internet chat), BitTorrent (file sharing) and OSCAR (instant messaging).
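
Wrapping a TCP connection in TLS with Python's standard ssl module looks roughly like this (the host name is chosen only for illustration); everything sent after the handshake is encrypted between the two parties.

```python
import socket
import ssl

context = ssl.create_default_context()

with socket.create_connection(("www.google.com", 443)) as raw_sock:
    # The TLS handshake happens inside wrap_socket; from here on the
    # application data is confidential between the two endpoints.
    with context.wrap_socket(raw_sock, server_hostname="www.google.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
```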

Despite the growth of the Internet, the characteristics of local area networks (computer networks that extend at most a few kilometres) remain distinct. This is because networks on this scale do not require all the features associated with larger networks and are often more cost-effective and efficient without them.

In the mid-1980s, several protocol suites emerged to fill the gap between the data link and application layers of the OSI reference model. These were AppleTalk, IPX and NetBIOS, with the dominant protocol suite during the early 1990s being IPX due to its popularity with MS-DOS users. TCP/IP existed at this point but was typically only used by large government and research facilities.[65] As the Internet grew in popularity and a larger percentage of traffic became Internet-related, local area networks gradually moved towards TCP/IP and today networks mostly dedicated to TCP/IP traffic are common. The move to TCP/IP was helped by technologies such as DHCP that allowed TCP/IP clients to discover their own network address, a functionality that came standard with the AppleTalk/IPX/NetBIOS protocol suites.

It is at the data link layer though that most modern local area networks diverge from the Internet. Whereas Asynchronous Transfer Mode (ATM) or Multiprotocol Label Switching (MPLS) are typical data link protocols for larger networks, Ethernet and Token Ring are typical data link protocols for local area networks. These protocols differ from the former in that they are simpler (e.g. they omit features such as Quality of Service guarantees) and handle contention for the shared medium cheaply (Ethernet detects collisions and retransmits, while Token Ring avoids them by passing a token). Both of these differences allow for more economical set-ups.

Despite the modest popularity of Token Ring in the 1980s and 1990s, virtually all local area networks now use wired or wireless Ethernet. At the physical layer, most wired Ethernet implementations use copper twisted-pair cables (including the common 10BASE-T networks). However, some early implementations used coaxial cables and some recent implementations (especially high-speed ones) use optic fibres. Optic fibres are also likely to feature prominently in the forthcoming 10-gigabit Ethernet implementations.[68] Where optic fibre is used, the distinction must be made between multi-mode fibre and single-mode fibre. Multi-mode fibre can be thought of as thicker optical fibre that is cheaper to manufacture but offers less usable bandwidth and suffers greater attenuation (i.e. poorer long-distance performance).