Networking and the Internet

Visualization of a portion of the routes on the Internet.

Computers have been used to coordinate information between multiple locations since the 1950s. The US military's SAGE system was the first large-scale example of such a system, and it led to a number of special-purpose commercial systems like Sabre.

In the 1970s, computer engineers at research institutions throughout the US began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. In the phrase of John Gage and Bill Joy (of Sun Microsystems), "the network is the computer". Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices and stored information, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally: a very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant that networking is increasingly ubiquitous even in mobile computing environments.

Multiprocessing

Cray designed many supercomputers that used heavy multiprocessing.

Some computers may divide their work among two or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was utilized only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[13] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

Multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer may return to that task later. If several programs are running "at the same time", then the interrupt generator may be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, many programs may seem to be running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run at the same time without unacceptable speed loss.
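
This time-slicing idea can be sketched in a few lines of Python. The program names, workloads and scheduling policy below are invented for illustration, and the scheduler is deliberately simplified: each runnable program gets one slice per pass, and a program waiting for I/O gives up its slice.

    # Toy round-robin scheduler (illustrative only).
    programs = [
        {"name": "editor",  "work_left": 3, "waiting_for_io": False},
        {"name": "spooler", "work_left": 2, "waiting_for_io": True},
        {"name": "clock",   "work_left": 4, "waiting_for_io": False},
    ]

    passes = 0
    while any(p["work_left"] > 0 for p in programs):
        for p in programs:
            if p["work_left"] == 0 or p["waiting_for_io"]:
                continue                        # finished or blocked: skip it
            p["work_left"] -= 1                 # run for one time slice
            print("slice ->", p["name"], "| work left:", p["work_left"])
        passes += 1
        if passes == 1:
            programs[1]["waiting_for_io"] = False   # pretend its I/O completed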

Input/output (I/O)

Hard disks are common I/O devices used with computers.

I/O is the means by which a computer receives information from the outside world and sends results back. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include inputs like the keyboard and mouse, and outputs such as the display and printer. Hard disks, floppy disks and optical discs serve as both inputs and outputs. Computer networking is another form of I/O.

Practically any device that can be made to interface digitally may be used as I/O. The computer in the Engine Control Unit of a modern automobile might read the position of the pedals and steering wheel, the output of the oxygen sensor and devices that monitor the speed of each wheel. The output devices include the various lights and gauges that the driver sees as well as the engine controls such as the spark ignition circuits and fuel injection systems. In a digital wristwatch, the computer reads the buttons and causes numbers and symbols to be shown on the liquid crystal display.

Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

Memory

Magnetic core memory was popular main memory for computers through the 1960s until it was completely replaced by semiconductor memory.

A computer's memory may be viewed as a list of cells into which numbers may be placed or read. Each cell has a numbered "address" and can store a single number. The computer may be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions may be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is up to the software to give significance to what the memory sees as nothing but a series of numbers.
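
The cell model can be imitated directly in Python; in this hypothetical illustration a list stands in for main memory and list indices stand in for cell addresses:

    # A toy "memory": list indices play the role of cell addresses.
    memory = [0] * 4096

    memory[1357] = 123                           # "put the number 123 into cell 1357"
    memory[2468] = 42
    memory[1595] = memory[1357] + memory[2468]   # add cells 1357 and 2468,
                                                 # put the answer into cell 1595
    print(memory[1595])                          # prints 165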

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers, either 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer may store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
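
These conventions can be inspected with Python's standard struct module; the values below are arbitrary examples:

    import struct

    # One byte holds 256 values: unsigned 0..255, or signed -128..+127
    # in two's complement notation.
    print(struct.pack("B", 255))             # b'\xff' (unsigned byte)
    print(struct.pack("b", -128))            # b'\x80' (signed byte, two's complement)

    # Larger numbers span several consecutive bytes (here, four).
    packed = struct.pack("<i", -1000000)     # little-endian 32-bit integer
    print(list(packed))                      # the four bytes representing -1000000
    print(struct.unpack("<i", packed)[0])    # back to -1000000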

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speeds are not required.

In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

Arithmetic/logic unit (ALU)

The ALU is capable of performing two classes of operations: arithmetic and logic.

The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting, or might include multiplying and dividing, trigonometric functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").
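
For instance, a machine whose ALU supports only addition can still multiply by breaking the operation into repeated additions, as in this illustrative Python sketch:

    def multiply(a, b):
        """Multiply two non-negative integers using only addition."""
        total = 0
        for _ in range(b):   # add a to the running total, b times
            total += a
        return total

    print(multiply(6, 7))    # 42, computed without a multiply instruction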

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and for processing Boolean logic.

Superscalar computers contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.

How computers work

Control unit
The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer. Control systems in advanced computers may change the order of some instructions so as to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.
The control system's function is as follows—note that this is a simplified description and some of these steps may be performed concurrently or in a different order depending on the type of CPU:

1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is yet another smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.
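
To make the cycle concrete, here is a hypothetical Python sketch of a toy CPU. The opcode numbers and the sample program are invented for illustration and correspond to no real instruction set:

    # A toy CPU implementing the fetch-decode-execute cycle. Memory holds
    # both the program and its data. Invented opcodes:
    #   1 = LOAD addr, 2 = ADD addr, 3 = STORE addr, 0 = HALT.
    memory = [1, 10, 2, 11, 3, 12, 0, 0, 0, 0, 5, 7, 0]  # program, then data

    pc = 0           # program counter: where the next instruction is read from
    accumulator = 0  # a single working register

    while True:
        opcode = memory[pc]          # fetch the instruction code...
        operand = memory[pc + 1]     # ...and the operand stored alongside it
        pc += 2                      # increment the program counter
        if opcode == 1:              # decode and execute
            accumulator = memory[operand]        # LOAD a cell into the register
        elif opcode == 2:
            accumulator += memory[operand]       # ADD a cell to the register
        elif opcode == 3:
            memory[operand] = accumulator        # STORE the register in a cell
        else:
            break                                # HALT
        # the loop now jumps back and fetches the next instruction

    print(memory[12])   # prints 12: the sum of the data in cells 10 and 11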

Sample Program

Example

A traffic light showing red.
Suppose a computer is being employed to drive a traffic light. A simple stored program might say:

1. Turn off all of the lights
2. Turn on the red light, wait for sixty seconds
3. Turn off the red light, turn on the green light, wait for sixty seconds
4. Turn off the green light, turn on the amber light, wait for two seconds
5. Turn off the amber light
6. Jump to instruction number (2)

With this set of instructions, the computer would cycle the light continually through red, green, amber and back to red again until told to stop running the program.

However, suppose there is a simple on/off switch connected to the computer that is intended to be used to make the light flash red while some maintenance operation is being performed. The program might then instruct the computer to:

1. Turn off all of the lights
2. Turn on the red light, wait for sixty seconds
3. Turn off the red light, turn on the green light, wait for sixty seconds
4. Turn off the green light, turn on the amber light, wait for two seconds
5. Turn off the amber light
6. If the maintenance switch is NOT turned on then jump to instruction number (2)
7. Turn on the red light, wait for one second
8. Turn off the red light, wait for one second
9. Jump to instruction number (6)

In this manner, the computer is either running the instructions from number (2) to (6) over and over, or running the instructions from (6) to (9) over and over, depending on the position of the switch.
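
A hypothetical Python rendering of this second program; the lights and the maintenance switch are simulated with print statements and an ordinary variable rather than real hardware:

    import time

    maintenance_switch_on = False   # stand-in for a hardware on/off switch;
                                    # set it to True to see the flashing behavior

    def set_lights(red, green, amber):
        # In real hardware this would switch output circuits on and off.
        print("red:", "on" if red else "off",
              "| green:", "on" if green else "off",
              "| amber:", "on" if amber else "off")

    set_lights(False, False, False)            # 1. turn off all of the lights
    while True:
        if not maintenance_switch_on:          # 6. check the switch
            set_lights(True, False, False)     # 2. red on, wait sixty seconds
            time.sleep(60)
            set_lights(False, True, False)     # 3. green on, wait sixty seconds
            time.sleep(60)
            set_lights(False, False, True)     # 4. amber on, wait two seconds
            time.sleep(2)
            set_lights(False, False, False)    # 5. amber off
        else:                                  # 7-9. flash red once per second
            set_lights(True, False, False)
            time.sleep(1)
            set_lights(False, False, False)
            time.sleep(1)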

FORTRAN

A 1970s punched card containing one line from a FORTRAN program. The card reads: "Z(1) = Y + W(1)" and is labelled "PROJ039" for identification purposes.

In practical terms, a computer program might include anywhere from a dozen instructions to many millions of instructions for something like a word processor or a web browser. A typical modern computer can execute billions of instructions every second and nearly never make a mistake over years of operation.

Large computer programs may take teams of computer programmers years to write, and it is unlikely that the entire program has been written completely in the manner intended. Errors in computer programs are called bugs. Sometimes bugs are benign and do not affect the usefulness of the program; in other cases they might cause the program to fail completely (crash); in yet other cases there may be subtle problems. Sometimes otherwise benign bugs may be used for malicious intent, creating a security exploit. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.

While it is possible to write computer programs as long lists of numbers (machine language), and this technique was used with many early computers, it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember: a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. This means that an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or an AMD Athlon 64 computer that might be in a PC.
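
A minimal sketch of what an assembler does, using the same invented mnemonics and opcode numbers as the toy interpreter sketched earlier (they match no real machine):

    # Invented instruction set: mnemonic -> numeric opcode.
    OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 0}

    def assemble(source):
        """Translate 'MNEMONIC operand' lines into a flat list of numbers."""
        machine_code = []
        for line in source.strip().splitlines():
            parts = line.split()
            machine_code.append(OPCODES[parts[0]])                    # the opcode
            machine_code.append(int(parts[1]) if len(parts) > 1 else 0)
        return machine_code

    program = """
    LOAD 10
    ADD 11
    STORE 12
    HALT
    """
    print(assemble(program))   # [1, 10, 2, 11, 3, 12, 0, 0]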

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

Stored program architecture

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that a list of instructions (the program) can be given to the computer and it will store them and carry them out at some time in the future.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to that point.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.
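
As a sketch, both the computer's repetitive method and the shortcut discussed below can be written in a few lines of Python:

    # The computer's way: add the numbers 1 to 1,000 one at a time.
    total = 0
    for n in range(1, 1001):
        total += n
    print(total)                 # 500500

    # The insightful shortcut: n*(n+1)/2 gives the same answer at once.
    print(1000 * 1001 // 2)      # 500500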

However, computers cannot "think" for themselves in the sense that they only solve problems in exactly the way they are programmed to. An intelligent human faced with the above addition task might soon realize that instead of actually adding up all the numbers one can simply use the equation n(n+1)/2, which for n = 1,000 gives the correct answer (500,500) with little work. Many modern computers are able to make some decisions that speed up the execution of some programs by "guessing" about the outcomes of certain jump instructions and re-arranging the order of instructions slightly without changing their meaning (branch prediction, speculative execution, and out-of-order execution). However, computers cannot intuitively determine a more efficient way to perform the task given to them because they do not have an overall understanding of what the task, or the "big picture", is. In other words, a computer programmed to add up the numbers one by one as in the example above would do exactly that without regard to efficiency or alternative solutions.

Microprocessors

Microprocessors are miniaturized devices that often implement stored program CPUs.

Vacuum tube-based computers were in use throughout the 1950s, but were largely replaced in the 1960s by transistor-based devices, which were smaller, faster, cheaper, used less power and were more reliable. These factors allowed computers to be produced on an unprecedented commercial scale. By the 1970s, the adoption of integrated circuit technology and the subsequent creation of microprocessors such as the Intel 4004 brought another dramatic reduction in size and cost and a further leap in speed and reliability. By the 1980s, computers had become sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. Around the same time, computers became widely accessible for personal use by individuals in the form of home computers and the now ubiquitous personal computer. In conjunction with the widespread growth of the Internet since the 1990s, personal computers are becoming as common as the television and the telephone, and almost all modern electronic devices contain a computer of some kind.

EDSAC

EDSAC was one of the first computers to implement the stored program (von Neumann) architecture.

Notable milestones on the road to the modern computer include:

- The Atanasoff-Berry Computer (1941), which used vacuum tube based computation, binary numbers, and regenerative capacitor memory.
- Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic and a measure of programmability.
- The secret British Colossus computer (1944), which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. It was used for breaking German wartime codes.
- The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
- The US Army's Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic and was the first general purpose electronic computer, although it initially had an inflexible architecture that essentially required rewiring to change its programming.

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the stored program architecture or von Neumann architecture. This design was first formally described by John von Neumann in the paper "First Draft of a Report on the EDVAC", published in 1945. A number of projects to develop computers based on the stored program architecture commenced around this time, the first of them completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM) or "Baby". However, the EDSAC, completed a year after SSEM, was perhaps the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann's paper, EDVAC, was completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored program architecture, making it the single trait by which the word "computer" is now defined. By this standard, many earlier devices would no longer be called computers by today's definition, but are usually referred to as such in their historical context. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture. The design made the universal computer a practical reality.

History of computing

The Jacquard loom was one of the first programmable devices.
The question of which was the earliest computer is a difficult one. The very definition of what a computer is has changed over the years and it is therefore impossible to definitively answer the question. Many devices once called "computers" would no longer qualify as such by today's standards.

Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device. Examples of early mechanical computing devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 87 BC). The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers.

However, none of those devices fit the modern definition of a computer because they could not be programmed. In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. While the resulting Jacquard loom is not considered to be a computer, it was an important step because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine". Due to limited finances and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.

Large-scale automated data processing of punched cards was performed for the US Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured by his Tabulating Machine Company, which later merged into the Computing-Tabulating-Recording Company (CTR) and eventually became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

A succession of steadily more powerful and flexible computing devices were constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940).

Computer

A computer is a machine for manipulating data according to a list of instructions, or program.

The ability to store and execute stored programs—that is, programmability—makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: Any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks so long as time and storage capacity are not considerations.

Computers take numerous physical forms. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers. Today, computers can be made small enough to fit into a wrist watch and powered from a watch battery. However, large-scale computing facilities still exist for specialized scientific computation and for the transaction processing requirements of large organizations. Society has come to recognize personal computers and their portable equivalent, the laptop computer, as icons of the information age; they are what most people think of as "a computer". However, the most common form of computer in use today is by far the embedded computer. Embedded computers are small, simple devices that are often used to control other devices—for example, they are used to control machines from fighter aircraft to industrial robots, digital cameras, and even children's toys.

Future of the Internet

What does the future hold for the Internet? Predictions are that in the future nearly every Internet-connected device will communicate wirelessly. Low-power radio cells, rather than fiber or copper wire, will connect and relay information. Before 2010, more than half of American homes will have at least one low-power radio cell connected to Internet bandwidth. The future appears to hold a wireless Internet because of bandwidth problems with cable or wire.

The personal computer will continue to evolve, but there will be a lot of other Internet-smart appliances. Predictions are that there will be Internet wristwatches to match the person with the message. Televisions will, when prompted, record our favorite shows. Various kitchen appliances will start by Internet commands. The personal automobile will also be a mobile personal information store. Automobiles will have internal connectivity and easily carry a very large cache of favorite music, talk, interactive games, and pictures, while passengers will have the option of looking out the window at the real world or looking in the window of their in-car display. Like the explorers who discovered new continents, people are just beginning to discover the full impact of the Internet on information, space, and time.

Using the Internet

How does one use the Internet? First, one must have a computer with a connection to the outside world either by a modem connection, a fiber connection such as used in local cable television, or a wireless connection, which is becoming more important. The user is then connected to a system of linked computer networks that encircle the globe, facilitating a wide assortment of data communication services including e-mail, data and program file transfers, newsgroups and chatgroups, as well as graphic images, sound, and video of all kinds. One must choose the right tool to accomplish each task. Thus, one needs to understand the tools to travel this information superhighway.

The Internet is in cyberspace; think of it as a number of planets, each with a unique kind of data program or other type of information service. The only hitch is that each planet's communicating language is different, and one needs several communicating applications and tools. A person is responsible for selecting the proper software program or utility to access what he or she wants. Each program performs a specific task, ranging from providing basic connections, to accessing resources, to preparing e-mail. Common Internet tools include the following:

1. Connection and log-on software. This software provides access to log on to cyberspace. The software sets up the connections to the Internet. This software is usually provided by an Internet service provider.

2. Web browser. Web browsers are usually free. The most common Web browsers are Microsoft's Internet Explorer and Netscape's Navigator. These software programs can usually be downloaded free of charge; they also come with office suites such as Microsoft Office.

3. E-mail manager and editor. To communicate by e-mail, users must have an e-mail manager and editor. This editor creates, sends, receives, stores, and organizes e-mail. Again, many of these e-mail editors can be downloaded free from the Web. One of the most common editors is Eudora. However, office suites usually come with an e-mail manager as well.

A custom connect program starts the procedure for logging on to the Internet using TCP/IP, a set of standards and protocols for sharing data between computers on the Internet. Once the protocols have connected, a user must establish his or her identity and authorization to use the Internet services. The Internet service provider has its own identity on the Internet, known as a domain. Domain names, as mentioned previously, are the names listed to the right of the @ sign in an address, with an extension such as .com or .edu. The computer then sends and receives data from a host computer over the Internet. TCP breaks up the data into packets, and the protocols specify how packets should be layered, or packaged. Different layers of packets address a variety of software and hardware needs to send information over different networks and communication links. After a user has properly logged on, he or she can begin using the Internet services.

After a user has completed an online work session, he or she must log off the Internet and, depending on the circumstances, disconnect from the Internet service provider. A user of an educational service provider such as a college or other educational institution probably logs off but does not disconnect, since the service is a shared service provided to many others. If one is using a private commercial service provider, one must be sure that a complete disconnection has been made between the computer and provider, or one may still be paying fees.

The Internet has spawned an entirely new industry called electronic commerce, or sometimes electronic business. Businesses sell to other businesses and to consumers on the Internet using secure Web sites. The current market value of U.S. companies with substantial Internet revenue via e-commerce exceeds $3 trillion and is growing annually. It is estimated that by 2003 over 88 percent of all businesses will derive some of their revenue from e-commerce. It has also been said that the growth of the Internet and e-commerce has been one of the main causes of the robust economy in the United States.

Thus, the Internet has been one of the most productive technologies in recent history. The Internet can transport information from nearly any place on the globe to nearly any other place in seconds. The Internet has changed people's notion of how fast things happen. People say now they "did it in Internet time," meaning something was done in a fraction of the traditional or expected amount of time. The Internet is becoming a major cause of time compression.

Internet

The Internet is a technology and electronic communication system such as the world has never seen before. In fact, some people have said that the Internet is the most important innovation since the development of the printing press.

History of the Internet

The Internet was created as a result of the Cold War. In the mid 1960s it became apparent that there was a need for a bomb-proof electronic communication system. A concept was devised to link computers by cable or wire throughout the country in a distributed system so that if some parts of the country were cut off from other parts, messages could still get through. In the beginning, only the federal government and a few universities were linked because the Internet was basically an emergency military communication system, operated by the Department of Defense's Advanced Research Project Agency (ARPA). The whole operation was referred to as ARPANET.

ARPA was linked to computers at a group of top research universities receiving ARPA funding. The first four sites connected to ARPANET were the University of California-Los Angeles, the Stanford Research Institute, the University of California-Santa Barbara, and the University of Utah. Thus, the Internet was born. Because of a concept developed by Larry Roberts of ARPA and Leonard Kleinrock at UCLA, called packet switching, the Internet was able to become a decentralized system, avoiding the vulnerability of a centralized system to large-scale destruction. The system allowed different types of computers from different manufacturers to send messages to one another. Computers merely transmitted information to one another in a standardized protocol packet. The addressing information in these packets told each computer in the chain where the packet was supposed to go.

As the Internet grew, more capability was added. A program called Telnet allowed remote users to run programs and computers at other sites. The File Transfer Protocol (FTP) allowed users to transfer data files and programs. Gopher programs, developed at the University of Minnesota and named after the university's mascot, allowed menu-driven access to data resources on the Internet. Search tools such as Archie and the Wide Area Information Server (WAIS) gave users the ability to search the Internet's numerous libraries and indices. By the 1980s people at universities, research laboratories, private companies, and libraries were aided by a networking revolution; there were more than thirty thousand host computers and modems on the Internet. Another early network was BITNET, which linked virtually every major university in the world. E-mail became routine and inexpensive, since the Internet is a parasite using the existing multibillion-dollar telephone networks of the world as its carriers.

In 1972 Ray Tomlinson invented network e-mail, which became possible with the FTP. With e-mail and FTP, the rate at which collaborative work could be conducted between researchers at participating computer science departments was greatly increased. Although it was not realized at the time, the Internet had begun. TCP (Transmission Control Protocol) breaks large amounts of data down into packets of a fixed size, sequentially numbers them to allow reassembly at the recipient's end, and transmits the packets over the Internet using the Internet protocol.

After the invention of e-mail, it wasn't long before mailing lists were invented. This was a technique by which an identical message could be sent automatically to large numbers of people. The Internet continues to grow. In fact, it is estimated that almost 65 million adults in the United States go online every month. Presently, no one operates the Internet. Although there are entities that oversee the system, "no one is in charge." This allows for a free transfer and flow of information throughout the world.

In 1984 the National Science Foundation (NSF) developed NSFNET. Later NASA, the National Institutes of Health, and others became involved, and nodes on the Internet were divided into basic varieties that are still used today. The varieties are grouped by the six basic Internet domains of GOV, MIL, EDU, COM, ORG, and NET. The ARPANET itself formally expired in 1989, a victim of its own success, and the use of TCP/IP (Transmission Control Protocol/Internet Protocol) standards for computer networks is now global.

If Internet invention had stopped at this point, we would probably still be using the Internet primarily just for e-mail. However, in 1989 a second miracle occurred. Tim Berners-Lee, a software engineer at the CERN physics lab in Switzerland, developed a set of accepted protocols for the exchange of Internet information, and a consortium of users was formed, thus creating the World Wide Web. Hypertext Markup Language (HTML) was adopted as the standard language for encoding information. Berners-Lee proposed making the idea global, linking all documents on the Internet using hypertext. This lets users jump from one document to another through highlighted words. Other Web standards, such as URL (Uniform Resource Locator) addresses and HTTP (Hypertext Transfer Protocol), are also Berners-Lee's inventions. Berners-Lee could have become exceedingly rich from his invention, but he left the fortune-building to others because he "wanted to do the revolution right."

As a result of Berners-Lee's invention, in 1993 a group at the University of Illinois, headed by Marc Andreessen, wrote a graphical application called Mosaic to make use of the Web easier. The next year a few students from that group, including Andreessen, co-founded Netscape after they graduated in May and released their browser for the World Wide Web in November 1994. The World Wide Web made the Internet easier to use and brought two giant advantages. Until the Web, the Internet communicated text only, but the Web permits exchange of graphics, color photographs and designs, even video and sound; and it formats typed copy into flexible typographic pages. The Web also permits use of hyperlinks, whereby users can click on certain words or phrases and be shown links to other information or pictures that explain the key words or phrases. As a result of the World Wide Web and Web browsers, it became easy to find information on the Internet and the Web. Various search engines have been developed to index and retrieve this information.

Technology

1. (Lowercase "i" internet) A large network made up of a number of smaller networks.

2. (Uppercase "I" Internet) The largest network in the world. It is made up of more than 350 million computers in more than 100 countries covering commercial, academic and government endeavors. Originally developed for the U.S. military, the Internet became widely used for academic and commercial research. Users had access to unpublished data and journals on a variety of subjects. Today, the "Net" has become commercialized into a worldwide information highway, providing data and commentary on every subject and product on earth.

E-Mail Was the Beginning

The Internet's surge in growth in the mid-1990s was dramatic, increasing a hundredfold in 1995 and 1996 alone. There were two reasons. Up until then, the major online services (AOL, CompuServe, etc.) provided e-mail, but only to customers of the same service. As they began to connect to the Internet for e-mail exchange, the Internet took on the role of a global switching center. An AOL member could finally send mail to a CompuServe member, and so on. The Internet glued the world together for electronic mail, and today, SMTP, the Internet mail protocol, is the global e-mail standard.

The Web Was the Explosion

Secondly, with the advent of graphics-based Web browsers such as Mosaic and Netscape Navigator, and soon after, Microsoft's Internet Explorer, the World Wide Web took off. The Web became easily available to users with PCs and Macs rather than only scientists and hackers at Unix workstations. Delphi was the first proprietary online service to offer Web access, and all the rest followed. At the same time, new Internet service providers (ISPs) sprang up to offer access to individuals and companies. As a result, the Web grew exponentially, providing an information exchange of unprecedented proportion. The Web has also become "the" storehouse for drivers, updates and demos that are downloaded via the browser, as well as a global transport for delivering information by subscription, both free and paid.

Newsgroups

Although daily news and information is now available on countless Web sites, long before the Web, information on a myriad of subjects was exchanged via Usenet (User Network) newsgroups. Newsgroups are still thriving, and their articles can be selected and read directly from your Web browser. See Usenet.

Chat Rooms

Chat rooms provide another popular Internet service. Internet Relay Chat (IRC) offers multiuser text conferencing on diverse topics. Dozens of IRC servers provide hundreds of channels that anyone can log onto and participate in via the keyboard. See IRC.

The Original Internet

The Internet started in 1969 as the ARPAnet. Funded by the U.S. government, the ARPAnet became a series of high-speed links between major supercomputer sites and educational and research institutions worldwide, although mostly in the U.S. A major part of its backbone was the National Science Foundation's NSFNET. Along the way, it became known as the "Internet" or simply "the Net." By the 1990s, so many networks had become part of it and so much traffic was not educational or pure research that it became obvious that the Internet was on its way to becoming a commercial venture.

It Went Commercial in 1995

In 1995, the Internet was turned over to large commercial Internet service providers (ISPs), such as MCI, Sprint and UUNET, which took responsibility for the backbones and have increasingly enhanced their capacities ever since. Regional ISPs link into these backbones to provide lines for their subscribers, and smaller ISPs hook either directly into the national backbones or into the regional ISPs.

The TCP/IP Protocol

Internet computers use the TCP/IP communications protocol. There are more than 100 million hosts on the Internet, a host being a mainframe or medium to high-end server that is always online via TCP/IP. The Internet is also connected to non-TCP/IP networks worldwide through gateways that convert TCP/IP into other protocols.

Life Before the Web

Before the Web and the graphics-based Web browser, the Internet was accessed from Unix terminals by academicians and scientists using command-driven Unix utilities. These utilities are still used; however, today, they reside in Windows, Mac and Linux machines as well. For example, an FTP program allows files to be uploaded and downloaded, and the Archie utility provides listings of these files. Telnet is a terminal emulation program that lets you log onto a computer on the Internet and run a program. Gopher provides hierarchical menus describing Internet files (not just file names), and Veronica lets you search Gopher sites. See FTP, Archie, Telnet, Gopher and Veronica.

The Next Internet

Ironically, some of the original academic and scientific users of the Internet have developed their own Internet once again. Internet2 is a high-speed academic research network that was started in much the same fashion as the original Internet (see Internet2). See Web vs. Internet, World Wide Web, how to search the Web, intranet, NAP, hot topics and trends, IAB, information superhighway and online service.

Modest Beginnings

These four nodes were drawn in 1969, showing the University of California at Los Angeles, the University of California at Santa Barbara, SRI International and the University of Utah. This modest network diagram was the beginning of the ARPAnet and eventually the Internet. (Image courtesy of The Computer History Museum, www.historycenter.org)

How the Internet Is Connected

Small Internet service providers (ISPs) hook into regional ISPs, which link into major backbones that traverse the U.S. This diagram is conceptual because ISPs often span county and state lines.

Internet

A worldwide system of interconnected computer networks. The origins of the Internet can be traced to the creation of ARPANET (Advanced Research Projects Agency Network) as a network of computers under the auspices of the U.S. Department of Defense in 1969. Today, the Internet connects millions of computers around the world in a nonhierarchical manner unprecedented in the history of communications. The Internet is a product of the convergence of media, computers, and telecommunications. It is not merely a technological development but the product of social and political processes, involving both the academic world and the government (the Department of Defense). From its origins in a nonindustrial, noncorporate environment and in a purely scientific culture, it has quickly diffused into the world of commerce.

The Internet is a combination of several media technologies and an electronic version of newspapers, magazines, books, catalogs, bulletin boards, and much more. This versatility gives the Internet its power.

Technological features

The Internet's technological success depends on its principal communication tools, the Transmission Control Protocol (TCP) and the Internet Protocol (IP). They are referred to frequently as TCP/IP. A protocol is an agreed-upon set of conventions that defines the rules of communication. TCP breaks down and reassembles packets, whereas IP is responsible for ensuring that the packets are sent to the right destination.

Data travels across the Internet through several levels of networks until it reaches its destination. E-mail messages arrive at the mail server (similar to the local post office) from a remote personal computer connected by a modem, or a node on a local-area network. From the server, the messages pass through a router, a special-purpose computer ensuring that each message is sent to its correct destination. A message may pass through several networks to reach its destination. Each network has its own router that determines how best to move the message closer to its destination, taking into account the traffic on the network. A message passes from one network to the next, until it arrives at the destination network, from where it can be sent to the recipient, who has a mailbox on that network. See also Electronic mail; Local-area networks; Wide-area networks.

TCP/IP

TCP/IP is a set of protocols developed to allow cooperating computers to share resources across the networks. The TCP/IP establishes the standards and rules by which messages are sent through the networks. The most important traditional TCP/IP services are file transfer, remote login, and mail transfer.

The file transfer protocol (FTP) allows a user on any computer to get files from another computer, or to send files to another computer. Security is handled by requiring the user to specify a user name and password for the other computer.

The network terminal protocol (TELNET) allows a user to log in on any other computer on the network. The user starts a remote session by specifying a computer to connect to. From that time until the end of the session, anything the user types is sent to the other computer.

Mail transfer allows a user to send messages to users on other computers. Originally, people tended to use only one or two specific computers. They would maintain “mail files” on those machines. The computer mail system is simply a way for a user to add a message to another user's mail file.

Other services have also become important: resource sharing, diskless workstations, computer conferencing, transaction processing, security, multimedia access, and directory services.

TCP is responsible for breaking up the message into datagrams, reassembling the datagrams at the other end, resending anything that gets lost, and putting things back in the right order. IP is responsible for routing individual datagrams. The datagrams are individually identified by a unique sequence number to facilitate reassembly in the correct order. The whole process of transmission is done through the use of routers. Routing is the process by which two communication stations find and use the optimum path across any network of any complexity. Routers must support fragmentation, the ability to subdivide received information into smaller units where this is required to match the underlying network technology. Routers operate by recognizing that a particular network number relates to a specific area within the interconnected networks. They keep track of the numbers throughout the entire process.
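
The core idea can be sketched in Python. This is a simplification, not real TCP (which also handles acknowledgment and retransmission), but it shows how sequence numbers allow correct reassembly even when datagrams arrive out of order:

    import random

    def to_datagrams(message, size=8):
        """Split a message into (sequence_number, chunk) pairs."""
        return [(i, message[i*size:(i+1)*size])
                for i in range((len(message) + size - 1) // size)]

    def reassemble(datagrams):
        """Sort by sequence number and join the chunks back together."""
        return "".join(chunk for _, chunk in sorted(datagrams))

    packets = to_datagrams("Data travels across the Internet in packets.")
    random.shuffle(packets)              # simulate out-of-order arrival
    print(reassemble(packets))           # the original message, restored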

Domain Name System

The addressing system on the Internet generates IP addresses, which are usually indicated by numbers such as 128.201.86.29. Since such numbers are difficult to remember, a user-friendly system has been created known as the Domain Name System (DNS). This system provides the mnemonic equivalent of a numeric IP address and further ensures that every site on the Internet has a unique address. For example, an Internet address might appear as crito.uci.edu. If this address is accessed through a Web browser, it is referred to as a URL (Uniform Resource Locator), and the full URL will appear as http://www.crito.uci.edu.

The Domain Name System divides the Internet into a series of component networks called domains that enable e-mail (and other files) to be sent across the entire Internet. Each site attached to the Internet belongs to one of the domains. Universities, for example, belong to the “edu” domain. Other domains are gov (government), com (commercial organizations), mil (military), net (network service providers), and org (nonprofit organizations).
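
The name-to-number translation the DNS performs can be observed with Python's standard socket module; the domain name below is only an example, and the call requires a working network connection:

    import socket

    # Ask the Domain Name System to translate a name into an IP address.
    host = "example.com"                 # any valid domain name will do
    print(socket.gethostbyname(host))    # prints a numeric address, e.g. 93.184.216.34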

World Wide Web

The World Wide Web (WWW) is based on technology called hypertext. The Web may be thought of as a very large subset of the Internet, consisting of hypertext and hypermedia documents. A hypertext document is a document that has a reference (or link) to another hypertext document, which may be on the same computer or in a different computer that may be located anywhere in the world. Hypermedia is a similar concept except that it provides links to graphic, sound, and video files in addition to text files.

In order for the Web to work, every client must be able to display every document from any server. This is accomplished by imposing a set of standards known as a protocol to govern the way that data are transmitted across the Web. Thus data travel from client to server and back through a protocol known as the HyperText Transfer Protocol (HTTP). In order to access the documents that are transmitted through this protocol, a special program known as a browser is required. See also World Wide Web.
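
A minimal sketch of an HTTP exchange using Python's standard http.client module; the host name is an example, and real browsers send many additional headers:

    import http.client

    # Fetch a document over HTTP: connect, send a GET request, read the reply.
    connection = http.client.HTTPConnection("example.com")
    connection.request("GET", "/")           # ask the server for its root page
    response = connection.getresponse()
    print(response.status, response.reason)  # e.g. "200 OK"
    print(response.read()[:80])              # first bytes of the HTML document
    connection.close()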

Commerce on the Internet

Commerce on the Internet is known by a few other names, such as e-business, e-tailing (electronic retailing), and e-commerce. The strengths of e-business depend on the strengths of the Internet. Internet commerce is divided into two major segments, business-to-business (B2B) and business-to-consumer (B2C). In each are some companies that have started their businesses on the Internet, and others that have existed previously and are now transitioning into the Internet world. Some products and services, such as books, compact disks (CDs), computer software, and airline tickets, seem to be particularly suited for online business.

How computers work

Control unit
The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer. Control systems in advanced computers may change the order of some instructions so as to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.
The control system's function is as follows—note that this is a simplified description and some of these steps may be performed concurrently or in a different order depending on the type of CPU:

Read the code for the next instruction from the cell indicated by the program counter.
Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
Increment the program counter so it points to the next instruction.
Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
Provide the necessary data to an ALU or register.
If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
Write the result from the ALU back to a memory location or to a register or perhaps an output device.
Jump back to step (1).
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
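To make the fetch-decode-execute cycle and the role of the program counter concrete, here is a toy machine in Python. The four-instruction set (LOAD, ADD, JUMP, HALT) is invented for illustration and is far simpler than any real CPU.

    # Program for the toy machine: each memory cell holds (operation, argument).
    memory = [
        ("LOAD", 5),   # 0: put the constant 5 into the accumulator
        ("ADD", 3),    # 1: add 3 to the accumulator
        ("JUMP", 3),   # 2: set the program counter to address 3
        ("HALT", 0),   # 3: stop the machine
    ]

    pc = 0    # program counter: which memory cell holds the next instruction
    acc = 0   # a single register (the accumulator)

    while True:
        op, arg = memory[pc]   # fetch the instruction the PC points at
        pc += 1                # increment the PC for the next cycle
        if op == "LOAD":       # decode and execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JUMP":
            pc = arg           # a jump simply overwrites the PC
        elif op == "HALT":
            break

    print(acc)                 # prints 8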

The sequence of operations that the control unit goes through to process an instruction is itself like a short computer program - and indeed, in some more complex CPU designs, there is yet another, smaller computer called a microsequencer that runs a microcode program to cause all of these events to happen.

History of computing

The Jacquard loom was one of the first programmable devices.
The question of which was the earliest computer is a difficult one. The very definition of what a computer is has changed over the years and it is therefore impossible to definitively answer the question. Many devices once called "computers" would no longer qualify as such by today's standards.

Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device. Examples of early mechanical computing devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 87 BC). The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers.

However, none of those devices fit the modern definition of a computer because they could not be programmed. In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. While the resulting Jacquard loom is not considered to be a computer, it was an important step because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine". Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.

Large-scale automated data processing of punched cards was performed for the US Census in 1890 by tabulating machines designed by Herman Hollerith, whose Tabulating Machine Company later became part of the Computing-Tabulating-Recording Company (CTR), which in turn became IBM. So by the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

A succession of steadily more powerful and flexible computing devices was constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include Konrad Zuse's electromechanical Z3 (1941), the Atanasoff–Berry Computer (c. 1941), the secret British code-breaking Colossus machines (1943–1945), the Harvard Mark I (1944), and the fully electronic ENIAC (1946).

A computer is a machine for manipulating data according to a list of instructions, or program.

The ability to store and execute stored programs—that is, programmability—makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: Any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks so long as time and storage capacity are not considerations.

Computers take numerous physical forms. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers. Today, computers can be made small enough to fit into a wrist watch and powered from a watch battery. However, large-scale computing facilities still exist for specialized scientific computation and for the transaction processing requirements of large organizations. Society has come to recognize personal computers and their portable equivalent, the laptop computer, as icons of the information age; they are what most people think of as "a computer". However, the most common form of computer in use today is by far the embedded computer. Embedded computers are small, simple devices that are often used to control other devices—for example, they are used to control machines from fighter aircraft to industrial robots, digital cameras, and even children's toys.

Here are the 10 Key Points of the Master Plan:

You must focus on a specific target market that you love and want to work with for life.

This makes it easy for you to stay motivated, it brings you more fulfillment in life, and it lets you make a very nice living in the process. Target a market that you would love to work with, even if you didn't get paid for it. Pick a group of people you can relate to.

Your goal and the purpose of your business should be to help your customers, not take their hard-earned money and run.

That's useless in the long term, and if that's what you've been planning, you might as well drop the project. Please make sure that you understand this universal truth deeply enough. If you do have a sincere desire to help your customers, everybody wins and you'll be a lot more successful.

Your main goal is to build a big database of lifetime customers from your market who trust you, feel grateful to you and value your recommendation.

A one-time sale is worthless. A good relationship with loyal customers is worth a fortune. That's the most valuable thing any business can have. The key here is to build your large list of "lifetime customers who trust you." Achieve this and you're set for life.

You do this by selling your prospects something that solves their common problems and helps them achieve their dreams.

It doesn't have to be a full-length book, it doesn't have to be complicated, but you must have your own product to build this relationship. Reselling someone else's stuff is not enough. Giving something away is not enough. By having your customers pay YOU for the solution, you will gain their trust right away and they will listen to you from there on. This is what we call your front-end product, and it must be great. It must make your customers extremely satisfied.

You need to create a proven, optimized sales process and automate as much as possible.

You need a powerful sales letter that converts the maximum number of prospects into paying customers. If you don't want to lose money, it's vitally important that you TEST each step of your sales process to reach the best results. You need to test the effectiveness of your sales letter, your ads, your price and your back-end strategy. Once you know which ones are winners you can easily optimize your results and pyramid your profits.

You should start a reseller program and let other business owners recommend your product to their lists.

Many business owners have a great relationship with a large audience, and you can tap into that relationship. All you have to do is contact these business owners personally and offer to make a joint venture deal where you split the profits. Many will be thrilled to accept your offer, and it will bring you a ton of new customers in a very short time.

You want to build your valuable lifetime customer database fast and free.

You can do this in several ways, but there are a few easy methods that you should combine: free publicity, viral marketing, joint venture deals and advertising on a large scale. The key here is that as long as you break even or make a profit on the first sale, you can basically build your database of loyal customers as large as you want instantly and for free. From there on it's all profits.

From here on you simply continue to build your relationship with your customer list by helping them solve their problems and achieve their goals.

Do this by recommending information, products and services that will help your customers. All you have to do is create joint ventures and reseller agreements with other business owners to make money in the process - you split the profits. This is your back-end strategy, and this is where you make the REAL money.

Always, always over-deliver on your promises. Take extremely good care of your clients and subscribers.

Treat them like you would treat your best friend. Again, your main goal here is not just to "make money", but to actually HELP your clients. Never recommend a product to them that you wouldn't recommend to your best friend. Keep their interests in mind always and satisfy their needs and wants. Do this and you must succeed.

Continue with this process from here on and you'll make a fortune.

Keep selling your front-end product to add new lifetime customers to your list for free. Keep helping them reach their ultimate goals by recommending additional good, related products. You'll make a very nice living and enjoy life to the fullest, all while doing what you love. And you'll make a lot of new friends in the process.

How To Increase Your Website Traffic With Zero Cost
How do you increase your website traffic with zero cost? It's a bold claim, don't you think? But believe me, it's true. You can increase your traffic by 1000% with no cost involved if you do it the right way. Continue reading if you want to know how.

I've outlined five ways to reach your target. But please keep in mind that these are not the only ways to increase your traffic; there are hundreds of techniques for increasing traffic. These, however, are proven ones, and I've used them personally. More importantly, these techniques can get you FREE traffic. Your money stays in your pocket. Let's go to the first one.

Technique #1: Linking strategy

Linking strategy is the easiest way to get free traffic. When I say “the easiest way” it does not mean that you can ask everybody to link to your site and do nothing after that. Compared to other techniques that you’ll discover, this one will take less time to do.

Here's how to do it. First, select a site in your niche market. Be selective; choose one that has high traffic. Usually a high-traffic site is pretty stingy about linking to your site, so the key here is to be persistent. Ask them how many visitors they receive per month and whether they could link to your site. If they don't answer your request, email them a second time.

Be persistent. If they don't want to link to your site, ask them to trade links instead (a reciprocal link). Treat this as your last resort.

Word of warning: Don’t crowd your site with too many links. Only accept link trading if it’s really worth it.

Technique #2: Offer Free eBooks or articles

You'll fall in love with this technique once you see what it can do for your site. It can create an excellent 'viral marketing' effect and multiply the number of visitors to your site in a matter of days. The most important thing about this technique is to offer something that is really useful to your visitors; so useful that they can only get that information from you!

You need to identify the 'wants' in your niche market. What problems do they encounter? Solve these problems and you have a killer article or e-book that you can give away for free. Remember, don't sell it; give it away for free. If you feel really reluctant to give your article or e-book away for free, you can give your visitors part of it, but make sure it's really useful. Don't forget to put your name and contact information in the article or e-book. Usually, if you write an article, you need to include your resource box at the very bottom.

The most important task in this technique is to offer reprint rights to your visitors. What this means is that your visitors can republish your article or e-book anywhere and in any medium: email, ezine, website, or anything else. But please state your condition: they must include your contact information or resource box. This is what creates the viral effect.

Before I forget, there is one particular e-book compiler that is well suited to this kind of task: 'E-book Edit Pro'. With this compiler, you can offer your visitors a customizable e-book. This is a great incentive for them to distribute your article or e-book, since they can put their own name and information in it. If you'd like to know more about this compiler, please visit: http://www.ebookedit.com/

Technique #3: Classified Ad

This is the most time-consuming technique of the five, but it is really worth it.

Tip: this technique should be used together with the technique above. Let me explain:

First, you need to write an e-book or article that you can give away for free. Then you need an autoresponder. If you don't have one (your hosting company should provide this service for free), you can get one at no cost: just type 'free autoresponder' into your search engine and you'll get hundreds of sites that provide free autoresponders. This is for opt-in emails.
Enough talking. Let’s continue.

After you have your own autoresponder, place your free article in it. Now, advertise your autoresponder address on classified ad websites. Don't list your email address, but your autoresponder address. The best part of this technique is that you capture your visitors' email addresses, so you can contact them again and again with any future offers.

Technique #4: Deliver an information-packed ezine/newsletter

People surf the net to look for information. Out of every 100 surfers, only 3 are there to buy something; the rest are doing research or trying to find useful information.

With this in mind, you can attract people to your site if you can deliver timely information. By producing timely information, you glue these visitors to your site and prevent them from going elsewhere. This can be done by giving them a free newsletter or ezine.

This is not an easy task, because there is an abundance of free information on the net. You need to give them something different from all that 'free' stuff; try to provide something unique in your ezine. For example, if you're publishing a music ezine, try to make a deal with a music label so that you can give a special price to your subscribers. Make sure your subscribers cannot get this kind of deal anywhere else. If you can create this unique proposition, you're already on top of the world. Your ezine will spread like wildfire, and more people will come to your site to subscribe to your unique newsletter.

Technique #5: Offer affiliate program

This is the greatest FREE traffic generator technique out there. With this technique, both parties win: you and your affiliate program participants. You get more traffic and sales; they get more money from referral commissions.

This is really a large topic; I could write a whole e-book about how to create a successful affiliate program. But I'll discuss the basics of affiliate programs here.

Basically, to create an effective affiliate program, you need to make it attractive for your visitors to join. You can do this by offering high referral fees and marketing tools for them to use. Above all, you need to make it easy for them to promote your product or service. Don't make them do all the hard work; that is your job.

The next thing you need to do is motivate them to spread the word about you. Contact them in a timely manner, and don't forget them after they've joined your program. Make them feel special; in fact, they are special, since they are the ones who will do the promotion and advertising. A well-designed affiliate program can increase your website traffic and sales by an unimaginable amount. But again, you need to devote all your effort to this technique if you want a successful affiliate program. Don't do it halfway. Even if you have to work 18 hours a day to create your own affiliate program, it's really worth it in the future; the payoff is going to be a thousand times your initial effort.

All of these techniques are free; you don't have to spend a dime on them. Try them on your site. I've used all of these techniques to generate traffic to my site and blog, and they work! The more traffic you get, the more money you make.