Friday, December 12, 2008

PARTS OF THE MOTHERBOARD


MOTHERBOARD
A motherboard is the central printed circuit board (PCB) in some complex electronic systems, such as modern personal computers. The motherboard is sometimes alternatively known as the mainboard, system board, or, on Apple computers, the logic board. It is also sometimes casually shortened to mobo.



ZIF

ZIF is an acronym for zero insertion force, a concept used in the design of IC sockets, invented to avoid problems caused by applying force upon insertion and extraction.

A normal integrated circuit (IC) socket requires the IC to be pushed into sprung contacts which then grip it by friction. For an IC with hundreds of pins, the total insertion force can be very large (tens of newtons), leading to a danger of damage to the device or the PCB. Also, even with relatively small pin counts, each extraction is fairly awkward and carries a significant risk of bending pins (particularly if the person performing the extraction hasn't had much practice or the board is crowded), as can be seen with the unpopular front-loading mechanism of the Nintendo Entertainment System. Low insertion force (LIF) sockets reduce the issues of insertion and extraction, but the lower the insertion force of a conventional socket, the less reliable the connection is likely to be.
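To get a feel for the numbers, here is a small back-of-the-envelope sketch (the per-pin friction force used below is an assumed, order-of-magnitude figure, not a measured value):

```python
# Rough illustration of why high pin counts make ordinary sockets risky.
# The per-pin friction force is an assumed, typical-order value.
force_per_pin_newtons = 0.2
pin_count = 300  # an IC with a few hundred pins

total_force = force_per_pin_newtons * pin_count
print(f"Approximate insertion force: {total_force:.0f} N")  # ~60 N, i.e. tens of newtons
```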

With a ZIF socket, before the IC is inserted, a lever or slider on the side of the socket is moved, pushing all the sprung contacts apart so that the IC can be inserted with very little force (generally the weight of the IC itself is sufficient, with no external downward force required). The lever is then moved back, allowing the contacts to close and grip the pins of the IC. ZIF sockets are much more expensive than standard IC sockets and also tend to take up a larger board area, since the mechanism that spreads the contacts needs room of its own to move. This makes the approach less attractive for future chips with very high pin densities and pin counts. The contact arrangement can also be a weakness: each pin may be gripped at only a couple of points, which can lead to a poor connection over time, and ZIF sockets are known to bend IC pins on occasion. They are therefore only used when there is a good reason to do so.

Large ZIF sockets are commonly mounted on PC motherboards (from about the mid 1990s forward). These CPU sockets are designed to support a particular range of CPUs, allowing computer retailers and consumers to assemble motherboard/CPU combinations based on individual budget and requirements. CPUs may also be upgraded or replaced during the lifetime of the motherboard socket. Personal computers are among the few applications expensive enough to justify elaborate socket systems. Smaller ZIF sockets are also commonly used in chip-testing and programming equipment.


Saturday, December 6, 2008

INTERNAL PARTS OF SYSTEM UNIT

IDE CABLE
IDE (Integrated Drive Electronics) is a standard type of connection for storage devices in a PC. Generally, it refers to the types of cables and ports used to connect hard drives and optical drives to each other and to the motherboard.




COOLING UNIT
Computer cooling is the process of removing heat from computer components.

A computer system's components produce large amounts of heat during operation, including integrated circuits such as CPUs, chipset and graphics cards, along with hard drives. This heat must be dissipated in order to keep these components within their safe operating temperatures, and both manufacturing methods and additional parts are used to keep the heat at a safe level. This is done mainly using heat sinks to increase the surface area which dissipates heat, fans to speed up the exchange of air heated by the computer parts for cooler ambient air, and in some cases softcooling, the throttling of computer parts in order to decrease heat generation.
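As a rough sketch of the softcooling idea (the temperature sensor and frequency-setting hooks below are hypothetical placeholders, not a real driver API; real systems expose them through ACPI or, on Linux, files under /sys/class/thermal), throttling simply trades performance for less heat:

```python
# Minimal softcooling (thermal throttling) sketch.
# The two hardware hooks are stand-ins so the example runs on its own.

def read_cpu_temperature_celsius() -> float:
    return 72.0  # pretend the sensor reports 72 degrees C

def set_cpu_frequency_mhz(mhz: int) -> None:
    print(f"CPU frequency set to {mhz} MHz")

THROTTLE_ABOVE_C = 70.0  # assumed safe-operating threshold
NORMAL_MHZ = 3000
THROTTLED_MHZ = 1500

def softcooling_step() -> None:
    """One pass of a throttling loop: slow the CPU down when it runs hot."""
    if read_cpu_temperature_celsius() > THROTTLE_ABOVE_C:
        set_cpu_frequency_mhz(THROTTLED_MHZ)  # less work per second -> less heat
    else:
        set_cpu_frequency_mhz(NORMAL_MHZ)

softcooling_step()
```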

Overheated parts generally exhibit a shorter maximum life-span and may give sporadic problems resulting in system freezes or crashes.


CMOS BATTERY
All computers that have an 80286 processor or later require a small battery on the system board that provides power to the Complementary Metal Oxide Semiconductor (CMOS) chip, even while the computer is turned off. This chip contains information about the system configuration (e.g., hard disk type, floppy drive types, date and time, and the order in which the computer will look for bootable disks).



MOTHERBOARD
A motherboard is the central printed circuit board (PCB) in some complex electronic systems, such as modern personal computers. The motherboard is sometimes alternatively known as the mainboard, system board, or, on Apple computers, the logic board. It is also sometimes casually shortened to mobo.




LAN CARD
A Network card, Network Adapter, LAN Adapter or NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It is both an OSI layer 1 (physical layer) and layer 2 (data link layer) device, as it provides physical access to a networking medium and provides a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly.
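To make the "low-level addressing" point a bit more concrete, here is a small sketch of what a MAC address actually is as data (the address below is invented purely for illustration):

```python
# A MAC address is a 48-bit (6-byte) hardware identifier stored on the NIC.
raw_mac = bytes([0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E])  # made-up example address

# Conventional colon-separated notation
mac_text = ":".join(f"{octet:02x}" for octet in raw_mac)
print(mac_text)  # 00:1a:2b:3c:4d:5e

# The first three octets (the OUI) identify the card's manufacturer
print("Manufacturer prefix (OUI):", mac_text[:8])
```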



HARD DISK

A hard disk drive (HDD), commonly referred to as a hard drive, hard disk, or fixed disk drive, is a non-volatile storage device which stores digitally encoded data on rapidly rotating platters with magnetic surfaces. Strictly speaking, "drive" refers to a device distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.





POWER SUPPLY UNIT
A power supply unit (PSU) is the component that supplies power to a computer. More specifically, a power supply is typically designed to convert mains AC power (100-120 V in North America and Japan, 220-240 V in Europe and most other regions) into the low-voltage DC power used by the computer's internal components.

Saturday, November 29, 2008

EXTERNAL PART OF SYSTEM UNIT

FRONT


BACK/REAR

Wednesday, November 19, 2008

EVOLUTION OF HARD DISK

Hard disks are one of the most important and also one of the most interesting components within the PC. They have a long and interesting history dating back to the early 1950s. Perhaps one reason that I find them so fascinating is how well engineers over the last few decades have done at improving them in every respect: reliability, capacity, speed, power usage, and more.

This excellent chart shows the evolution of IBM hard disks over the past 15 years. Several different form factors are illustrated, showing the progress made over the years in terms of capacity, along with projections for the future. 250 GB hard disks in laptops in five years? Based on past history, there's a good chance that it will in fact happen! Note that the capacity scale on the left is logarithmic, not linear, and that PC hard disks have a single actuator.



Dating back to 1956, the first ever HDD, the so-called 305 RAMAC (Random Access Method of Accounting and Control), was manufactured by IBM and radically changed the way data was stored. This system could store only five MB (equivalent to 2,000 pages of text at 2,500 characters per page) and was a monstrosity: it weighed a ton and used fifty disks of 24-inch diameter. Moreover, on average, RAMAC's price per MB was US$10,000, which is hard to believe when compared with the price of today's HDDs.
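A quick sanity check of that equivalence, using the page and character counts quoted above:

```python
# 2,000 pages of text at 2,500 characters per page, one byte per character
pages = 2_000
chars_per_page = 2_500
total_bytes = pages * chars_per_page
print(total_bytes)                    # 5,000,000 bytes
print(total_bytes / 1_000_000, "MB")  # 5.0 MB -- the RAMAC's entire capacity
```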


5 MB hard disk in 1956 - this is what 5 MB of storage looked like at the time. In September 1956 IBM launched the 305 RAMAC, the first computer with a hard disk drive (HDD). The HDD weighed over a ton and stored 5 MB of data.






The hard disk shown in the image is an old IBM hard disk from 1979. Notice how big it is.


As time advanced, HDDs got smaller while their storage capacity increased. An HDD with 750 GB of storage capacity and 3.5-inch platters weighs very little, and the price per GB is also very cheap (around US$0.37/GB).
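To put that price drop in perspective, a rough comparison of the two figures quoted above (treating 1 GB as about 1,000 MB and ignoring inflation):

```python
ramac_price_per_mb = 10_000   # US$ per MB in 1956 (figure quoted earlier)
modern_price_per_gb = 0.37    # US$ per GB today (figure quoted above)

ramac_price_per_gb = ramac_price_per_mb * 1_000  # roughly US$10,000,000 per GB
ratio = ramac_price_per_gb / modern_price_per_gb
print(f"Storage is roughly {ratio:,.0f} times cheaper per GB than in 1956")  # ~27,000,000x
```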

Wednesday, November 12, 2008

Five Generations of Modern Computers

First Generation (1945-1956)


With the onset of the Second World War, governments sought to develop computers to exploit their potential strategic importance. This increased funding for computer development projects hastened technical progress. By 1941 German engineer Konrad Zuse had developed a computer, the Z3, to design airplanes and missiles. The Allied forces, however, made greater strides in developing powerful computers. In 1943, the British completed a secret code-breaking computer called Colossus to decode German messages. The Colossus's impact on the development of the computer industry was rather limited for two important reasons. First, Colossus was not a general-purpose computer; it was only designed to decode secret messages. Second, the existence of the machine was kept secret until decades after the war.

American efforts produced a broader achievement. Howard H. Aiken (1900-1973), a Harvard engineer working with IBM, succeeded in producing a large-scale electromechanical calculator by 1944. The purpose of the computer was to create ballistic charts for the U.S. Navy. It was about 50 feet long and contained about 500 miles of wiring. The Harvard-IBM Automatic Sequence Controlled Calculator, or Mark I for short, was an electromechanical relay computer. It used electromagnetic signals to move mechanical parts. The machine was slow (taking 3-5 seconds per calculation) and inflexible (in that sequences of calculations could not change), but it could perform basic arithmetic as well as more complex equations.

Another computer development spurred by the war was the Electronic Numerical Integrator and Computer (ENIAC), produced by a partnership between the U.S. government and the University of Pennsylvania. Consisting of 18,000 vacuum tubes, 70,000 resistors and 5 million soldered joints, the computer was such a massive piece of machinery that it consumed 160 kilowatts of electrical power, enough energy to dim the lights in an entire section of Philadelphia. Developed by John Presper Eckert (1919-1995) and John W. Mauchly (1907-1980), ENIAC, unlike the Colossus and Mark I, was a general-purpose computer that computed at speeds 1,000 times faster than Mark I.

In the mid-1940's John von Neumann (1903-1957) joined the University of Pennsylvania team, initiating concepts in computer design that remained central to computer engineering for the next 40 years. Von Neumann designed the Electronic Discrete Variable Automatic Computer (EDVAC) in 1945 with a memory to hold both a stored program and data. This "stored program" technique, along with "conditional control transfer," which allowed the computer to be stopped at any point and then resumed, allowed for greater versatility in computer programming. The key element of the von Neumann architecture was the central processing unit, which allowed all computer functions to be coordinated through a single source. In 1951, the UNIVAC I (Universal Automatic Computer), built by Remington Rand, became one of the first commercially available computers to take advantage of these advances. Both the U.S. Census Bureau and General Electric owned UNIVACs. One of UNIVAC's impressive early achievements was predicting the winner of the 1952 presidential election, Dwight D. Eisenhower.

First generation computers were characterized by the fact that operating instructions were made-to-order for the specific task for which the computer was to be used. Each computer had a different binary-coded program called a machine language that told it how to operate. This made the computer difficult to program and limited its versatility and speed. Other distinctive features of first generation computers were the use of vacuum tubes (responsible for their breathtaking size) and magnetic drums for data storage.

Second Generation Computers (1956-1963)


By 1948, the invention of the transistor greatly changed the computer's development. The transistor replaced the large, cumbersome vacuum tube in televisions, radios and computers. As a result, the size of electronic machinery has been shrinking ever since. The transistor was at work in the computer by 1956. Coupled with early advances in magnetic-core memory, transistors led to second generation computers that were smaller, faster, more reliable and more energy-efficient than their predecessors. The first large-scale machines to take advantage of this transistor technology were early supercomputers, Stretch by IBM and LARC by Sperry-Rand. These computers, both developed for atomic energy laboratories, could handle an enormous amount of data, a capability much in demand by atomic scientists. The machines were costly, however, and tended to be too powerful for the business sector's computing needs, thereby limiting their attractiveness. Only two LARCs were ever installed: one in the Lawrence Radiation Labs in Livermore, California, for which the computer was named (Livermore Atomic Research Computer) and the other at the U.S. Navy Research and Development Center in Washington, D.C. Second generation computers replaced machine language with assembly language, allowing abbreviated programming codes to replace long, difficult binary codes.

Throughout the early 1960's, there were a number of commercially successful second generation computers used in business, universities, and government from companies such as Burroughs, Control Data, Honeywell, IBM, Sperry-Rand, and others. These second generation computers were also of solid state design, and contained transistors in place of vacuum tubes. They also contained all the components we associate with the modern day computer: printers, tape storage, disk storage, memory, operating systems, and stored programs. One important example was the IBM 1401, which was universally accepted throughout industry, and is considered by many to be the Model T of the computer industry. By 1965, most large businesses routinely processed financial information using second generation computers.

It was the stored program and programming language that gave computers the flexibility to finally be cost effective and productive for business use. The stored program concept meant that instructions to run a computer for a specific function (known as a program) were held inside the computer's memory, and could quickly be replaced by a different set of instructions for a different function. A computer could print customer invoices and minutes later design products or calculate paychecks. More sophisticated high-level languages such as COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator) came into common use during this time, and have expanded to the current day. These languages replaced cryptic binary machine code with words, sentences, and mathematical formulas, making it much easier to program a computer. New types of careers (programmer, analyst, and computer systems expert) and the entire software industry began with second generation computers.

Third Generation Computers (1964-1971)


Though transistors were clearly an improvement over the vacuum tube, they still generated a great deal of heat, which damaged the computer's sensitive internal parts. The integrated circuit eliminated this problem. Jack Kilby, an engineer with Texas Instruments, developed the integrated circuit (IC) in 1958. The IC combined three electronic components onto a small disc of semiconductor material. Scientists later managed to fit even more components onto a single chip. As a result, computers became ever smaller as more components were squeezed onto the chip. Another third-generation development was the use of an operating system that allowed machines to run many different programs at once, with a central program that monitored and coordinated the computer's memory.

Fourth Generation (1971-Present)


After the integrated circuits, the only place to go was down - in size, that is. Large scale integration (LSI) could fit hundreds of components onto one chip. By the 1980's, very large scale integration (VLSI) squeezed hundreds of thousands of components onto a chip. Ultra-large scale integration (ULSI) increased that number into the millions. The ability to fit so much onto an area about half the size of a U.S. dime helped diminish the size and price of computers. It also increased their power, efficiency and reliability. The Intel 4004 chip, developed in 1971, took the integrated circuit one step further by locating all the components of a computer (central processing unit, memory, and input and output controls) on a minuscule chip. Whereas previously the integrated circuit had had to be manufactured to fit a special purpose, now one microprocessor could be manufactured and then programmed to meet any number of demands. Soon everyday household items such as microwave ovens, television sets and automobiles with electronic fuel injection incorporated microprocessors.

Such condensed power allowed everyday people to harness a computer's power. Computers were no longer developed exclusively for large business or government contracts. By the mid-1970's, computer manufacturers sought to bring computers to general consumers. These microcomputers came complete with user-friendly software packages that offered even non-technical users an array of applications, most popularly word processing and spreadsheet programs. Pioneers in this field were Commodore, Radio Shack and Apple Computers. In the early 1980's, arcade video games such as Pac Man and home video game systems such as the Atari 2600 ignited consumer interest in more sophisticated, programmable home computers.

In 1981, IBM introduced its personal computer (PC) for use in the home, office and schools. The 1980's saw an expansion in computer use in all three arenas as clones of the IBM PC made the personal computer even more affordable. The number of personal computers in use more than doubled from 2 million in 1981 to 5.5 million in 1982. Ten years later, 65 million PCs were being used. Computers continued their trend toward a smaller size, working their way down from desktop to laptop computers (which could fit inside a briefcase) to palmtop (able to fit inside a breast pocket). In direct competition with IBM's PC was Apple's Macintosh line, introduced in 1984. Notable for its user-friendly design, the Macintosh offered an operating system that allowed users to move screen icons instead of typing instructions. Users controlled the screen cursor using a mouse, a device that mimicked the movement of one's hand on the computer screen.

As computers became more widespread in the workplace, new ways to harness their potential developed. As smaller computers became more powerful, they could be linked together, or networked, to share memory space, software, and information, and to communicate with each other. As opposed to a mainframe computer, which was one powerful computer that shared time with many terminals for many applications, networked computers allowed individual machines to form electronic co-ops. Using either direct wiring, called a Local Area Network (LAN), or telephone lines, these networks could reach enormous proportions. A global web of computer circuitry, the Internet, for example, links computers worldwide into a single network of information. During the 1992 U.S. presidential election, vice-presidential candidate Al Gore promised to make the development of this so-called "information superhighway" an administrative priority. Though the possibilities envisioned by Gore and others for such a large network are often years (if not decades) away from realization, the most popular use today for computer networks such as the Internet is electronic mail, or E-mail, which allows users to type in a computer address and send messages through networked terminals across the office or across the world.


Fifth Generation (Present and Beyond)

Defining the fifth generation of computers is somewhat difficult because the field is in its infancy. The most famous example of a fifth generation computer is the fictional HAL9000 from Arthur C. Clarke's novel, 2001: A Space Odyssey. HAL performed all of the functions currently envisioned for real-life fifth generation computers. With artificial intelligence, HAL could reason well enough to hold conversations with its human operators, use visual input, and learn from its own experiences. (Unfortunately, HAL was a little too human and had a psychotic breakdown, commandeering a spaceship and killing most humans on board.)

Though the wayward HAL9000 may be far from the reach of real-life computer designers, many of its functions are not. Using recent engineering advances, computers may be able to accept spoken word instructions and imitate human reasoning. The ability to translate a foreign language is also a major goal of fifth generation computers. This feat seemed a simple objective at first, but appeared much more difficult when programmers realized that human understanding relies as much on context and meaning as it does on the simple translation of words.

Many advances in the science of computer design and technology are coming together to enable the creation of fifth-generation computers. One such engineering advance is parallel processing, which replaces von Neumann's single central processing unit design with a system harnessing the power of many CPUs working as one. Another is superconductor technology, which allows the flow of electricity with little or no resistance, greatly improving the speed of information flow. Computers today have some attributes of fifth generation computers. For example, expert systems assist doctors in making diagnoses by applying the problem-solving steps a doctor might use in assessing a patient's needs. It will take several more years of development before expert systems are in widespread use.
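As a toy illustration of the parallel-processing idea (nothing here is specific to any fifth-generation machine; it simply spreads one job across several CPUs), even ordinary Python can farm work out to multiple processors:

```python
from multiprocessing import Pool

def square(n: int) -> int:
    """A stand-in for a unit of work; each call can run on a different CPU."""
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:             # four worker processes act as one
        results = pool.map(square, range(10))   # the work is split among them
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```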