Wednesday, April 30, 2008

Random access memory

Random access memory (usually known by its acronym, RAM) is a type of computer data storage. Today it takes the form of integrated circuits that allow the stored data to be accessed in any order, i.e. at random. The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its position in the memory bank and whether or not it is related to the previous piece of data.[1]

This contrasts with storage mechanisms such as tapes, magnetic discs and optical discs, which rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than the data transfer, and the retrieval time varies depending on the physical location of the next item.

The word RAM is mostly associated with volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. However, many other types of memory are RAM as well (i.e. Random Access Memory), including most types of ROM and a kind of flash memory called NOR-Flash.

History
The first type of random access memory was the magnetic core memory, developed in 1951, and used in all computers up until the development of the static and dynamic RAM integrated circuits in the late 1960s and early 1970s. Prior to the development of the magnetic core memory, computers used relays or vacuum tubes to perform memory functions.


Overview

Types of RAM
Modern types of writable RAM generally store a bit of data either in the state of a flip-flop, as in SRAM (static RAM), or as a charge in a capacitor (or transistor gate), as in DRAM (dynamic RAM), EPROM, EEPROM and Flash. Some types have circuitry to detect and/or correct random faults, called memory errors, in the stored data, using parity bits or error correction codes. RAM of the read-only type, ROM, instead uses a metal mask to permanently enable/disable selected transistors, rather than storing a charge in them.

As both SRAM and DRAM are volatile, other forms of computer storage, such as disks and magnetic tapes, have been used as "permanent" storage in traditional computers. Many newer products such as PDAs and small music players (up to 160 GB as of January 2008) do not have hard disks, but instead rely on flash memory to maintain data between sessions of use; the same can be said of products such as mobile phones, advanced calculators, synthesizers, etc. Even certain categories of personal computers, such as the OLPC XO-1, Asus Eee PC, and others, have begun replacing magnetic disks with so-called flash drives. There are two basic types of flash memory: the NOR type, which is capable of true random access, and the NAND type, which is not; the former is therefore often used in place of ROM, while the latter is used in most memory cards and solid-state drives, due to its lower price.


Memory hierarchy

One module of 128 MB NEC SD-RAM.

Many computer systems have a memory hierarchy consisting of CPU registers, on-die SRAM caches, external caches, DRAM, paging systems, and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components makes the access time variable, although not to the extent that access to rotating storage media or a tape is variable. (Generally, the memory hierarchy follows the access times, with the fast CPU registers at the top and the slow hard drive at the bottom.)

In most modern personal computers, the RAM comes in an easily upgraded form of modules called memory modules or DRAM modules, about the size of a few sticks of chewing gum. These can quickly be replaced should they become damaged or too small for current purposes. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in the CPU and other ICs on the motherboard, as well as in hard drives, CD-ROMs, and several other parts of the computer system.
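
As a rough illustration on a Linux system (command availability varies by distribution, so treat this as a sketch), you can see how much RAM is installed and how the individual modules are populated:

$ free -m                       # total and used memory, in megabytes
# dmidecode --type memory       # per-module details as reported by the BIOS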


Swapping
If a computer becomes low on RAM during intensive application cycles, the computer can resort to swapping. In this case, the computer temporarily uses hard drive space as additional memory. Constantly relying on this type of backup memory is called thrashing, which is generally undesirable because it lowers overall system performance. In order to reduce the dependency on swapping, more RAM can be installed.
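
As a rough sketch of what this looks like on a Linux system (the file name and size below are only illustrative), you can check current swap usage and add a temporary swap file instead of, or in addition to, installing more RAM:

$ free -m                                        # show RAM and swap usage
# dd if=/dev/zero of=/swapfile bs=1M count=512   # create a 512 MB file
# mkswap /swapfile                               # format it as swap space
# swapon /swapfile                               # enable it as extra swap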


Other uses of the term
Other physical devices with read/write capability can have "RAM" in their names: for example, DVD-RAM. "Random access" is also the name of an indexing method: hence, disk storage is often called "random access" because the reading head can move relatively quickly from one piece of data to another, and does not have to read all the data in between. However, the final "M" is crucial: "RAM" (provided there is no additional term as in "DVD-RAM") always refers to a solid-state device.


"RAM disks"
Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster hard drive that is called a RAM disk. Unless the memory used is non-volatile, a RAM disk loses the stored data when the computer is shut down. However, volatile memory can retain its data when the computer is shut down if it has a separate power source, usually a battery.
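
On Linux, one simple way to get a RAM-backed filesystem is a tmpfs mount; the mount point and size below are made up for illustration, and, as noted above, the contents disappear when the machine powers off:

# mkdir /mnt/ramdisk
# mount -t tmpfs -o size=256m tmpfs /mnt/ramdisk
$ df -h /mnt/ramdisk            # confirm the RAM-backed filesystem is mounted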


Recent developments
Several new types of non-volatile RAM, which will preserve data while powered down, are under development. The technologies used include carbon nanotubes and the magnetic tunnel effect. In summer 2003, a 128 KB magnetic RAM chip manufactured with 0.18 µm technology was introduced. The core technology of MRAM is based on the magnetic tunnel effect. In June 2004, Infineon Technologies unveiled a 16 MB prototype, again based on 0.18 µm technology. Nantero built a functioning 10 GB carbon nanotube memory prototype array in 2004. However, whether any of these technologies will eventually be able to take significant market share from DRAM, SRAM, or flash-memory technology remains to be seen.

In 2006, "solid-state drives" (based on flash memory) with capacities exceeding 150 gigabytes and speeds far exceeding traditional disks became available. This development has started to blur the definition between traditional random access memory and "disks", dramatically reducing the difference in performance.


Memory wall
The "memory wall" is the growing disparity of speed between the CPU and memory outside the CPU chip. An important reason for this disparity is the limited communication bandwidth beyond chip boundaries. From 1986 to 2000, CPU speed improved at an annual rate of 55% while memory speed improved at only 10% per year. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance.[2]
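
To see how quickly those growth rates diverge, a back-of-the-envelope calculation (using the 55% and 10% figures above, purely as an illustration) shows the CPU/memory speed gap widening by roughly 40% per year, or about two orders of magnitude over that 14-year span:

$ awk 'BEGIN { printf "%.1fx\n", (1.55/1.10)^14 }'
121.7x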

Currently, CPU speed improvements have slowed significantly partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in their Platform 2015 documentation (PDF):

“First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat (more on power consumption below). Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called Von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address.”

The RC delays in signal transmission were also noted in Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures which projects a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014. The data on Intel Processors clearly shows a slowdown in performance improvements in recent processors. However, Intel's new processors, Core 2 Duo (codenamed Conroe) show a significant improvement over previous Pentium 4 processors; due to a more efficient architecture, performance increased while clock rate actually decreased.


Security concerns
Contrary to simple models (and perhaps common belief), the contents of modern SDRAM modules aren't lost immediately when the computer is shut down; they fade away - a process that takes only seconds at room temperature, but which can be extended to minutes at low temperatures. It is therefore possible, for example, to get hold of an encryption key if it was stored in ordinary working memory (i.e. the SDRAM modules).
read more

How RAM Works

by Jeff Tyson and Dave Coustan

Random access memory (RAM) is the best known form of computer memory. RAM is considered "random access" because you can access any memory cell directly if you know the row and column that intersect at that cell.

The opposite of RAM is serial access memory (SAM). SAM stores data as a series of memory cells that can only be accessed sequentially (like a cassette tape). If the data is not in the current location, each memory cell is checked until the needed data is found. SAM works very well for memory buffers, where the data is normally stored in the order in which it will be used (a good example is the texture buffer memory on a video card). RAM data, on the other hand, can be accessed in any order.

In this article, you'll learn all about what RAM is, what kind you should buy and how to install it.


Dynamic RAM
Similar to a microprocessor, a memory chip is an integrated circuit (IC) made of millions of transistors and capacitors. In the most common form of computer memory, dynamic random access memory (DRAM), a transistor and a capacitor are paired to create a memory cell, which represents a single bit of data. The capacitor holds the bit of information -- a 0 or a 1 (see How Bits and Bytes Work for information on bits). The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state.

A capacitor is like a small bucket that is able to store electrons. To store a 1 in the memory cell, the bucket is filled with electrons. To store a 0, it is emptied. The problem with the capacitor's bucket is that it has a leak. In a matter of a few milliseconds a full bucket becomes empty. Therefore, for dynamic memory to work, either the CPU or the memory controller has to come along and recharge all of the capacitors holding a 1 before they discharge. To do this, the memory controller reads the memory and then writes it right back. This refresh operation happens automatically thousands of times per second.



The capacitor in a dynamic RAM memory cell is like a leaky bucket.
It needs to be refreshed periodically or it will discharge to 0.

This refresh operation is where dynamic RAM gets its name. Dynamic RAM has to be dynamically refreshed all of the time or it forgets what it is holding. The downside of all of this refreshing is that it takes time and slows down the memory.

Memory cells are etched onto a silicon wafer in an array of columns (bitlines) and rows (wordlines). The intersection of a bitline and wordline constitutes the address of the memory cell.



Memory is made up of bits arranged in a two-dimensional grid.
In this figure, red cells represent 1s and white cells represent 0s.
To write data, a column is selected and then the rows are charged to set the bits in that column.

DRAM works by sending a charge through the appropriate column (CAS) to activate the transistor at each bit in the column. When writing, the row lines contain the state the capacitor should take on. When reading, the sense-amplifier determines the level of charge in the capacitor. If it is more than 50 percent, it reads it as a 1; otherwise it reads it as a 0. The counter tracks the refresh sequence based on which rows have been accessed in what order. The length of time necessary to do all this is so short that it is expressed in nanoseconds (billionths of a second). A memory chip rating of 70ns means that it takes 70 nanoseconds to completely read and recharge each cell.

Memory cells alone would be worthless without some way to get information in and out of them. So the memory cells have a whole support infrastructure of other specialized circuits. These circuits perform functions such as:

Identifying each row and column (row address select and column address select)
Keeping track of the refresh sequence (counter)
Reading and restoring the signal from a cell (sense amplifier)
Telling a cell whether it should take a charge or not (write enable)
Other functions of the memory controller include identifying the type, speed, and amount of memory and checking for errors.

Protect Your Stuff With Encrypted Linux Partitions

By Carla Schroder



We see the headlines all the time: "Company X Loses 30,000,000 Customer Social Security Numbers and Other Intimately Personal and Financial Data! Haha, Boy Are Our Faces Red!" And it always turns out to be some "contractor" (notice how it's never an employee) who had the entire wad on a laptop with (seemingly) a terabyte hard drive, which was then lost or stolen, but nobody is quite sure where or when. Or it's a giant box of backup tapes that was being transported by a vendor, who apparently cannot afford a vehicle with locking doors. To me it sounds pretty darned lame, even surreal; why in the heck do contractors get all that sensitive data in the first place, and why do they need the world's databases on their laptops? Why are giant boxes of sensitive backup tapes being carted around by some minimum-wage kid in a beatermobile? How come they never quite know what data is missing, and if it was encrypted or protected in any way?

So many questions, so few answers. Today let us focus on the issue of protecting sensitive data on hard drives with encrypted file systems. This is for your mobile users and anyone who needs extra data security on workstations and servers. We're going to use cryptsetup-luks because it is easy and it is strong. We will create an encrypted partition that requires a passphrase only at mount time. Then you can use it just like any other partition.

Debian, Ubuntu, and Fedora all come ready to run cryptsetup-luks. You won't need to hack kernels or anything; just install it. On Debian and the Buntu family:

# aptitude install cryptsetup

On Fedora:

# yum install cryptsetup-luks

Preparing Your System
Unfortunately cryptsetup cannot encrypt your existing data; you must create an encrypted partition, then move your data to it. The easy way to manage your partitions is with GParted. GParted (the Gnome Partition editor) is available on all the major Linux distributions, and is a nice graphical front-end to fdisk, mkfs, and other filesystem utilities. With GParted you can resize, move, delete and create partitions, and format them with your favorite filesystem. It supports all the partition types and filesystems supported by your kernel, so you can even use it on Windows partitions on your dual-boot boxes. You can use the GParted live CD on new empty hard drives.

We're just going to encrypt data partitions. There are ways to encrypt other filesystem partitions that hold potentially sensitive data, such as /var and /etc, but it is complex and tricky because these cannot be encrypted at boot. So I am going to chicken out and merely point to a page that tells how to do this in Resources, because in my own testing I have not gotten it working reliably. Yet.

It doesn't matter if you format your partition with a filesystem at this point because everything will be overwritten, and the filesystem formatted after encryption.

Your encrypted partition will be protected by a password. If you lose your password, you are out of luck: your data will not be recoverable.
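
Before you run any encryption commands, it is worth double-checking which device actually holds your new, empty partition; the device names below are only placeholders for whatever your system reports:

# fdisk -l            # list all disks and partitions
$ df -h               # make sure the target is not a partition you are already using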

Encrypting the Partition
Once you have a nice new empty partition, you'll encrypt it with the cryptsetup command. Be very sure you are encrypting the correct partition:

# cryptsetup --verbose --verify-passphrase -c aes-cbc-plain luksFormat /dev/sda2

WARNING!
========
This will overwrite data on /dev/sda2 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase:
Verify passphrase:
Command successful.

This creates the encrypted partition. Now you need to create and name a mountable logical partition. In this example, it is named sda2, which could be test or fred or mysecretpartition, or anything you want:
# cryptsetup luksOpen /dev/sda2 sda2
Enter LUKS passphrase:
key slot 0 unlocked.
Command successful.

This should show as a block device in /dev/mapper:

$ ls -l /dev/mapper
total 0
crw-rw---- 1 root root 10, 63 2007-06-09 18:38 control
brw-rw---- 1 root disk 254, 0 2007-06-09 19:46 sda2

Now put a filesystem on the logical partition:

# mkfs.ext3 /dev/mapper/sda2

Now you need to make a mount point so you can mount and use this nice new encrypted partition. Remember, you must use the device name from /dev/mapper/. I'll put it in my home directory. Watch for operations that require rootly powers:

$ mkdir /home/carla/crypted
# mount /dev/mapper/sda2 /home/carla/crypted

Confirm that it mounted, and write a test file:

# df -H
[...]
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/sda2 7.9G 152M 7.3G 3% /home/carla/crypted
# cd /home/carla/crypted
# nano test
# ls
lost+found test

Making it Available to Users
So far so good! But there is one big problem: only root can access this partition. We need our ordinary user to be able to use it. This virtual partition can be managed in /etc/fstab, just like any other partition. So add a line to /etc/fstab to allow an unprivileged user to mount and unmount the partition:

/dev/mapper/sda2 /home/carla/crypted ext3 user,atime,noauto,rw,dev,exec,suid 0 0

Now Carla can mount it herself:

$ mount ~/crypted

But Carla still cannot write to it. For this we need rootly powers one more time, to put the correct ownership and permissions on the mounted block device:

# chown carla:carla /home/carla/crypted/
# chmod 0700 /home/carla/crypted/

Ok then, that's a lot of Carlas! But now Carla has her own encrypted directory to read and write to just like any other directory in her home directory, and no one else can touch it.

You may unmount and shut off the encrypted partition manually like this:

$ umount ~/crypted
# cryptsetup luksClose sda2

You'll need your LUKS password only when you open the encrypted device. Remember, if you lose this password you are toast. You may delete the partition and start over, but your data are unrecoverable. Once the encrypted device is open and mounted, you may use it like any other partition.

You need root powers to run cryptsetup. This is probably not ideal for your users. There are a number of different ways to handle this. One is to use sudo; *buntu users are already set up with an all-powerful sudo. Another option is to configure it to start up at boot, and close at shutdown. Or you might want to create some nice desktop icons so your users can start it up and shut it down easily on demand.
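
If you go the boot-time route on a Debian-style system, one common approach (an assumption about your setup, not something tested in this article) is an /etc/crypttab entry plus a matching /etc/fstab line, so the cryptdisks boot scripts prompt for the passphrase at startup:

# /etc/crypttab: <target name> <source device> <key file> <options>
sda2    /dev/sda2    none    luks
# /etc/fstab
/dev/mapper/sda2  /home/carla/crypted  ext3  defaults  0  2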

We'll learn how to do these things next week, plus we'll learn how to encrypt USB keys, and how to set up a failsafe for a lost passphrase.

read more

Create Encrypted Volumes With Cryptmount and Linux

By Carla Schroder

Cryptmount is a friendly front-end to a batch of Linux utilities used to create encrypted volumes, such as device mapper, dm-crypt, and the kernel's loopback device. It requires root privileges to create encrypted files or partitions, and then once it's set up users can mount and unmount their own encrypted volumes on demand. Its major features are:

Users can change their own passwords
Encrypted filesystems can be initialized at boot-up or on demand
Encrypted access keys are OpenSSL-compatible
Supports storing access keys on removable media
Encrypt entire partitions, or create several encrypted filesystems on a single partition
Plain text human-readable configuration files
So it has several advantages over the excellent Cryptsetup (see Resources). Users can mount and unmount their own encrypted volumes without a bunch of /etc/fstab and sudo hacks, you have more flexibility because you're not restricted to encrypting entire block devices, and making encrypted filesystems available on demand leaves the unneeded ones in a safer state. From the user's perspective it treats raw disk partitions, any individual file, loopback devices, and LVM volumes all in the same way because Cryptmount operates on an encrypted device-mapper layer (which you can see in /dev/mapper after it's created).


Cryptmount is slowly making its way into various distribution repositories. Debian Testing and Unstable currently have cryptmount 2.1. Ubuntu Feisty and Gutsy contain the moldy old 2.0 version in the Universe repository. The current stable release on Sourceforge is 2.2. You can get .debs, RPMs, and source tarballs on Sourceforge. You want at least 2.1 to get the cryptmount-setup command, plus a number of useful fixes and updates.

Encrypted Filesystem Inside a File
You don't have to encrypt an entire partition, but can create an encrypted filesystem inside an ordinary file. Use the cryptmount-setup script to do this. This example has the uninteresting bits removed:

# cryptmount-setup
Please enter a target name for your filesystem
[opaque]: mystuff

Which user should own the filesystem (leave blank for root)
[]: carla

Please specify where "mystuff" should be mounted
[/home/carla/crypt]:

Enter the filesystem size (in MB)
[64]: 1028

Enter a filename for your encrypted container
[/home/carla/crypto.fs]: /home/carla/mystuff.fs

Enter a location for the keyfile
[/etc/cryptmount/mystuff.key]:

enter password for target "mystuff":

Your new encrypted filesystem is now ready for use.
To access, try:
cryptmount mystuff
cd /home/carla/crypt
After you have finished using the filesystem, try:
cd
cryptmount --unmount mystuff

Do not choose a wimpy password, and do not forget your password, because if you lose it it's not recoverable. You can wipe out the encrypted filesystem and start over, but you cannot recover your data. You should also make backup copies of your access keys and keep them in a safe place, because losing the key also loses your data.

The defaults are in square brackets. You may invent whatever names you like, and the script will create directories for you. When you specify a filename, be sure to use the whole path. When it's finished you will have a new crypt (or whatever you named it) directory and three new files, which in this example are named container, crypto.fs, and mystuff.fs. Don't try to read these files because they are just containers.

Go ahead and mount your new encrypted filesystem; cryptmount-setup tells you exactly the command you need:

$ cryptmount mystuff
enter password for target "mystuff":
e2fsck 1.40.8 (13-Mar-2008)
/dev/mapper/mystuff: clean, 11/32768 files, 9805/131072 blocks

Now there is a new /dev/mapper/mystuff block device. Only the user you specified during setup (and root) can mount and unmount the encrypted filesystem. Play around with it— copy files in and out of it, look at it in your favorite file manager—it looks just like any other directory. Unmount it just like the setup script told you:

$ cd
$ cryptmount --unmount mystuff

A silent exit means success. If you have anything accessing your encrypted directory, such as a command prompt, file manager, or open file, you'll get the "umount: /home/carla/crypt: device is busy" error. Running the cd command first puts you back in the top level of your home directory. Sometimes the famd daemon will get in the way and you'll have to kill it. Don't use the standard umount command or it will get messed up, and you won't be able to re-mount it.
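
If you cannot tell what is holding the directory open, a quick check (using the mount point from this example) is:

$ fuser -vm /home/carla/crypt    # list processes using the mounted filesystem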

If you make a mistake and get a "specification for target 'foo' contains non-absolute pathname" error, or any other error message, edit /etc/cryptmount/cmtab to correct it. Or delete the entry and start over.

If you create more than one encrypted filesystem, cryptmount -l displays a list. Users can change their passwords with cryptmount -c [targetname].
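
For example, using the target created earlier (long option names shown here; check man cryptmount if your version differs):

$ cryptmount --list                        # show all configured targets
$ cryptmount --change-password mystuff     # change the password for one target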

Using a Different Linux Filesystem
cryptmount-setup defaults to using Ext3. If you want to use something else, such as ReiserFS, JFS, or XFS, first find out if your kernel supports it:

$ cat /proc/filesystems
nodev sysfs
nodev rootfs
[...]
ext3
jfs
reiserfs
xfs

nodev filesystems are pseudo filesystems that don't directly access a physical storage device. This example shows that all four major Linux filesystems are supported.
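
If the filesystem you want is not listed, you may be able to load its kernel module by hand before running cryptmount-setup (module names usually match the filesystem name; this is a general Linux tip rather than anything specific to cryptmount):

# modprobe xfs
$ grep xfs /proc/filesystems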

Encrypting an Entire Partition
If you would rather encrypt an entire partition it's better to create it manually. In this example we'll use a partition on a second hard drive, and mount it in the user's home directory. First create an entry in /etc/cryptmount/cmtab like this:

manual {
    dev=/dev/hdb5
    dir=/home/terry/manual
    fstype=reiserfs
    fsoptions=defaults
    cipher=aes
    keyfile=/etc/cryptmount/manual.key
    keyformat=builtin
}

This tells Cryptmount that your target name is "manual", that you want /dev/hdb5 to be your encrypted partition, and that it should be mounted in /home/terry/manual. You also specify the filesystem type and options, and the cipher you prefer (which will be used to encrypt your filesystem); which ciphers are available depends on what your system supports. Run this command to find out:

$ ls -l /lib/modules/$(uname -r)/kernel/crypto/
ablkcipher.ko
aes.ko
anubis.ko
arc4.ko
[...]

The correct kernel module will be automatically loaded when you mount the encrypted filesystem. man cmtab describes all the options and tells you which ones are required.

Next, generate your encryption key, specifying the size in bytes. This might involve a bit of math, since it's more common to use bits. This example creates a 32-byte/256-bit key. The key size depends on your chosen cipher, which you're going to have to research your own self:

# cryptmount --generate-key 32 manual
generating random key, please be patient
enter new password for target "manual":
confirm password:
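
As for the bit of math: a key size quoted in bits, divided by eight, gives the byte count that cryptmount expects. A quick sanity check in the shell, using 256 bits purely as an example figure:

$ echo $((256 / 8))
32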

Then run this command, using your own target name of course:

# cryptmount --prepare manual
enter password for target "manual":

Now create the filesystem:

# mkreiserfs /dev/mapper/manual

Now run:

# cryptmount --release manual

Then create the mountpoint as the user it's going to belong to for fewer permissions-fixing hassles:

# su terry
$ mkdir /home/terry/manual

Next, mount it as the user with cryptmount manual. You'll probably have to tweak file permissions to allow a non-root user to read and write to the new encrypted partition, so while it is mounted, fix the permissions and ownership:

# chown terry:terry /home/terry/manual
# chmod 0700 /home/terry/manual


You're welcome to tweak the permissions however you like; this makes Terry the owner and group owner, and only Terry can access this directory. Now Terry should be able to mount and unmount the encrypted filesystem, read and write to it, create and delete directories, and change her own password.

To mount encrypted filesystems automatically at boot, enter them in /etc/default/cryptmount. Refer to the well-written man cryptmount and man cmtab for additional options. Your installation or source tarball should contain additional examples and help, such as a sample /etc/cmtab that shows how to painlessly encrypt your swap file, and how to store your access key on a USB stick instead of on your computer. It is possible to create password-less keys, so then your USB stick operates just like an ordinary door key.

read more

Can Ruckus Redefine How Enterprise WLANs are Deployed?

eliminates the need for complex, labor-intensive RF planning.

“SmartMesh is an extension of our patented technology that allows us to adapt to changes in a Wi-Fi environment. It sits on top of a chipset and has a smart antenna system, which focuses RF energy toward a client and it steers it in real-time if it experiences interference. You don’t have to do anything other than plug it in. You get a longer range signal, in some cases two to four times the distance; you get a more consistent performance, and less interference,” says Callisch.

The almighty dollar
Ruckus estimates that the cost of a typical 500-user WLAN using the industry’s most popular enterprise 802.11g WLAN system (a Cisco 4402 controller, plus 25 Cisco 1131 802.11g APs, plus 150 Ethernet drops) is approximately $35,000.

In contrast, the Ruckus SmartMesh solution (15 APs, five Ethernet drops, and a Smart WLAN controller) comes in under $15,000, offers faster 802.11n, and can be installed in roughly half the time, since extensive wiring and site planning are not needed.

“We are giving them the ability to deploy robust wireless LANs at a very low cost point and very quickly,” says Callisch. “And we’ve got some proof points, in terms of the customers who have done that, that can prove it. We think it will fundamentally change the economics of wireless LAN deployments by giving the enterprise managers a much easier and quicker way to build hi-performance wireless LANs without the high cost associated with wired wireless LANS. There’s a lot of hospitals and hotels and schools where there’s not a lot of IT guys and money is an issue, but they still have students and guests with Wi-Fi devices and they need a robust wireless LAN that provides coverage. If there’s power, we can work. Adding wireless capacity is as easy as plugging in a light bulb.”

Other offerings
In addition to SmartMesh, Ruckus Wireless also announced the ZoneDirector 3000, a new line of scalable enterprise-class Smart WLAN controllers. With support for up to 250 ZoneFlex APs, the ZoneDirector 3000 provides a Smart WLAN controller option for large enterprise environments.

All ZoneDirectors and/or ZoneFlex APs are also now manageable through the Ruckus FlexMaster Wi-Fi management system. Using FlexMaster, enterprises can securely manage remote smart WLANs and APs in regional or branch offices from a single point over the Internet or private IP networks.

"If you had 50 hotels, and each has a WLAN, but not an IT staff, you can troubleshoot from a distance. You can see how many users, reboot, reconfigure, change access IDs--it's super granular control," says Callisch.

Ruckus in the middle

“We sell primarily to the middle market, the big chasm of the market not served by Cisco, Aruba, and Trapeze or the low-end served by consumer products like Netgear and D-Link. We target the middle: schools, hotels, hospitals. They require an affordable solution that’s easy to use,” says Callisch.

The first Ruckus customer to deploy the SmartMesh 802.11n system is Lodgian, Inc., one of the largest independent owners and operators of full-service hotels in the United States, which deployed the new Smart Wi-Fi Ruckus solution at its Crowne Plaza Beach Oceanfront Resort Hotel in Melbourne, Florida two months ago.

Prior to the deployment, the hotel received roughly four customer complaints per day about the quality of Lodgian’s legacy Wi-Fi network. Callisch says that number has been reduced to four every two weeks.

“Our access points reach really, really far,” says Callisch. “Hops are bad in mesh, and we can eliminate the needless hops that you would normally get with a meshing system. That gives the client better coverage and performance. It’s an easy-to-use, scaleable system. We have taken the best concepts from enterprise-grade and stripped out all of the complexity and made it simple to use at the middle end of the market.”

Availability and pricing
SmartMesh is currently available as a free software upgrade (ZoneDirector 6.0 software) to premium support customers with Ruckus ZoneDirector Smart WLAN controllers. It can be used with the entire family of Ruckus ZoneFlex 802.11g/n Smart Wi-Fi access points. ZoneDirector 3000 ($6,000 for 25 APs), will be available in July. The FlexMaster management system is available now for ZoneFlex APs ($5,000 for 100 APs). FlexMaster managing ZoneDirectors will be available in July.
read more

Is PCI-SSC Securing the Enterprise or Lining Pockets?

By Sonny Discini

When we were all introduced to the PCI standard, organizations right down to mom and pop operations were hopeful that the regulation would address many of the security issues involved with payment cards. Before long, security pros in the trenches realized that the initiative added a slew of technical difficulties while executives realized the crippling financial implications of the standard. Mom and pop stores were simply left in a cloud of confusion over the regulation.

And so, many still remain in that state.

Even so, we pressed on, doing our best to meet the requirements and acquire PCI certification. Many of us realized that even with massive overhauls, and the blessing of a Qualified Security Assessor (QSA), gaping issues still exist along with tons of confusion over the interpretation of the regulation.

A large Pennsylvania health care provider was faced with costs too great to maintain operations and still meet PCI regulations. Their executives decided to do what many others have already done after making failed attempts at compliance – roll the dice and hope not to get fined.

The strategy failed not once but twice.

Today, that same health care provider has what is described by staff as "crippling" lockdowns that prevent the business from actually operating. Many organizations have been financially hurt more by the regulation than from data leakage or theft.

A security auditor with a QSA outfit who asked to remain anonymous states, "We've run into many cases where interpretation of the standard by the organization drastically contradicts the interpretation by the QSA they hired. In addition, QSAs offer significantly different opinions to the same organization, which adds greater pressure, frustration, and confusion to the issue. Many times, organizations over compensate and go well beyond the requirements hoping to avoid fines and data disclosures."

Of course, the PCI Security Standards Council heard the cries from the field. How did they respond? They added more requirements such as PCI PIN Entry Device (PED) Security Requirements and the Payment Application Data Security Standard (PA-DSS) along with an anticipated revision of the main PCI-DSS regulation.

PA-DSS requirements apply to commercial payment applications that are sold, distributed or licensed to third parties. PA-DSS requirements do not apply to in-house payment applications, but these applications must still be secured in accordance with PCI-DSS.

In addition, the Council will be qualifying companies to become Payment Application Qualified Security Assessors (PA-QSAs) in the coming months. Companies that are PA-QSA approved will be recognized in a Council-maintained and published list and can begin conducting PA-DSS assessments in accordance with PA-DSS Security Audit Procedures.

All companies that were previously recognized as PA-QSAs under Visa Payment Application Best Practices (PABP) will need to enroll and re-validate as a Council PA-QSA. Payment applications validated as compliant under Visa's PABP program will transition to the PCI-SSC approved list.

But are these requirements going to simply put the squeeze on focus areas and move the threat vector somewhere else in the business process? How will this impact risk ownership?

Who's Minding the Back Door?
Let's look at Hannaford food stores for just a moment. The company said that the data breach it disclosed on March 17 involved malicious software that was found on computer servers at about 300 of the company's stores.

The software reportedly intercepted credit card data during checkout and sent captured information overseas.

It's obvious that while this organization was PCI certified, criminals still managed to load malware on 300 hosts across its enterprise and exploit data in transit for three months.

That said, the new regulations coming down from the PCI-SSC are supposed to deal with the above issues and more. Forgive me if I'm pessimistic here but from what others and I have seen, reactive regulations seem to be falling short of the mark on all fronts. In addition, they multiply the work needed to comply.

First of all, it adds a three-card monte shell game with regard to risk. Auditors and the organization push it around the table, each hoping to avoid being the outfit that ultimately ends up holding the bag. Now add more regulations and the situation only gets muddier.

So let's recap. PCI was introduced to deal with security issues with payment cards. The regulation caused more problems than it solved, and as a nice side effect, it generated a healthy cash flow in the way of fines. Criminals ran amuck in a PCI certified environment by exploiting 300 hosts and attacking data in transit. And now, organizations have to deal with the new regulations AND re-certify even though they already hold Visa PABP.

Today it appears that organizations are going to have to deal with a web of red tape under the new trio of PCI regulations. On top of that, a wonderful new niche market has been created for "qualified" application assessors/auditors and scanners. This of course means that you're going to see more expenses added to the PCI pile. It should be clear to many that additional regulations are not going to improve the situation we're in, or in layman's terms, you can't improve an overcooked steak by cooking it longer.

While the stated mission of the PCI Security Standards Council is to enhance payment account security by driving education and awareness of the PCI Data Security Standard and other standards that increase payment data security, criminals, executives and security practitioners understand the impact that the regulation has caused. And while security pros run around plugging leaks in the dam, and while organizations struggle to finance these plugs, criminals are simply shifting the attack vector to areas that PCI doesn't cover or hasn't identified as an issue yet.
read more

Monday, April 21, 2008

AMD and Their Struggles - CPUs

AMD’s biggest market by far is CPU sales. AMD had been doing very well until recently. Their X2 dual core processor isn’t a bad processor; it spanked the Intel Pentium D processors. It’s just that Intel came out with a product that is far superior to any that it offered before; in particular, Intel's processor is superior to the AMD X2. As far as Best Buy or Circuit City computer sales go, I would have to say that between Intel and AMD, it’s about 50/50 as to which company's CPU is in the computer. Move to sales from places like newegg.com or zipzoomfly.com, however, and the numbers begin to skew toward Intel. Grandma and Grandpa couldn’t care less which overclocks better; they are going to just hit the power button and want it to work. Overclockers have a different take. Whichever chip overclocks better and offers the absolute best performance is what flies off the shelves.

Dual core CPU is so last year. These days, it’s all about quad cores. Intel had theirs out in November 2006. AMD didn’t have a quad core on the market until a year later. At the time of this writing, it is still hard to find AMD’s quad core CPU for sale. If finding them weren’t hard enough, there is a major problem with them involving data corruption and system hangs. Sounds really bad, doesn’t it? Well it’s not all that uncommon for CPUs to have this issue, but most of the time the problem can be quickly fixed without a noticeable performance change. However, the patch for this problem does throw a monkey wrench into the performance. Your options are: run the patch and lose performance, or run without the patch and risk data loss. Looks like it’s time to wait and see how this problem is fixed in the next revision of the CPU.

AMD’s CPU business has taken a hit recently, and the company can’t seem to get a great product out the door. We finally saw AMD’s quad core, but it is flawed and not readily available. Currently, the only speeds available are 2.2 GHz and 2.3 GHz. In contrast, Intel’s bottom-of-the-line quad core runs at 2.4 GHz and tops out at 3.0 GHz. AMD is trying to catch up in performance, trying to get their efficiency up to Intel's level, but these slower speeds are only hurting the performance.

To try to keep sales up, even with a lesser product, AMD dropped prices. If they can’t compete in clock-for-clock performance, they can drop the price to the point where the extra performance you get for the same money creates a competing market. This increased sales, but drastically decreased revenue.

AMD’s new marketing scheme is green. The CPUs they are making now are more energy efficient. This will save money in the long run from an energy standpoint. Many businesses are starting to try to cut budgets and saving on electricity is a place to start. On the flip side, AMD chips don’t have as much computing power, so you will need more time and CPUs to make up for it.

read more here

AMD and Their Struggles

If you have been in the tech news loop for the past year or two, you probably haven’t heard much good news from the AMD (Advanced Micro Devices) headquarters. In this article, we will take a look back at what has happened and see where things might be headed in the near future. Will AMD end up going bankrupt or could they topple Intel? Read on to see where AMD is going.

Chances are that you have heard of AMD and probably know they make CPUs. Things have been going down a bumpy road for the company for some time. The Phenom quad cores are buggy and only the low end ones are out. ATI doesn’t have a killer card on the market yet and only recently put out a decent card in the HD 3800 series.

History

Most people's first memories of AMD chips are probably the AMD Athlon XP CPUs. Until then, most people probably only knew of Intel. The Athlon XP changed that. It offered the same or better performance in many applications and also cost less than comparable Intel chips.

AMD carried their success into their next generation of CPUs, the Athlon 64. Once again they came out on top of Intel in price and performance. They also had support for 64-bit computing, which was a step ahead of Intel, and looked set to change the computer world. Sadly, 64-bit is still at the market entry stage. A vast majority of computers being sold are running 32-bit software.

AMD’s last smashing of Intel came with the Athlon X2 dual core CPU. They didn’t beat Intel to the market with their dual core, but they managed to make it more efficient and consume less power. AMD didn’t rush to the market, but took their time. Intel slapped two cores into a single CPU, but in the long run it hurt them. AMD's slower approach meant they got it right the first time.

After this AMD started running into some snags. Intel released their Core Duo processors that overtook AMD’s best CPU clock for clock. They also were able to run at a higher frequency. From this point on, many things have continued to swing against AMD.

read more here

Sunday, April 13, 2008

Hardware description language

In electronics, a hardware description language or HDL is any language from a class of computer languages for formal description of electronic circuits. It can describe the circuit's operation, its design and organization, and tests to verify its operation by means of simulation.

A Hardware Description Language (HDL) is a standard text-based expression of the temporal behaviour and/or (spatial) circuit structure of an electronic system. In contrast to a software programming language, an HDL's syntax and semantics include explicit notations for expressing time and concurrency which are the primary attributes of hardware. Languages whose only characteristic is to express circuit connectivity between a hierarchy of blocks are properly classified as netlist languages.

HDLs are used to write executable specifications of some piece of hardware. A simulation program, designed to implement the underlying semantics of the language statements, coupled with simulating the progress of time, provides the hardware designer with the ability to model a piece of hardware before it is created physically. It is this executability that gives the illusion of HDLs being a programming language. Simulators capable of supporting discrete event (digital), and continuous time (analog) modeling exist and HDLs targeted for each are available.

It is certainly possible to represent hardware semantics using traditional programming languages such as C++, augmented with extensive and unwieldy class libraries. However, the C++ language does not include any capability for expressing time explicitly, and consequently is not a proper hardware description language.

Using the proper subset of virtually any (hardware description or software programming) language, a software program called a synthesizer can infer hardware logic operations from the language statements and produce an equivalent netlist of generic hardware primitives to implement the specified behaviour. This typically (as of 2004) requires the synthesizer to ignore the expression of any timing constructs in the text. The ability to have a synthesizable subset of the language does not itself make a hardware description language.

Designing a system in HDL is generally much harder and more time consuming than writing a program that would do the same thing using a programming language like C. Consequently, there has been much work done on automatic conversion of C code into HDL, but this has not reached a high level of commercial success as of 2004.
History of HDLs
The first hardware description languages were ISP, developed at Carnegie Mellon University, and KARL, developed at University of Kaiserslautern, both around 1977. ISP was, however, more like a software programming language used to describe relations between the inputs and the outputs of the design. Therefore, it could be used to simulate the design, but not to synthesize it. KARL included design calculus language features supporting VLSI chip floorplanning and Structured hardware design, which was also the basis of KARL's interactive graphic sister language ABL, implemented in the early 1980s as the ABLED graphic VLSI design editor, by the telecommunication research center CSELT at Torino, Italy. In the mid 80's, a VLSI design framework was implemented around KARL and ABL by an international consortium funded by the commission of the European Union (chapter in [1]). In 1983 Data-I/O introduced ABEL. It was targeted for describing programmable logical devices and was basically used to design finite state machines.

The first modern HDL, Verilog, was introduced by Gateway Design Automation in 1985. Cadence Design Systems later acquired the rights to Verilog-XL, the HDL-simulator which would become the de-facto standard (of Verilog simulators) for the next decade. In 1987, a request from the U.S. Department of Defense led to the development of VHDL (Very High Speed Integrated Circuit Hardware Description Language.) Initially, Verilog and VHDL were used to document and simulate circuit-designs already captured and described in another form (such as a schematic file.) HDL-simulation enabled engineers to work at a higher level of abstraction than simulation at the schematic-level, and thus increased design capacity from hundreds of transistors to thousands.

The introduction of logic-synthesis for HDLs pushed HDLs from the background into the foreground of digital-design. Synthesis tools compiled HDL-source files (written in a constrained format called "RTL") into a manufacturable gate/transistor-level netlist description. Writing synthesizeable RTL files required practice and discipline on the part of the designer; compared to a traditional schematic-layout, synthesized-RTL netlists were almost always larger in area and slower in performance. Circuit design by a skilled engineer, using labor-intensive schematic-capture/hand-layout, would almost always outperform its logically-synthesized equivalent, but synthesis's productivity advantage soon displaced digital schematic-capture to exactly those areas which were problematic for RTL-synthesis: extremely high-speed, low-power, or asynchronous circuitry. In short, logic synthesis not only propelled HDLs into a central role for digital design, it was a revolutionary technology for digital-circuit design industry.

Within a few years, both VHDL and Verilog emerged as the dominant HDLs in the electronics industry, while older and less-capable HDLs gradually disappeared from use. But VHDL and Verilog share many of the same limitations: neither HDL is suitable for analog/mixed-signal circuit simulation. Neither possesses language constructs to describe recursively-generated logic structures. Specialized HDLs (such as Confluence) were introduced with the explicit goal of fixing a specific Verilog/VHDL limitation, though none were ever intended to replace VHDL/Verilog.

Over the years, a lot of effort has gone into improving HDLs. The latest iteration of Verilog, formally known as IEEE 1800-2005 SystemVerilog, introduces many new features (classes, random variables, and properties/assertions) to address the growing need for better testbench randomization, design hierarchy, and reuse. A future revision of VHDL is also in development, and is expected to match SystemVerilog's improvements. Both VHDL and Verilog, with their continual refinements, are expected to remain in active use for years to come.


Design using HDL
The vast majority of modern digital circuit-design revolves around an HDL-description of the desired circuit, device, or subsystem.

Most designs begin on traditional pencil and paper, as a written set of requirements or a high-level architectural diagram. The process of writing the HDL-description is highly dependent on the designer's background and the circuit's nature. The HDL is merely the 'capture language' -- designers often begin with a high-level algorithmic description (such as MATLAB or a C++ mathematical model.) Control and decision structures are often prototyped in flowchart applications, or entered in a state-diagram editor. Designers even use scripting-languages (such as PERL) to auto-generate repetitive circuit-structures in the HDL language. Advanced text-editors (such as EMACS) offer editor templates to auto-indent, color-highlight syntax keywords, and macro-expand entity/architecture/signal declarations.

As the design's implementation is fleshed out, the HDL-code invariably must undergo code review (i.e. auditing.) In preparation for synthesis, the HDL-description is subject to an array of automated checkers. The checkers enforce standardized-code guidelines (to identify ambiguous code-constructs before they can cause mis-interpretation by downstream synthesis) and check for common logical-coding errors (such as dangling ports or shorted outputs.)

In industry parlance, HDL-design generally ends at the synthesis stage. Once the synthesis-tool has mapped the HDL-description into a gate-netlist, the netlist is passed off to the 'back-end' stage. Depending on the physical technology (FPGA vs ASIC gate-array vs ASIC standard-cell), HDLs may or may not play a significant role in the back-end flow. In general, as the design-flow progresses toward a physically realizeable form, the design-database becomes progressively more laden with technology-specific information, which cannot be stored in a generic HDL-description. The end result is a silicon chip that would be manufactured in a fab.


Simulating and debugging HDL code
Essential to HDL-design is the ability to simulate HDL programs. Simulation allows a HDL-description of a design (called a model) to pass design verification, an important milestone that validates the design's intended function (specification) against the code-implementation (HDL-description.) It also permits architectural exploration. The engineer can experiment with design choices by writing multiple variations of a base design, then comparing their behavior in simulation. Thus, simulation is critical for successful HDL-design.

To simulate an HDL-model, the engineer writes a toplevel simulation environment (called a testbench.) At a minimum, the testbench contains an instantiation of the model (called the device-under-test or DUT), pin/signal declarations for the model's I/O, and a clock-waveform. The testbench-code is event-driven: the engineer writes HDL-statements to implement the (testbench-generated) reset-signal, to model interface-transactions (such as a host-bus read/write), and to monitor the DUT's output. The HDL-simulator, which is the program which executes the testbench, maintains the simulator-clock, the master reference for all events in the testbench. Events occur only at the instants dictated by the testbench-HDL (such as a reset-toggle coded into the testbench), or in reaction (by the model) to stimulus and triggering events. Modern HDL-simulators have a full-featured GUI (graphical user interface), complete with a suite of debug tools. These allow the user to stop/restart the simulation at any time, insert simulator breakpoints (independent of the HDL-code), and monitor/modify any element in the HDL-model's hierarchy. Modern-simulators can also link the HDL-environment to user-compiled libraries, through a defined PLI/VHPI interface. Linking is machine-dependent (Win32/Linux/SPARC), as the HDL-simulator and user-libraries are compiled and linked outside the HDL-environment.

Design verification is often the most time-consuming portion of the design process, due to the disconnect between a device's functional specification, designer interpretation of the specification, and imprecision of the HDL-language. The majority of the initial test/debug cycle is conducted in the HDL simulator environment, as the early stage of the design is subject to frequent and major circuit changes. An HDL-description can also be prototyped and tested in hardware -- programmable logic device are often used for this purpose. Hardware prototyping is comparatively more expensive than HDL-simulation, but offers a real-world view of the design. Prototyping is the best way to check interfacing against other hardware-devices, and hardware-prototypes, even those running on slow FPGAs, offer much faster simulation times than pure HDL-simulation.


Design Verification with HDLs
Historically, design verification was a laborious, repetitive loop of writing and running simulation testcases against the design-under-test. As chip designs have grown larger and more complex, the task of design verification has grown to the point where it now dominates the schedule of a design-team. Looking for ways to improve design productivity, the EDA industry developed the property specification language.

In formal verification terms, a property is a factual statement about the expected or assumed behavior of another object. Ideally, for a given HDL-design description, a property (or properties) can be proven true or false using formal mathematical methods. In practical terms, many properties cannot be proven because they occupy an unbounded solution space. However, if provided a set of operating assumptions or constraints, a property-checker tool can prove (or disprove) more properties, over the narrowed solution space.

The assertions do not model circuit activity, but rather, capture and document the "designer's intent" in the HDL code-listing. In a simulation environment, the simulator evaluates all specified assertions, reporting the location and severity of any violations. In a synthesis environment, the synthesis tool would probably take the policy of halting synthesis on any violation. Assertion-based verification is still in its infancy, but is expected to become an integral part of the HDL design-toolset.


HDL and programming languages
An HDL is analogous to a software programming language, but with major differences. Programming languages are inherently procedural (single-threaded), with limited syntactical and semantic support to handle concurrency. HDLs, on the other hand, can model multiple parallel processes (such as flipflops, adders, etc.) that automatically execute independently of one another. Any change to the process's input automatically triggers an update in the simulator's process stack. Both programming languages and HDLs are processed by a compiler (usually called a synthesizer in the HDL case), but with different goals. For HDLs, 'compiler' refers to synthesis, a process of transforming the HDL code-listing into a physically-realizable gate netlist. The netlist-output can take any of many forms: a "simulation" netlist with gate-delay information, a "handoff" netlist for post-synthesis place&route, or a generic industry-standard EDIF format (for subsequent conversion to a JEDEC-format file.)

On the other hand, a software compiler converts the source code listing into microprocessor-specific object code for execution on the target microprocessor. As HDLs and programming languages borrow concepts and features from each other, the boundary between them is becoming less distinct. However, pure HDLs are unsuitable for general-purpose software application development, just as general-purpose programming languages are undesirable for modeling hardware. As electronic systems grow increasingly complex, and reconfigurable systems become increasingly mainstream, there is growing desire in the industry for a single language that can perform some tasks of both hardware design and software programming. SystemC is an example of such a language: embedded-system hardware can be modeled as non-detailed architectural blocks (black boxes with modeled signal inputs and output drivers). The target application is written in C/C++ and natively compiled for the host development system (as opposed to targeting the embedded CPU, which would require simulating the embedded CPU on the host). A SystemC model's high level of abstraction is well suited to early architecture exploration, as the architect can quickly evaluate architectural modifications with little concern for signal-level implementation issues.


[edit] Languages

[edit] Digital circuit design
The two most widely used and well-supported HDL varieties in industry are:

VHDL
Verilog
Others include:

Advanced Boolean Expression Language (ABEL)
AHDL (Altera HDL, a proprietary language from Altera)
Atom (behavioral synthesis and high-level HDL based on Haskell)
Bluespec (high-level HDL originally based on Haskell, now with a SystemVerilog syntax)
Confluence (a functional HDL; has been discontinued)
CUPL (a proprietary language from Logical Devices, Inc.)
HDCaml (based on Objective Caml)
Hardware Join Java (based on Join Java)
HML (based on SML)
Hydra (based on Haskell)
JHDL (based on Java)
Lava (based on Haskell)
Lola (a simple language used for teaching)
MyHDL (based on Python)
PALASM (for Programmable Array Logic (PAL) devices)
Ruby (hardware description language)
RHDL (based on the Ruby programming language)
CoWareC, a C-based HDL by CoWare. Now discontinued in favor of SystemC
SystemVerilog, a superset of Verilog, with enhancements to address system-level design and verification
SystemC, a standardized set of C++ classes and libraries for behavioral and transaction-level modeling of digital hardware at a high level of abstraction, i.e. system level

Microarchitecture

In computer engineering, microarchitecture (sometimes abbreviated to µarch or uarch) is a description of the electrical circuitry of a computer, central processing unit, or digital signal processor that is sufficient for completely describing the operation of the hardware.

In academic circles, the term computer organization is used, while in the computer industry, the term microarchitecture is more often used. Microarchitecture and instruction set architecture (ISA) together constitute the field of computer architecture.
Etymology of the term
Since the 1950s, many computers have used microprogramming to implement the control logic that decodes program instructions and executes them. The bits within the microprogram words controlled the processor at the level of electrical signals.

The term microarchitecture was used to describe the units that were controlled by the microprogram words, as opposed to architecture that was visible and documented for programmers. While architecture usually had to be compatible between hardware generations, the underlying microarchitecture could be easily changed.


[edit] Relation to instruction set architecture
Microarchitecture is distinct from a computer's instruction set architecture. The instruction set architecture is the abstract image of a computing system that is seen by a machine language (or assembly language) programmer, including the instruction set, memory address modes, processor registers, and address and data formats. The computer organization is a lower-level, more concrete description of the system than the ISA. The computer organization shows the constituent parts of the system, how they are interconnected, and how they interoperate in order to implement the architectural specification.[1][2][3]

Different machines may have the same instruction set architecture, and thus be capable of executing the same programs, yet have different microarchitectures. These different microarchitectures (along with advances in semiconductor manufacturing technology) are what allow newer generations of processors to achieve higher performance levels than previous generations. In theory, a single microarchitecture (especially if it includes microcode) could be used to implement two different instruction sets, by programming two different control stores.

The microarchitecture of a machine is usually represented as a block diagram that describes the interconnections of the registers, buses, and functional blocks of the machine. This description includes the number of execution units, the type of execution units (such as floating point, integer, branch prediction, and single instruction, multiple data (SIMD)), the nature of the pipeline (which might include such stages as instruction fetch, decode, assign, execution, and completion in a very simple pipeline), the cache memory design (level 1 and level 2 interfaces), and the peripheral support.

The actual physical circuit layout, hardware construction, packaging, and other physical details are called the implementation of that microarchitecture. Two machines may have the same microarchitecture, and hence the same block diagram, but different hardware implementations.[4]


[edit] Aspects of microarchitecture
The pipelined datapath is the most commonly used datapath design in microarchitecture today. This technique is used in most modern microprocessors, microcontrollers, and DSPs. The pipelined architecture allows multiple instructions to overlap in execution, much like an assembly line. The pipeline includes several different stages which are fundamental in microarchitecture designs.[4] Some of these stages include instruction fetch, instruction decode, execute, and write back. Some architectures include other stages such as memory access. The design of pipelines is one of the central microarchitectural tasks.

Execution units are also essential to microarchitecture. Execution units include arithmetic logic units (ALU), floating point units (FPU), load/store units, branch prediction, and SIMD. These units perform the operations or calculations of the processor. The choice of the number of execution units, their latency and throughput is a central microarchitectural design task. The size, latency, throughput and connectivity of memories within the system are also microarchitectural decisions.

System-level design decisions such as whether or not to include peripherals, such as memory controllers, can be considered part of the microarchitectural design process. This includes decisions on the performance-level and connectivity of these peripherals.

Unlike architectural design, where achieving a specific performance level is the main goal, microarchitectural design pays closer attention to other constraints. Since microarchitecture design decisions directly affect what goes into a system, attention must be paid to such issues as:

chip area/cost
power consumption
logic complexity
ease of connectivity
manufacturability
ease of debugging
testability

[edit] Micro-Architectural Concepts
In general, all CPUs, whether single-chip microprocessors or multi-chip implementations, run programs by performing the following steps:

read an instruction and decode it
find any associated data that is needed to process the instruction
process the instruction
write the results out
Complicating this simple-looking series of steps is the fact that the memory hierarchy, which includes caching, main memory, and non-volatile storage such as hard disks (where the program instructions and data reside), has always been slower than the processor itself. The second step often introduces a lengthy (in CPU terms) delay while the data arrives over the computer bus. A considerable amount of research has been put into designs that avoid these delays as much as possible. Over the years, a central goal was to execute more instructions in parallel, thus increasing the effective execution speed of a program. These efforts introduced complicated logic and circuit structures. Initially these techniques could only be implemented on expensive mainframes or supercomputers, due to the amount of circuitry needed. As semiconductor manufacturing progressed, more and more of these techniques could be implemented on a single semiconductor chip.
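
The four steps can be made concrete with a toy interpreter: the sketch below fetches, decodes, executes, and writes back instructions of a tiny invented instruction set (not any real ISA), with memory access and I/O omitted so the steps stay visible.

# Toy fetch/decode/execute/write-back loop over an invented three-register
# instruction set; memory and I/O are omitted so the four steps stay visible.

registers = {"r0": 0, "r1": 0, "r2": 0}

program = [
    ("load_imm", "r0", 5),        # r0 <- 5
    ("load_imm", "r1", 7),        # r1 <- 7
    ("add", "r2", "r0", "r1"),    # r2 <- r0 + r1
]

pc = 0
while pc < len(program):
    instruction = program[pc]                 # 1. fetch the instruction
    opcode, dest, *operands = instruction     # 2. decode it
    if opcode == "load_imm":                  # 3. process (execute) it
        result = operands[0]
    elif opcode == "add":
        result = registers[operands[0]] + registers[operands[1]]
    registers[dest] = result                  # 4. write the result back
    pc += 1

print(registers)   # {'r0': 5, 'r1': 7, 'r2': 12}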

See the article Central Processing Unit for a more detailed discussion of operation basics.

See the article History of general purpose CPUs for a more detailed discussion of the development history of CPUs.

What follows is a survey of micro-architectural techniques that are common in modern CPUs.


[edit] Instruction Set choice
The choice of instruction set architecture greatly affects the complexity of implementing high-performance devices. Over the years, computer architects have strived to simplify instruction sets, enabling higher-performance implementations by allowing designers to spend their effort and time on features that improve performance, rather than on the complexity inherent in the instruction set.

Instruction set design has progressed through the CISC, RISC, VLIW, and EPIC styles. Architectures that address data parallelism include SIMD and vector architectures.


[edit] Instruction pipelining
Main article: instruction pipeline
One of the first, and most powerful, techniques to improve performance is the use of the instruction pipeline. Early processor designs would carry out all of the steps above for one instruction before moving on to the next. Large portions of the circuitry were left idle at any one step; for instance, the instruction decoding circuitry would be idle during execution, and so on.

Pipelines improve performance by allowing a number of instructions to work their way through the processor at the same time. In the same basic example, the processor would start to decode (step 1) a new instruction while the previous one was still waiting for results. This allows up to four instructions to be "in flight" at one time, making the processor look four times as fast. Although any one instruction takes just as long to complete (there are still four steps), the CPU as a whole "retires" instructions much faster and can be run at a much higher clock speed.
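
A back-of-the-envelope comparison makes the benefit visible: with a four-stage pipeline and no stalls, n instructions take roughly n + 3 cycles instead of 4n. The sketch below is an idealized count that ignores hazards, stalls, and branches; the numbers are illustrative only.

# Back-of-the-envelope comparison of unpipelined vs. ideally pipelined
# execution time, ignoring hazards, stalls, and branch effects.

def unpipelined_cycles(instructions, stages):
    return instructions * stages            # each instruction uses every stage alone

def pipelined_cycles(instructions, stages):
    return stages + (instructions - 1)      # fill the pipe once, then one per cycle

n, depth = 1000, 4
print(unpipelined_cycles(n, depth))   # 4000 cycles
print(pipelined_cycles(n, depth))     # 1003 cycles -> roughly 4x the throughput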

RISC designs make pipelines smaller and much easier to construct by cleanly separating each stage of the instruction process and making each stage take the same amount of time: one cycle. The processor as a whole operates in an assembly-line fashion, with instructions coming in one side and results out the other. Due to the reduced complexity of the classic RISC pipeline, the pipelined core and an instruction cache could be placed on a die of the same size that would otherwise fit the core alone in a CISC design. This was the real reason that RISC was faster. Early designs like the SPARC and MIPS often ran over 10 times as fast as Intel and Motorola CISC solutions at the same clock speed and price.

Pipelines are by no means limited to RISC designs. By 1986 the top-of-the-line VAX (the 8800) was a heavily pipelined design, slightly predating the first commercial MIPS and SPARC designs. Most modern CPUs (even embedded CPUs) are now pipelined, and microcoded CPUs with no pipelining are seen only in the most area-constrained embedded processors. Large CISC machines, from the VAX 8800 to the modern Pentium 4 and Athlon, are implemented with both microcode and pipelines. Improvements in pipelining and caching are the two major microarchitectural advances that have enabled processor performance to keep pace with the circuit technology on which they are based.


[edit] Cache
It was not long before improvements in chip manufacturing allowed for even more circuitry to be placed on the die, and designers started looking for ways to use it. One of the most common was to add an ever-increasing amount of cache memory on-die. Cache is simply very fast memory that can be accessed in a few cycles, as opposed to the "many" needed to talk to main memory. The CPU includes a cache controller which automates reading and writing from the cache: if the data is already in the cache it simply "appears", whereas if it is not, the processor is "stalled" while the cache controller reads it in.
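
The hit/miss behaviour that the cache controller automates can be sketched with a toy direct-mapped cache, in which each address maps to exactly one cache line. The sizes, the fake "main memory", and the access pattern below are invented purely for illustration.

# Toy direct-mapped cache: each address maps to exactly one cache line.
# A hit returns the cached value; a miss "stalls" to fetch from main memory.

NUM_LINES = 4
cache = [None] * NUM_LINES          # each entry: (tag, value) or None
main_memory = {addr: addr * 10 for addr in range(64)}   # fake slow memory

def read(addr):
    index = addr % NUM_LINES        # which cache line this address maps to
    tag = addr // NUM_LINES
    line = cache[index]
    if line is not None and line[0] == tag:
        return line[1], "hit"
    value = main_memory[addr]       # miss: the processor would stall here
    cache[index] = (tag, value)
    return value, "miss"

for addr in [3, 3, 7, 3]:
    print(addr, read(addr))   # miss, hit, miss (7 evicts 3's line), miss again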

RISC designs started adding cache in the mid-to-late 1980s, often only 4 KB in total. This number grew over time, and typical CPUs now have about 512 KB, while more powerful CPUs come with 1 or 2 or even 4, 6, 8 or 12 MB, organized in multiple levels of a memory hierarchy. Generally speaking, more cache means more speed.

Caches and pipelines were a perfect match for each other. Previously, it didn't make much sense to build a pipeline that could run faster than the access latency of off-chip memory. Using on-chip cache memory instead meant that a pipeline could run at the speed of the cache access latency, a much smaller length of time. This allowed the operating frequencies of processors to increase at a much faster rate than those of off-chip memory.


[edit] Branch Prediction
One of the barriers to achieving higher performance through instruction-level parallelism is the pipeline stalls and flushes caused by branches. Normally, whether a conditional branch will be taken isn't known until late in the pipeline, as conditional branches depend on results coming from a register. From the time that the processor's instruction decoder has figured out that it has encountered a conditional branch instruction to the time that the deciding register value can be read out, the pipeline might be stalled for several cycles. On average, every fifth instruction executed is a branch, so that is a large amount of stalling. If the branch is taken, the penalty is even worse, as all of the subsequent instructions that were in the pipeline need to be flushed.

Techniques such as branch prediction and speculative execution are used to lessen these branch penalties. Branch prediction is where the hardware makes educated guesses on whether a particular branch will be taken. The guess allows the hardware to prefetch instructions without waiting for the register read. Speculative execution is a further enhancement in which the code along the predicted path is executed before it is known whether the branch should be taken or not.
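
A classic textbook illustration of branch prediction is the two-bit saturating counter, which nudges its confidence up or down after each actual branch outcome. The sketch below shows a single such counter; real predictors keep tables of counters indexed by branch address, and the outcome sequence here is invented.

# Two-bit saturating counter branch predictor: states 0-1 predict "not taken",
# states 2-3 predict "taken"; the counter moves one step per actual outcome.

class TwoBitPredictor:
    def __init__(self):
        self.state = 2                       # start in "weakly taken"

    def predict(self):
        return self.state >= 2               # True means "predict taken"

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

predictor = TwoBitPredictor()
outcomes = [True, True, False, True, True, True]     # actual branch behaviour
correct = 0
for taken in outcomes:
    correct += predictor.predict() == taken
    predictor.update(taken)
print(f"{correct}/{len(outcomes)} predictions correct")   # 5/6 predictions correct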


[edit] Superscalar
Even with all of the added complexity and gates needed to support the concepts outlined above, improvements in semiconductor manufacturing soon allowed even more logic gates to be used.

In the outline above, the processor processes parts of a single instruction at a time. Computer programs could be executed faster if multiple instructions were processed simultaneously. This is what superscalar processors achieve, by replicating functional units such as ALUs. The replication of functional units only became possible when the die area of a single-issue processor no longer stretched the limits of what could be reliably manufactured. By the late 1980s, superscalar designs started to enter the marketplace.

In modern designs it is common to find two load units, one store unit (many instructions have no results to store), two or more integer math units, two or more floating point units, and often a SIMD unit of some sort. The instruction issue logic grows in complexity: it reads in a large list of instructions from memory and hands them off to whichever execution units are idle at that point. The results are then collected and re-ordered at the end.
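
The issue logic described above can be caricatured as follows: each cycle, the processor dispatches as many pending instructions as there are free execution units of the right type. The unit mix and instruction stream below are invented, and real issue logic must also respect register dependencies and ordering.

# Crude sketch of superscalar issue: per cycle, hand out instructions to
# whichever execution units of the right kind are still free.

from collections import Counter

units = {"int": 2, "fp": 1, "load": 1}        # available execution units per cycle

instructions = ["int", "int", "fp", "load", "int", "fp"]   # required unit per op

cycle = 0
pending = list(instructions)
while pending:
    free = Counter(units)
    issued, remaining = [], []
    for op in pending:
        if free[op] > 0:
            free[op] -= 1           # claim a free unit of the right type
            issued.append(op)
        else:
            remaining.append(op)    # no unit left this cycle; try again next cycle
    print(f"cycle {cycle}: issued {issued}")
    pending = remaining
    cycle += 1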


[edit] Out-of-order execution
The addition of caches reduces the frequency or duration of stalls due to waiting for data to be fetched from the memory hierarchy, but does not get rid of these stalls entirely. In early designs a cache miss would force the cache controller to stall the processor and wait. Of course there may be some other instruction in the program whose data is available in the cache at that point. Out-of-order execution allows that ready instruction to be processed while an older instruction waits on the cache, then re-orders the results to make it appear that everything happened in the programmed order.
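
A cartoon of the idea: instructions whose data is already available execute ahead of an older instruction that is stalled, while results are still committed in program order. The "cycles until data is ready" values below are invented, and real implementations add register renaming, reorder buffers, and memory ordering rules.

# Cartoon of out-of-order execution: instructions whose operands are ready
# run ahead of an older instruction stalled on a cache miss; results are
# still committed (here, merely listed) in program order.

# Each instruction: (name, cycle at which its data becomes available)
window = [("load A", 5), ("add B", 0), ("sub C", 0), ("use A", 6)]

completed = {}
for cycle in range(10):
    for name, ready_at in window:
        if name not in completed and cycle >= ready_at:
            completed[name] = cycle                 # executes once its data is ready

print("execution order:", sorted(completed, key=completed.get))
print("commit order   :", [name for name, _ in window])   # program order preserved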


[edit] Speculative execution
One problem with an instruction pipeline is that there is a class of instructions that must make their way entirely through the pipeline before execution can continue. In particular, conditional branches need to know the result of some prior instruction before "which side" of the branch to run is known. For instance, an instruction that says "if x is larger than 5 then do this, otherwise do that" will have to wait for the result of x to be known before it knows whether the instructions for this or that can be fetched.

For a small four-deep pipeline this means a delay of up to three cycles — the decode can still happen. But as clock speeds increase the depth of the pipeline increases with it, and some modern processors may have 20 stages or more. In this case the CPU is being stalled for the vast majority of its cycles every time one of these instructions is encountered.

The solution, or one of them, is speculative execution guided by branch prediction. In reality, one side of the branch will be taken much more often than the other, so it is often correct to simply go ahead and say "x will likely be smaller than five, start processing that". If the prediction turns out to be correct, a huge amount of time will be saved. Modern designs have rather complex prediction systems, which watch the results of past branches to predict the future with greater accuracy.


[edit] Multiprocessing and multithreading
Computer architects have become stymied by the growing mismatch in CPU operating frequencies and DRAM access times. None of the techniques that exploited instruction-level parallelism within one program could make up for the long stalls that occurred when data had to be fetched from main memory. Additionally, the large transistor counts and high operating frequencies needed for the more advanced ILP techniques required power dissipation levels that could no longer be cheaply cooled. For these reasons, newer generations of computers have started to exploit higher levels of parallelism that exist outside of a single program or program thread.

This trend is sometimes known as throughput computing. This idea originated in the mainframe market where online transaction processing emphasized not just the execution speed of one transaction, but the capacity to deal with massive numbers of transactions. With transaction-based applications such as network routing and web-site serving greatly increasing in the last decade, the computer industry has re-emphasized capacity and throughput issues.

One technique for achieving this parallelism is multiprocessing: building computer systems with multiple CPUs. Once reserved for high-end mainframes and supercomputers, small-scale (2-8 CPU) multiprocessor servers have become commonplace in the small-business market. For large corporations, large-scale (16-256 CPU) multiprocessors are common. Even personal computers with multiple CPUs have appeared since the 1990s.

With further transistor-size reductions made available by advances in semiconductor technology, multicore CPUs have appeared, in which multiple CPUs are implemented on the same silicon chip. They were initially used in chips targeting embedded markets, where simpler and smaller CPUs would allow multiple instantiations to fit on one piece of silicon. By 2005, semiconductor technology allowed dual high-end desktop CPUs to be manufactured in volume on a single CMP chip. Some designs, such as Sun Microsystems' UltraSPARC T1, have reverted to simpler (scalar, in-order) designs in order to fit more processors on one piece of silicon.

Another technique that has become more popular recently is multithreading. In multithreading, when the processor has to fetch data from slow system memory, instead of stalling for the data to arrive, the processor switches to another program or program thread which is ready to execute. Though this does not speed up a particular program/thread, it increases the overall system throughput by reducing the time the CPU is idle.

Conceptually, multithreading is equivalent to a context switch at the operating system level. The difference is that a multithreaded CPU can do a thread switch in one CPU cycle instead of the hundreds or thousands of CPU cycles a context switch normally requires. This is achieved by replicating the state hardware (such as the register file and program counter) for each active thread.
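
A rough sketch of the mechanism: two copies of the architectural state (program counter and registers) plus a selector, with the core switching to the other thread whenever the active one stalls. The "every third instruction misses the cache" rule below is invented purely to trigger switches, and the sketch assumes the other thread is always ready.

# Sketch of hardware multithreading: two threads each keep their own
# program counter and registers; when the active thread stalls, the core
# switches to the other one in a single (simulated) cycle.

threads = [
    {"pc": 0, "regs": [0] * 4, "stalled_until": 0},
    {"pc": 0, "regs": [0] * 4, "stalled_until": 0},
]

active = 0
for cycle in range(8):
    t = threads[active]
    if cycle < t["stalled_until"]:
        active = 1 - active          # stall: switch to the other thread's state
        t = threads[active]          # (assume the other thread is ready to run)
    t["pc"] += 1                     # "execute" one instruction for this thread
    if t["pc"] % 3 == 0:             # pretend every third instruction misses cache
        t["stalled_until"] = cycle + 4
    print(f"cycle {cycle}: thread {active} pc={t['pc']}")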

A further enhancement is simultaneous multithreading. This technique allows superscalar CPUs to execute instructions from different programs/threads simultaneously in the same cycle.

See the article History of general purpose CPUs for other research topics affecting CPU design.

Computer architecture

In computer engineering, computer architecture is the conceptual design and fundamental operational structure of a computer system. It is a blueprint and functional description of requirements (especially speeds and interconnections) and design implementations for the various parts of a computer — focusing largely on the way by which the central processing unit (CPU) performs internally and accesses addresses in memory.

It may also be defined as the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.

Computer architecture comprises at least three main subcategories:[1]

Instruction set architecture, or ISA, is the abstract image of a computing system that is seen by a machine language (or assembly language) programmer, including the instruction set, memory address modes, processor registers, and address and data formats.
Microarchitecture, also known as computer organization, is a lower-level, more concrete description of the system that involves how the constituent parts of the system are interconnected and how they interoperate in order to implement the ISA.[2] The size of a computer's cache, for instance, is an organizational issue that generally has nothing to do with the ISA.
System Design which includes all of the other hardware components within a computing system such as:
system interconnects such as computer buses and switches
memory controllers and hierarchies
CPU off-load mechanisms such as direct memory access
issues like multi-processing.
Once both the ISA and the microarchitecture have been specified, the actual device needs to be designed in hardware. This design process is often called implementation. Implementation is usually not considered architectural definition, but rather hardware design engineering.

Implementation can be further broken down into three pieces:

Logic Implementation/Design - where the blocks that were defined in the microarchitecture are implemented as logic equations.
Circuit Implementation/Design - where speed-critical blocks, logic equations, or logic gates are implemented at the transistor level.
Physical Implementation/Design - where the circuits are drawn out, the different circuit components are placed in a chip floor-plan or on a board and the wires connecting them are routed.
For CPUs, the entire implementation process is often called CPU design.

The term is also used to describe more general, wider-scale hardware architectures, such as cluster computing and Non-Uniform Memory Access (NUMA) architectures.
More sub-definitions
Some practitioners of computer architecture at companies such as Intel and AMD use finer distinctions:

Macroarchitecture - architectural layers that are more abstract than microarchitecture, e.g. ISA
ISA (Instruction Set Architecture) - as defined above
Assembly ISA - a smart assembler may convert an abstract assembly language common to a group of machines into slightly different machine language for different implementations
Programmer Visible Macroarchitecture - higher level language tools such as compilers may define a consistent interface or contract to programmers using them, abstracting differences between underlying ISA, UISA, and microarchitectures. E.g. the C, C++, or Java standards define different Programmer Visible Macroarchitecture - although in practice the C microarchitecture for a particular computer includes
UISA (Microcode Instruction Set Architecture) - a family of machines with different hardware level microarchitectures may share a common microcode architecture, and hence a UISA.
Pin Architecture - the set of functions that a microprocessor is expected to provide, from the point of view of a hardware platform. E.g. the x86 A20M, FERR/IGNNE or FLUSH pins, and the messages that the processor is expected to emit after completing a cache invalidation so that external caches can be invalidated. Pin architecture functions are more flexible than ISA functions - external hardware can adapt to changing encodings, or changing from a pin to a message - but the functions are expected to be provided in successive implementations even if the manner of encoding them changes.

[edit] Design goals
The exact form of a computer system depends on the constraints and goals for which it was optimized. Computer architectures usually trade off standards, cost, memory capacity, latency and throughput. Sometimes other considerations, such as features, size, weight, reliability, expandability and power consumption are factors as well.

The most common scheme carefully chooses the bottleneck that most reduces the computer's speed. Ideally, the cost is allocated proportionally to assure that the data rate is nearly the same for all parts of the computer, with the most costly part being the slowest. This is how skillful commercial integrators optimize personal computers.


[edit] Cost
Generally cost is held constant, determined by either system or commercial requirements.


[edit] Performance
Computer performance is often described in terms of clock speed (usually in MHz or GHz), which refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have higher performance. As a result, manufacturers have moved away from clock speed as the sole measure of performance. Other resources, such as the amount of cache a processor has, also matter: a fast clock achieves little if the processor spends much of its time waiting for data that is not in the cache. In general, performance improves with both higher clock speed and larger caches, but neither figure alone determines how fast a processor runs.
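
One standard way to see why clock speed alone can mislead is the first-order model of CPU time: execution time equals instruction count times cycles per instruction (CPI) divided by clock rate. The machines and numbers below are invented purely for illustration.

# First-order model of CPU time (often called the "iron law" of performance):
#   time = instruction_count x cycles_per_instruction / clock_rate
# A higher clock rate does not guarantee higher performance if CPI suffers.

def cpu_time(instructions, cpi, clock_hz):
    return instructions * cpi / clock_hz

program = 1_000_000_000                                   # invented workload size
machine_a = cpu_time(program, cpi=1.0, clock_hz=2.0e9)    # 2 GHz, 1 cycle/instruction
machine_b = cpu_time(program, cpi=2.5, clock_hz=3.0e9)    # 3 GHz but 2.5 cycles/instruction
print(f"A: {machine_a:.2f} s, B: {machine_b:.2f} s")      # A: 0.50 s, B: 0.83 s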

Modern CPUs can execute multiple instructions per clock cycle, which dramatically speeds up a program. Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs being run.

There are two main types of speed, latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (e.g. when the disk drive finishes moving some data). Performance is affected by a very wide range of design choices — for example, pipelining a processor usually makes latency worse (slower) but makes throughput better. Computers that control machinery usually need low interrupt latencies. These computers operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking almost immediately after they have been instructed to brake.

The performance of a computer can be measured using other metrics, depending upon its application domain. A system may be CPU bound (as in numerical calculation), I/O bound (as in a webserving application) or memory bound (as in video editing). Power consumption has become important in servers and portable devices like laptops.

Benchmarking tries to take all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it may not help one to choose a computer. Often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might play popular video games more smoothly. Furthermore, designers have been known to add special features to their products, whether in hardware or software, which permit a specific benchmark to execute quickly but which do not offer similar advantages to other, more general tasks.


[edit] Power consumption
Power consumption is another important design criterion for modern computers. Power efficiency can often be traded for performance or cost benefits. With the increasing power density of modern circuits as the number of transistors per chip scales (Moore's Law), power efficiency has increased in importance. Recent processor designs, such as the Intel Core 2, put more emphasis on increasing power efficiency. In the world of embedded computing, power efficiency has long been, and remains, a primary design goal next to performance.


[edit] Historical perspective
Early usage in computer context

The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members in 1959 of the Machine Organization department in IBM’s main research center. Johnson had occasion to write a proprietary research communication about Stretch, an IBM-developed supercomputer for Los Alamos Scientific Laboratory; in attempting to characterize his chosen level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements aimed at the level of “system architecture” – a term that seemed more useful than “machine organization.” Subsequently Brooks, one of the Stretch designers, started Chapter 2 of a book (Planning a Computer System: Project Stretch, ed. W. Buchholz, 1962) by writing, “Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints.” Brooks went on to play a major role in the development of the IBM System/360 line of computers, where “architecture” gained currency as a noun with the definition “what the user needs to know.” Later the computer world would employ the term in many less-explicit ways.

The first mention of the term architecture in the refereed computer literature is in a 1964 article describing the IBM System/360.[3] The article defines architecture as the set of “attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation.” In the definition, the programmer perspective of the computer’s functional behavior is key. The conceptual structure part of an architecture description makes the functional behavior comprehensible, and extrapolatable to a range of use cases. Only later on did ‘internals’ such as “the way by which the CPU performs internally and accesses addresses in memory,” mentioned above, slip into the definition of computer architecture.

Computer hardware

Computer hardware is the physical part of a computer, including the digital circuitry, as distinguished from the computer software that executes within the hardware. The hardware of a computer is infrequently changed, in comparison with software and hardware data, which are "soft" in the sense that they are readily created, modified or erased on the computer. Firmware is a special type of software that rarely, if ever, needs to be changed and so is stored on hardware devices such as read-only memory (ROM) where it is not readily changed (and is, therefore, "firm" rather than just "soft").

Most computer hardware is not seen by normal users. It is in embedded systems in automobiles, microwave ovens, electrocardiograph machines, compact disc players, and other devices. Personal computers, the computer hardware familiar to most people, form only a small minority of computers (about 0.2% of all new computers produced in 2003).
Typical PC hardware
A typical personal computer consists of a case or chassis in a tower shape (desktop) and the following parts:


(Images: internals of a typical personal computer; a typical motherboard, the ASRock K7VT4A Pro; inside a custom computer.)
[edit] Motherboard
Main article: Motherboard
The motherboard is the "heart" of the computer, through which all other components interface.

Central processing unit (CPU) - Performs most of the calculations which enable a computer to function, sometimes referred to as the "brain" of the computer.
Computer fan - Used to lower the temperature of the computer; a fan is almost always attached to the CPU, and the computer case will generally have several fans to maintain a constant airflow. Liquid cooling can also be used to cool a computer, though it focuses more on individual parts rather than the overall temperature inside the chassis.
Random Access Memory (RAM) - Fast-access memory that is cleared when the computer is powered-down. RAM attaches directly to the motherboard, and is used to store programs that are currently running.
Firmware - loaded from read-only memory (ROM); traditionally the Basic Input-Output System (BIOS) or, in newer systems, firmware compliant with the Extensible Firmware Interface (EFI).
Internal Buses - Connections to various internal components.
PCI
PCI-E
USB
HyperTransport
CSI (expected in 2008)
AGP (being phased out)
VLB (outdated)
External Bus Controllers - used to connect to external peripherals, such as printers and input devices. These ports may also be based upon expansion cards, attached to the internal buses.
parallel port (outdated)
serial port (outdated)
USB
firewire
SCSI (On Servers and older machines)
PS/2 (For mice and keyboards, being phased out and replaced by USB.)
ISA (outdated)
EISA (outdated)
MCA (outdated)

[edit] Power supply
Main article: Computer power supply
A case that holds a transformer, voltage control circuitry, and (usually) a cooling fan, and supplies power to run the rest of the computer. The most common legacy form factors are AT and Baby AT, but the current standards for PCs are ATX and microATX.


[edit] Storage controllers
Controllers for hard disk, CD-ROM and other drives (such as internal Zip and Jaz drives); on a conventional PC these are IDE/ATA. The controllers sit directly on the motherboard (on-board) or on expansion cards, such as a disk array controller. IDE is usually integrated on the motherboard, unlike SCSI, which is found mostly in servers. The floppy drive interface is a legacy MFM interface which is now slowly disappearing. All these interfaces are gradually being phased out in favor of SATA and SAS.


[edit] Video display controller
Main article: Graphics card
Produces the output for the visual display unit. This will either be built into the motherboard or attached in its own separate slot (PCI, PCI-E, PCI-E 2.0, or AGP), in the form of a Graphics Card.


[edit] Removable media devices
Main article: Computer storage
CD - the most common type of removable media, inexpensive but has a short life-span.
CD-ROM Drive - a device used for reading data from a CD.
CD Writer - a device used for both reading and writing data to and from a CD.
DVD - a popular type of removable media that is the same dimensions as a CD but stores up to 6 times as much information. It is the most common way of transferring digital video.
DVD-ROM Drive - a device used for reading data from a DVD.
DVD Writer - a device used for both reading and writing data to and from a DVD.
DVD-RAM Drive - a device used for rapid writing and reading of data from a special type of DVD.
Blu-ray - a high-density optical disc format for the storage of digital information, including high-definition video.
BD-ROM Drive - a device used for reading data from a Blu-ray disc.
BD Writer - a device used for both reading and writing data to and from a Blu-ray disc.
HD DVD - a high-density optical disc format and successor to the standard DVD. It was a discontinued competitor to the Blu-ray format.
Floppy disk - an outdated storage device consisting of a thin disk of a flexible magnetic storage medium.
Zip drive - an outdated medium-capacity removable disk storage system, first introduced by Iomega in 1994.
USB flash drive - a flash memory data storage device integrated with a USB interface, typically small, lightweight, removable and rewritable.
Tape drive - a device that reads and writes data on a magnetic tape, usually used for long term storage.

[edit] Internal storage
Hardware that keeps data inside the computer for later use and remains persistent even when the computer has no power.

Hard disk - for medium-term storage of data.
Solid-state drive - a device similar to hard disk, but containing no moving parts.
Disk array controller - a device to manage several hard disks, to achieve performance or reliability improvement.


[edit] Sound card
Main article: Sound card
Enables the computer to output sound to audio devices, as well as accept input from a microphone. Most modern computers have sound cards built into the motherboard, though it is common for a user to install a separate sound card as an upgrade.


[edit] Networking
Main article: Computer networks
Connects the computer to the Internet and/or other computers.

Modem - for dial-up connections
Network card - for DSL/Cable internet, and/or connecting to other computers.
Direct Cable Connection - use of a null modem cable to connect two computers via their serial ports, or a Laplink cable to connect them via their parallel ports.

[edit] Other peripherals
Main article: Peripheral
In addition, hardware devices can include external components of a computer system. The following are either standard or very common.


These include various input and output devices, usually external to the computer system.


[edit] Input
Main article: Input
Text input devices
Keyboard - a device used to input text and characters by depressing buttons (referred to as keys), similar to a typewriter. The most common English-language key layout is the QWERTY layout.
Pointing devices
Mouse - a pointing device that detects two dimensional motion relative to its supporting surface.
Trackball - a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.
Xbox 360 Controller - a controller made for the Xbox 360 which, with the use of the Switchblade application, can be used as an additional pointing device via the left or right thumbstick.
Gaming devices
Joystick - a general control device that consists of a handheld stick that pivots around one end, to detect angles in two or three dimensions.
Gamepad - a general game controller held in the hand that relies on the digits (especially thumbs) to provide input.
Game controller - a specific type of controller specialized for certain gaming purposes.
Image, Video input devices
Image scanner - a device that provides input by analyzing images, printed text, handwriting, or an object.
Webcam - a low resolution video camera used to provide visual input that can be easily transferred over the internet.
Audio input devices
Microphone - an acoustic sensor that provides input by converting sound into an electrical signal


[edit] Output
Main article: Output
Image, Video output devices
Printer - a peripheral device that produces a hard (usually paper) copy of a document.
Monitor - device that displays a video signal, similar to a television, to provide the user with information and an interface with which to interact.
Audio output devices
Speakers - a device that converts analog audio signals into the equivalent air vibrations in order to make audible sound.
Headset - a device similar in functionality to computer speakers, used mainly so as not to disturb others nearby.