Tuesday, May 13, 2008

Hard Drive & Storage

Storage Review
StorageReview.com attempts to address a topic that seems sadly neglected in the online community: hard disk and storage-related performance. No, they're not glamorous. No, we can't come up with loads of interesting screen shots and pictures (hard drives tend to look very similar to one another after a while). Hard drive performance is, however, vital to the overall performance of your PC.
The Whole Drive Guide
Advice for the gigabyte-addicted: How to upgrade to today's best and biggest--or keep your current hard disks running smoothly.

SCSI Planet Hard Drive Comparisons
Comparison of 150+ SCSI hard drives with a link directly to the manufacturer's information and datasheets. You'll also find plenty of other SCSI info here.

Maxtor Interactive Jumper Guide
Well, obviously, they haven't made it any easier for the user since the most popular search words these days seem to be "Maxtor jumper settings". What is it about these guys not putting the simple little diagram on the drive?

ISA Multi-input/output Controller Cards Support
Can't find jumper settings for those old controller cards lying around? You might check here. I love Gigagon Data Corp's motto...striving to eliminate the nightmares of zero-support products.

Hard Drive Technology and Data Recovery
Excellent article, "Data Removal and Erasure From Hard Drives." Also info on history, design, controllers, and interfaces, all explained well, from Data Recovery Labs.

Western Digital Drive Parameters
(including obsolete)

The BIOS IDE Harddisk Limitations
This article targets PCs with a system BIOS dated from 1992 to 1998, which can limit the usable capacity of your new drive.

Quick Guide to a Win98 Fresh Start fdisk and reformat
This is a very simple, but adequate, guide for starting fresh in Win98. Fdisk seems to needlessly put a scare in us. If you have formatted your drive without fdisk...then you haven't done a real format at all.

SCSI Info Central
Gary Fields gives you the latest SCSI FAQ, SCSI Game Rules and other SCSI related stuff. The FAQ has a very nice layout. You almost forget it is a white paper. This guy knows his SCSI.

PC-Disk; Hard Drive Database
Incredible resource! A little confusing at first on picking model numbers, but dig a little further & you will be pleasantly surprised. "Big" hard disk database with over 5,000 disks, jumper settings and layouts! It passed my torture test with flying colors. Old, new, network, SCSI, etc.

Dan Kegel's Fast Hard Drives Page
It seems that the most important consideration in a disk drive is its rotation speed. 4,500 to 5,400 RPM are no longer common speeds....

Ontrack Jumper Viewer
A graphical, interactive Java applet for quickly finding jumper settings for IDE/ATA hard drives. This viewer is similar to the one in Ontrack's Disk Manager hard drive installation utilities. But with the online version, you always have access to their most current database of hard drives.

The Red Hill Guide to Hard Drives
A real decent rundown on hard drives, manufacturers, performance, SCSI, etc., with comparisons.

Zip & Jaz Drive Click Death
Find out more about a set of serious data-threatening problems being encountered with increasing frequency among users of Iomega's Zip and Jaz removable media mass storage systems. Download the FREE 55Kb "Trouble In Paradise" utility by Steve Gibson and see if that is your problem. Make sure to check out the rest of his site here.

SCSI Troubleshooting Guide
Troubleshooting SCSI connections. Scroll down the page.

Hard Disk Partitioning: Why and How
Hard Disk Partitioning, Why and How (for MS-DOS/Windows PCs) by Stan Brown.

Hard Drive Specifications and Jumper Settings
Full specs & jumper settings for the following Maxtor & Seagate drives. The (#) is how many of that drive type. Maxtor: PCMCIA (7), SCSI (36). Seagate: IDE/AT (90), IPI (24), MFM (24), RLL (12), SCSI (90), SMD (18).

Enhanced IDE FAQ & Utilities
The Enhanced IDE FAQ is an attempt to answer the most common questions concerning EIDE hard disks, CD-ROMs, tapes, interfaces and setup.
source : http://hardwarehell.com

Motherboard Manufacturers / Vendors Directory

Older Motherboard Manuals
Manuals for A-trend, Fordlian, Full Yes, Edom, Superpower, QDI, Freetech / Flexus, Chicony, and other hard-to-find boards.

Motherboard Proxy
If you need info on a pre-2000 board, then you might check out this site. Decent database; however, the last entry I can find is April of 2000.

Lost Circuits BIOS Guide
This article is meant to shed some light on the various parameters accessible in the mainboard BIOS through setup.

Hacking Your Password
When you really need to hack your password...you must discharge the CMOS. This article covers an assortment of ways to do this. (At your own risk, of course.)

BIOS Central
Great source for machine specific BIOS post codes and beep codes. Should be Post Code Central, but more BIOS content is expected soon.

The BIOS Web
Rundown of the BIOS including how to identify it, upgrade info, how POST operates, error messages, beep codes, setup utility info and much more. Plenty of other references. Worth checking out.

Adrian's Rojak Pot "BIOS Optimization Guide"
Explains various settings in the BIOS Features Setup, Chipset Features Setup, and Integrated Peripherals; the cool part is the comments section. Post your comment, whether it's a question, something unclear, or you just outright disagree.

The Bios Companion
Extracts from The BIOS Companion, the book that should come with your motherboard - it explains in plain English all the things you wanted to know about those secret settings, and more! The information is well presented and is well above A+ standard.

Weekly BIOS Tweaks
This page is only possible with the "valuable" input of Phil Croucher, author of "The BIOS Companion." Here are new extracts each week (or any other info that Phil decides to throw our way for getting the most from our computers).

Wim's Bios Page
A lot of coverage: company lists, flash BIOS, message board, etc.

Motherboard Manuals Data and More
Hey, Web Head Quarters is back! Great site for finding info and manuals for 486 and older Pentium motherboards. Diagnostic software tools, BIOS info, testing tips and more.

Intel i815 / i815E Motherboard Roundup
Good article that talks about the chipset, tells what to look for in an i815 / i815E board, and compares ten of the contenders. From AnandTech.

Identify Your Motherboard (Award)
Part of the BIOS serial number identifies the vendor: Wim has a great breakdown of vendors by translating your BIOS. Also, plenty of other BIOS-related material.

Identify Your Motherboard (AMI)
How to tell who made your motherboard from an AMI-based BIOS serial number. Also, plenty of other BIOS-related material.

The Flash BIOS Site
If you want general info about flash BIOS, want information about flash BIOS programmer devices, or if yours is dead and must be re-programmed, Arthur Kerkmeester will do it for $5. Also covers chip and motherboard manufacturers. If you have an EPROM burner, you'll love this site.

POST (Power On Self Test) Error Codes
This is a pretty extensive list of the POST (Power On Self Test) error codes, including beep codes, for most IBM PCs and compatibles from Unicore Software. They have been around...

PC BIOS - General Guidelines
Phoenix Technologies has merged with Award Software and here you will find answers to the most frequently asked BIOS questions they receive. A very good BIOS rundown.

Motherboard Ratings Survey & User Reviews
Good motherboard ratings survey results from SysOpt's never-ending "great" info collection. Pretty extensive list of user reviews.

PC Builder Motherboard Section
From the PC Buyers Guide. Excellent reviews, specs, CPU reports, benchmarks, building and upgrading info, etc. from Graeme S. Bennett. An excellent resource I frequent often.

Motherboard Home World
Manufacturers, Chips, Vendors, Reviews, Recommendations, Buying Secrets, Search, Chipsets, Mobo ID tools and more. (IMHO, Spot was better.)

PCGuide - Ref - System BIOS
The system BIOS is the lowest-level software in the computer; it acts as an interface between the hardware (especially the chipset and processor) and the operating system.

Motherboard Manual Page
"Great" collection of slightly older motherboard manuals! A lot of good info on motherboards, BIOS, upgrading, etc. From House of Hall.

Unicore Software
BIOS upgrades for your Award, Mr. Bios, Phoenix and AMI. Good info in their support area. Most of their info comes from the book above, The BIOS Companion.

FTP root at ftp.megatrends.com
BIOS for AMI boards, Upgrades & Manuals, other manuals, tech tips, Mega Raid, Utility etc. Good ftp if you know what you are looking for.

Motherboards and More
AOpen Component Solutions is a component company of Acer. Get all the motherboard info, manuals, BIOS updates, etc. here. (Not to mention other components as well.)

The AMI Bios Survival Guide
The guide is a little dated (1997), but still an incredible source of information. There are a number of places suggested to find further information. This information was edited by Jean-Paul Rodrigue and Phil Croucher and represents the work of many contributors.

Intel Motherboard Manuals and Jumper Settings

Through PII...with thumbnails.

Intel Data Sheets and Programming Manuals
This page contains pointers to Intel documentation.

source : http://hardwarehell.com

Friday, May 9, 2008

Supercomputer

A supercomputer is a computer that is considered at the time of its introduction to be at the frontline in terms of processing capacity, particularly speed of calculation. The term "Super Computing" was first used by the New York World newspaper in 1929[1] to refer to large custom-built tabulators that IBM had made for Columbia University.

Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). Cray himself never used the word "supercomputer"; a little-remembered fact is that he recognized only the word "computer". In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as IBM and HP, who had purchased many of the 1980s companies to gain their experience.
The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard. Typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. (This is commonly and humorously referred to as the attack of the killer micros in the industry.) Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Itanium, or x86-64, and most modern supercomputers are now highly-tuned computer clusters using commodity processors combined with custom interconnects.
Software tools
Software tools for distributed processing include standard APIs such as MPI and PVM, and open source-based software solutions such as Beowulf, WareWulf and openMosix which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy programming language for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community which often creates disruptive technology in this arena.
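The scatter/compute/gather pattern these tools provide across many machines can be sketched on a single machine with Python's standard multiprocessing module. This is only an analogy to what MPI-style tools do at cluster scale, not real MPI; the function names are illustrative:

```python
# Scatter/compute/gather pattern, loosely analogous to MPI scatter + reduce,
# sketched with Python's standard-library multiprocessing module.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker computes its share of the total independently."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Scatter: split the input into one chunk per worker.
    chunks = [data[i::workers] for i in range(workers)]
    # Compute in parallel, then gather and reduce the partial results.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # 332833500
```

Real cluster tools replace the `Pool` with message passing over a network, but the divide-then-combine structure of the program is the same.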


Common uses
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Major universities, military agencies and scientific research laboratories are heavy users.

A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires semi-infinite computing resources.

Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.


Hardware and software design

(Figure: processor board of a Cray Y-MP vector computer.)
Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
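Amdahl's law puts this concretely: if a fraction s of a program's work is inherently serial, the best possible speedup on N processors is 1 / (s + (1 - s) / N), which approaches 1/s no matter how many processors are added. A quick sketch:

```python
def amdahl_speedup(serial_fraction, processors):
    """Maximum speedup predicted by Amdahl's law for a program whose
    serial_fraction cannot be parallelized, run on `processors` CPUs."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even with 1000 processors, a 5% serial portion caps speedup near 20x,
# which is why supercomputer designs fight so hard to remove serialization:
print(round(amdahl_speedup(0.05, 1000), 1))  # 19.6
```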


Supercomputer challenges, technologies
A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason: hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1-5 microseconds to send a message between CPUs are typical.
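That latency floor is simple arithmetic: light travels about 0.3 metres per nanosecond in vacuum (signals in cable are slower still), so a machine ten metres across cannot move a signal between its far corners in much under ~33 ns. A quick check:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # vacuum; real cable signals are slower

def min_latency_ns(distance_m):
    """Lower bound on one-way signal latency over distance_m metres."""
    return distance_m / SPEED_OF_LIGHT_M_PER_S * 1e9

# A supercomputer 10 metres across: latency is at least tens of nanoseconds.
print(round(min_latency_ns(10), 1))  # 33.4
```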
Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
Technologies developed for supercomputers include:

Vector processing
Liquid cooling
Non-Uniform Memory Access (NUMA)
Striped disks (the first instance of what was later called RAID)
Parallel filesystems

Processing techniques
Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD processing instructions for general-purpose computers.

Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several teraFLOPS. The applications to which this power could be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).
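The idea behind SIMD/vector processing can be sketched in plain Python. The function names here are illustrative, not a real instruction set: a scalar CPU issues one add per element, while a vector unit applies a single operation across a whole register of elements at once.

```python
# A scalar CPU adds two arrays one element at a time,
# issuing one add instruction per element:
def scalar_add(a, b):
    out = []
    for i in range(len(a)):
        out.append(a[i] + b[i])  # one instruction, one pair of operands
    return out

# A vector (SIMD) unit instead applies a single "add" across whole
# registers of data; an array-at-a-time call is the closest Python idiom:
def vector_add(a, b):
    return [x + y for x, y in zip(a, b)]  # stands in for one SIMD add

print(vector_add([1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
```

Both produce the same result; the hardware win is that the vector form needs far fewer instruction fetches and decodes per element.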
more : http://en.wikipedia.org/wiki/Supercomputer
