Palit GeForce 9800GTX Review :: Palit 9800GTX Features

04-11-2008 · Category: Hardware - Video Cards

By Ben Sun
  • Bus interface: PCI Express 2.0 Support
  • Memory: 512MB
  • Memory Interface: 256bit
  • Core Clock: 675MHz
  • Memory Clock: 2200MHz (1100MHz x 2)
  • NVIDIA® unified architecture with GigaThread™ technology
  • Full Microsoft® DirectX® 10 Shader Model 4.0 support
  • 16x full-screen anti-aliasing
  • True 128-bit floating point high dynamic-range (HDR) lighting
  • NVIDIA® Quantum Effects™ physics processing technology
  • Dual Dual-link DVI outputs support 2560x1600 resolution displays
  • 3-way NVIDIA SLI® technology
  • NVIDIA® PureVideo™ HD technology
  • NVIDIA HybridPower™ Technology
  • OpenGL® 2.1 support
  • NVIDIA ForceWare® Unified Driver Architecture (UDA)
  • Certified for Microsoft® Windows Vista™
  • NVIDIA® Lumenex™ Engine
  • Dual 400MHz RAMDACs
  • Discrete, Programmable Video Processor
  • Hardware Decode Acceleration
  • High-Quality Scaling
  • Inverse Telecine (3:2 & 2:2 Pulldown Correction)
  • Bad Edit Correction
  • Integrated SD and HD TV Output
  • Noise Reduction
  • Edge Enhancement
  • Dynamic Contrast Enhancement
  • Dual Stream Decode Acceleration
  • Dual-link HDCP capable

Brand Name: Palit
Palit Part Number: GeForce 9800GTX
Graphics Chip: G92
Core Clock: 675MHz
Shader Clock: 1688MHz
Stream Processors (SPs): 128
Fabrication Process: 65nm
Transistors: 754 Million
Memory Clock: 1100MHz (2200MHz effective)
Memory Interface: 256-bit
Memory Bandwidth: 70.4 GB/second
Memory Size: 512MB
ROPs: 16
Texture Filtering Units: 64
Texture Filtering Rate: 43.2 Gigatexels/second
HDCP Support: Yes
HDMI Support: Yes (using DVI-to-HDMI adapter)
Connectors: 2x Dual-Link DVI-I, 1x 7-pin HDTV Out
RAMDACs: 400MHz
Bus: PCI Express 2.0
Form Factor: Dual Slot
Power Connectors: 2x 6-pin
Max Board Power: 156 Watts
GPU Thermal Threshold: 105°C
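
The two derived figures in the table follow directly from the clocks and widths listed above. As a quick sanity check (a back-of-the-envelope sketch, not part of the original review), the bandwidth and fill rate work out like this:

    #include <cstdio>

    int main() {
        // Figures taken from the specification table above.
        const double mem_clock_mhz  = 1100.0; // GDDR3 base clock
        const double ddr_multiplier = 2.0;    // data moves on both clock edges
        const double bus_width_bits = 256.0;
        const double core_clock_mhz = 675.0;
        const double texture_units  = 64.0;

        // Memory bandwidth: effective transfer rate x bus width in bytes.
        double bandwidth_gbs = mem_clock_mhz * ddr_multiplier * (bus_width_bits / 8.0) / 1000.0;

        // Texture filtering rate: core clock x texture filtering units.
        double texel_rate_gt = core_clock_mhz * texture_units / 1000.0;

        std::printf("Memory bandwidth:       %.1f GB/s\n", bandwidth_gbs);    // 70.4 GB/s
        std::printf("Texture filtering rate: %.1f GTexels/s\n", texel_rate_gt); // 43.2 GTexels/s
        return 0;
    }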

Palit decided to go with the standard clock speeds of the GeForce 9800GTX: 675MHz for the core, 2200MHz for the memory and 1688MHz for the shader clock. All three cards I’ve tested so far run at these standard clocks. The differences between the three cards come down to the bundle, packaging, and build quality, as performance is not likely to differ much between three cards running at the same speeds.

The 9800GTX is based upon NVIDIA’s G92 chip, first introduced with the GeForce 8800GT card late last year. That card came with 112 Stream Processors, while the 9800GTX has 128 Stream Processors, the same number as the GeForce 8800GTS 512MB card that was released shortly after the 8800GT.

The 9800GTX fully supports Microsoft’s DirectX 10 standard that was released along with Windows Vista in 2006. DirectX 10 includes support for new visual features like Geometry Shaders, Pixel Shader 4.0 and Vertex Shader 4.0. Games like Crysis look best when DirectX 10 is enabled, as the DirectX 9.0 path uses less complicated shaders and effects. DX10 also brought support for unified shaders, which allows the hardware to dynamically allocate shading resources between pixel and vertex work.
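
In practical terms, a game decides between its DX10 and DX9 paths by probing for a Direct3D 10 device at startup. A minimal sketch of that check, assuming the standard D3D10CreateDevice entry point from the DirectX SDK, might look like this:

    #include <d3d10.h>
    #include <cstdio>
    #pragma comment(lib, "d3d10.lib")

    int main() {
        ID3D10Device* device = NULL;
        // Try to create a hardware Direct3D 10 device; success implies the GPU
        // and driver expose the DirectX 10 / Shader Model 4.0 feature set.
        HRESULT hr = D3D10CreateDevice(NULL, D3D10_DRIVER_TYPE_HARDWARE,
                                       NULL, 0, D3D10_SDK_VERSION, &device);
        if (SUCCEEDED(hr) && device != NULL) {
            std::printf("Direct3D 10 hardware device created - DX10 path available.\n");
            device->Release();
        } else {
            std::printf("No Direct3D 10 hardware device - falling back to the DX9 path.\n");
        }
        return 0;
    }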

NVIDIA introduced SLI alongside the nForce4 SLI chipset and the GeForce 6800 Ultra launch four years ago. The launch of the 8800GTX in 2006 brought the possibility of Triple-SLI, the use of three video cards to improve performance in games. Tri-SLI should provide a substantial performance increase; NVIDIA claims up to 2.8x the performance of a single card is possible with three 9800GTX cards, although that will not beat a 9800GX2 Quad-SLI setup in most cases.

NVIDIA has kept the same anti-aliasing modes since the release of the 8800GTX in 2006. 8x AA combines 4x multi-sample anti-aliasing with 4 additional coverage samples (CSAA). 8xQ is true 8x multi-sample anti-aliasing. 16x combines 4x multi-sample anti-aliasing with 12 coverage samples, while 16xQ combines 8x multi-sample anti-aliasing with 8 coverage samples. The higher the AA setting, the higher the image quality, at the cost of lower performance. With the advent of the 8800GT last year, NVIDIA also introduced a new Transparency Anti-Aliasing mode through the drivers, improving image quality. A summary of these sample counts appears in the sketch below.
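
The sketch below simply restates the sample counts from the paragraph above in one place; it is a summary for reference, not an official NVIDIA table.

    #include <cstdio>

    // Sample counts for the anti-aliasing modes described above:
    // full color/z (multisample) samples plus coverage-only samples (CSAA).
    struct AAMode {
        const char* name;
        int colorSamples;     // full multisamples (store color and depth)
        int coverageSamples;  // coverage-only samples (no extra color/z storage)
    };

    int main() {
        const AAMode modes[] = {
            { "8x",   4,  4 },   // 4x MSAA + 4 coverage samples
            { "8xQ",  8,  0 },   // true 8x MSAA
            { "16x",  4, 12 },   // 4x MSAA + 12 coverage samples
            { "16xQ", 8,  8 },   // 8x MSAA + 8 coverage samples
        };
        const int count = sizeof(modes) / sizeof(modes[0]);
        for (int i = 0; i < count; ++i) {
            std::printf("%-5s %d color/z + %2d coverage = %2d samples total\n",
                        modes[i].name, modes[i].colorSamples, modes[i].coverageSamples,
                        modes[i].colorSamples + modes[i].coverageSamples);
        }
        return 0;
    }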