
Sunday, 12 February 2012

Intel Updates Sandy Bridge Graphics Drivers

by Andrew Cunningham on 2/7/2012 6:15:00 PM
Posted in GPUs, Intel, Sandy Bridge

Intel has posted versions 15.22.54.2622 (32-bit) and 15.22.54.64.2622 (64-bit) of its drivers for the Intel HD-series lineup of integrated graphics processors, which includes both Sandy Bridge and older Nehalem-based chips in both desktop and laptop computers. The drivers are available for all editions of Windows Vista and Windows 7.

Of the Big Three players in the graphics market, Intel is the most erratic about its driver releases; its last generic driver was posted way back in September, and while that driver brought a good number of performance improvements and bug fixes, Intel's latest and greatest fixes just three documented issues: a crashing issue with a program called Interstage Studio Standard J-edition, an issue where the driver would change the refresh rate while on battery power, and an issue where content would appear strangely when rewound. Not terribly exciting, given the wait, but I'm sure that the people experiencing those problems are grateful for the fixes.

As always, Intel notes that these are generic drivers which may or may not be missing features present in the drivers provided by OEMs. I've never had issues using generic Intel drivers on any of my machines, from homemade desktops to OEM laptops to Macs running Windows, but your mileage may vary.

Source: Intel

Comments (4)

"I might be pleased!" by Aikouka on Tuesday, February 07, 2012:

"an issue where the driver would change the refresh rate while on battery power"

I think this might actually fix something odd that I noticed. I turned my AVR to my HTPC the other day, and I noticed that XBMC didn't look right at all (it looked like the resolution got messed up). I realized after reading that fix that my power went out for a brief moment prior to that event, and it would have gone on battery backup. I don't know how much XBMC likes having the resolution (including refresh rate) adjusted while it's running.

"Same ol' Intel" by MonkeyPaw on Tuesday, February 07, 2012:

Didn't Intel say they were going to put more effort into graphics drivers? 5 months later and we have 3 updates. I guess they have better places to go with their record profits.

This is why Intel graphics are met with so much skepticism. Sure, more performance is found in each new version of the hardware, but the support is outright pathetic. AMD and Nvidia spend lots of time, money, and effort on drivers, because it is a very big deal.

"Who'd a thunk it?" by fic2 on Tuesday, February 07, 2012:

"Intel's latest and greatest fixes just three documented issues"

Must be because Intel's graphics driver is so close to perfect that nobody can find any bugs in it!

(Or maybe the one guy at Intel who does the graphics driver took a holiday in December, and three bugs is all he had time to fix.)

"Also, interestingly" by KaarlisK on Tuesday, February 07, 2012:

You can find their Ivy Bridge graphics driver at station-drivers. And guess what... it only supports Ivy Bridge and Sandy Bridge. Which basically means that they're probably continuing to support only the two latest GPU generations in their latest branch. So if you want features and performance improvements in the long term, do not go for an Intel GPU, even though the hardware may support everything you need.

Copyright © 1997-2012 AnandTech, Inc. All rights reserved.

Wednesday, 2 November 2011

Explained: The future of PC graphics


What's next for graphics? Why, Graphics Core Next, of course. Thanks, AMD, for that nicely palindromic way to start off a feature.

And also for talking about the successor to the current generation of Radeon graphics cards, which is due sometime next year.

The unveiling of GCN took place at June's Fusion Developer Summit. It's the first complete architectural overhaul of its GPU technology that AMD has risked since the launch of Vista.

That also, incidentally, makes it the first totally new graphics card design for AMD that isn't based on work started by ATI before it was purchased.

Vista, and specifically DirectX 10, called for graphics cards to support a fully programmable shader pipeline.

That meant doing away with traditional bits of circuitry that dealt with specific elements of graphics processing – like pixel shaders and vertex shaders – and replacing them with something more flexible that could do it all: the unified shader (see "Why are shaders unified?", next page).

Schism

During the birth of DX10 class graphics, there was something of a schism between Nvidia and AMD.

To simplify: the former opted for a quite flexible interpretation of unified shader theory in its G80 GeForce chips. Place a few hundred very simple processors in a large array, and send them one calculation (or, in some circumstances, two) apiece to work on until all the work is done.

It's a method that creates a bit of a nightmare for the set-up engine, but it's very flexible, and for well-written code that takes advantage of the way processors are bunched together on the board, it's dynamite.

In designing the G80 and its successors, Nvidia had its eye on applications beyond graphics. Developers could create GPGPU applications for GeForce cards written in C and more recently C++.

AMD/ATI, meanwhile, focused on the traditional requirements for a graphics card. Its unified shaders worked by combining operations into 'Very long instruction words' (VLIW) and sending them off to be processed in batches.

The basic unit in an early Nvidia DX10 card was a single 'scalar' processor, arranged in batches of 16 for parallel processing.

Inside an AMD one, it was a four-way 'vector' processor with a fifth unit for special functions. Hence one name for the Radeon architecture: VLIW5. While the set-up sounds horrendous, it was actually designed to be more efficient.

The important point is that a pixel colour is defined by mixing red, green, blue and alpha (transparency) channels. So the R600 processor – which was the basis of the HD2xxx and HD3xxx series of cards – was designed to be incredibly efficient at working out those four values over and over again.
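To make the VLIW5 idea concrete, here's a toy sketch in plain Python (invented function and register names, nothing like real shader microcode): one 'instruction word' bundles up to five slot operations, four simple ALU ops for the R, G, B and alpha channels plus one special-function op, and all five slots commit together as a single cycle.

```python
# Toy model of a VLIW5-style shader unit: one "instruction word" carries
# up to five slot operations executed together -- four simple ALU ops
# (one per R, G, B, A channel) plus one special-function op.
# All names here are invented for illustration; this is not real GPU code.
import math

def execute_vliw5(word, regs):
    """Execute one five-slot VLIW word. Each slot is (op, dst, srcs)."""
    results = {}
    for op, dst, srcs in word:
        vals = [regs[s] for s in srcs]
        if op == "mul":
            results[dst] = vals[0] * vals[1]
        elif op == "add":
            results[dst] = vals[0] + vals[1]
        elif op == "rsq":          # special-function slot: 1/sqrt(x)
            results[dst] = 1.0 / math.sqrt(vals[0])
    regs.update(results)           # all slots read old values, commit together
    return regs

# Scale an RGBA pixel by a light intensity in a single "cycle":
regs = {"r": 0.5, "g": 0.25, "b": 1.0, "a": 1.0, "light": 2.0}
word = [("mul", "r", ("r", "light")),
        ("mul", "g", ("g", "light")),
        ("mul", "b", ("b", "light")),
        ("mul", "a", ("a", "light")),
        ("rsq", "t", ("light",))]  # fifth slot runs an unrelated transcendental
print(execute_vliw5(word, regs))
```

The point of the model: as long as the workload really is RGBA-shaped, all five slots stay busy; the inefficiency appears when the code only needs one or two of them per word.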

Sadly, those early R600 cards weren't great, but with time and tweaking AMD made the design work, and work well.

The HD4xxx, HD5xxx and HD6xxx cards were superlative, putting out better performance and requiring less power than their Nvidia peers. Often cheaper too. But despite refinements over the last four years, the current generation of GeForce and Radeon chips is still recognisable as part of the same families as those first G80 and R600 designs.

There have been changes to the memory interface (goodbye power hungry Radeon ring bus) and vast increases to the number of execution cores (1,536 on a single Radeon HD6970 compared to 320 on an HD2900XT), but the major change over time has been separating out the special functions unit from the processor cores.

Graphics Core Next, however, is a completely new design. According to AMD, its existing architecture is no longer the most efficient for the tasks that graphics cards are called to do.

New approach

VLIW5

FUTURE GRAPHICS: VLIW5 has four vector processing units: one each for R, G, B and alpha

Proportionally, the number of routines for physics and geometry being run on the graphics card has increased dramatically in a typical piece of game code, calling for a more flexible processor design than one geared up primarily for colouring in pixels.

As a result, the VLIW design is being abandoned in favour of one that can be programmed in C and C++.

The basic unit of GCN is a 16 wide array of execution units arranged for SIMD (single instruction, multiple data) operations. If all that sounds familiar to G80 and on, it's because it is.
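As a toy illustration of what that means (plain Python with invented names, not real GPU code), a 16-wide SIMD unit issues a single instruction and every lane applies it to its own private data:

```python
# Toy model of a 16-wide SIMD unit: one instruction, sixteen lanes,
# each lane operating on its own data in lockstep.
# Invented for illustration only.

LANES = 16

def simd_execute(op, a, b):
    """Apply one scalar operation across all 16 lanes at once."""
    assert len(a) == len(b) == LANES
    return [op(x, y) for x, y in zip(a, b)]

# One "issue": add two 16-element vectors with a single instruction.
a = list(range(LANES))            # lane-private values 0..15
b = [10] * LANES
print(simd_execute(lambda x, y: x + y, a, b))
# Every lane ran the same add on different data -- SIMD in a nutshell.
```

For reference, a real GCN 'wavefront' is 64 work-items, executed over four cycles on one of these 16-lane SIMD units.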

Cynically, this could be seen as a tacit acknowledgement that Nvidia had it right all along, and there's no doubt that AMD is looking at GPGPU applications for its next generation of chips. But there's more to it than that.

Inside GCN, these SIMD processors are batched together in groups of four to create a 'compute unit' or CU. They are, functionally, still four-way vector units (perfect for RGBA instructions) but are also coupled to a scalar processor for one-off calculations that can't be completed efficiently on the SIMD units.

Each CU has all the circuitry it needs to be virtually autonomous, too, with an L1 cache, Instruction Fetch Arbitration controller, Branch & MSG unit and so on.

There's more than the CU to GCN, though. The new architecture also supports x86 virtual memory spaces, meaning large datasets – like the megatextures id Software is employing for Rage – can be addressed when they're partially resident outside of the on-board memory.
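A toy Python sketch of that partial residency (invented class and page size; nothing like AMD's real page-table hardware): the full texture sits in a backing store, and only the pages a sample actually touches get mapped into 'on-board' memory on demand:

```python
# Toy model of a partially resident texture: the full texture lives in
# a large backing store ("system memory") and pages are mapped into
# limited "on-board" memory only when a sample actually touches them.
# Invented for illustration; not AMD's real mechanism.

PAGE = 64                          # texels per page (arbitrary toy size)

class PartiallyResidentTexture:
    def __init__(self, texels):
        self.backing = texels      # full texture, too big to keep on-board
        self.resident = {}         # page number -> page data ("on-board")
        self.faults = 0

    def sample(self, index):
        page = index // PAGE
        if page not in self.resident:      # page fault: map it in on demand
            self.faults += 1
            start = page * PAGE
            self.resident[page] = self.backing[start:start + PAGE]
        return self.resident[page][index % PAGE]

tex = PartiallyResidentTexture(list(range(1024)))  # 16 pages of 64 texels
samples = [tex.sample(i) for i in (0, 1, 700, 701)]
print(samples, "pages faulted in:", tex.faults)    # only 2 of 16 pages resident
```

The win is the same one id's megatextures chase: the addressable dataset can be far larger than the memory that actually holds it at any moment.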

And while it's not – as other observers have pointed out – an out-of-order processor, it is capable of using its transistors very efficiently by working on multiple threads simultaneously and switching between them if one is paused and waiting for a set of values to be returned. In other words, it's an enormously versatile chip.
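That switching trick can be sketched in a few lines of Python (a hypothetical round-robin scheduler, not AMD's actual hardware logic): each thread runs until it stalls on a memory request, at which point the unit simply picks up another ready thread instead of idling:

```python
# Toy latency-hiding scheduler: threads are generators that yield "mem"
# when they stall waiting on memory; the scheduler switches to another
# ready thread instead of idling. Purely illustrative of the idea.
from collections import deque

def worker(name, log):
    log.append(f"{name}: compute")
    yield "mem"                    # stall: pretend we issued a memory load
    log.append(f"{name}: compute after load")

def run(threads):
    log = []
    ready = deque(worker(n, log) for n in threads)
    while ready:
        t = ready.popleft()
        try:
            t.send(None)           # run until the thread stalls or finishes
            ready.append(t)        # stalled on memory: requeue, run another
        except StopIteration:
            pass                   # thread finished
    return log

print(run(["wave0", "wave1"]))
```

While wave0 waits for its load, wave1 gets the execution units, so the hardware's transistors stay busy even though no single thread runs out of order.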

After an early preview of the design, some have noted certain similarities with Intel's defunct Larrabee concepts and also with the Atom and ARM-8 chips, except much more geared up for parallel processing.

GCN

INVENTIVE NAMING: GCN will still work with RGBA data, but boasts greater flexibility

"Graphics is still our primary focus," said AMD's Eric Demers during his keynote presentation on GCN, "But we are making significant optimisations for compute... What is compute and what is graphics is blurred."

The big question now is whether or not AMD can make this ambitious chip work. Its first VLIW5 chips were a disappointment, running hotter and slower than expected. So were Nvidia's first generation Fermi-based GPUs.

Will GCN nail it in one? We've got a while to wait to find out. The first chips based on GCN are codenamed Southern Islands and will probably be officially branded as Radeon HD7xxx. They were originally planned for this year, but aren't expected now until 2012.



Thursday, 6 October 2011

Radeon HD 6950 Toxic Graphics Card Announced


Sapphire has introduced its newest graphics card powered by AMD's 40nm Cayman GPU: the Radeon HD 6950 Toxic Edition. The card ships with a custom cooling system featuring a vapour chamber, a single fan and a perforated back plate, and it boasts 1408 stream processors clocked at 880 MHz.

Other specifications of the Radeon HD 6950 include a 256-bit memory interface, CrossFireX support, dual BIOS, digital VRMs, 8-pin + 6-pin power connectors, 2GB of GDDR5 VRAM running at 5200 MHz effective, and a selection of display outputs: dual DVI, one HDMI and two mini DisplayPorts.

The new Radeon HD 6950 Toxic from Sapphire Technology will be available with and without a Dirt 3 PC game bundle. No pricing details have been announced.


Tags: Radeon HD 6950 Toxic Graphics Card