
Thursday, 1 March 2012

AMD is Open to Integrating 3rd Party IP in Future SoCs

by Anand Lal Shimpi on 2/2/2012 12:50:00 PM
Posted in CPUs, SoC, AMD, AMD FAD 2012, Trade Shows

Don't expect AMD to go into much detail on this here at the Financial Analyst Day, but the slide above shows a definite step towards becoming a modern SoC company. Looking at TI, Qualcomm, NVIDIA and even Intel, integrating third-party IP into an SoC isn't unusual - particularly when competing in the ultra mobile space. AMD wants the same flexibility. Going forward, if AMD is successful, we will see SoCs based on AMD technologies combined with third-party IP. In theory this could come in the form of anything from a video decoder/encoder block to an ARM-based CPU/GPU. AMD has mentioned ARM a few times in its presentations today, but don't expect any major announcements here. The key word here is agility. AMD wants to be an SoC company that can deliver whatever combination of first- and third-party IP the customer wants.



Friday, 23 December 2011

Doubts over Fusion Garage and Grid 10's future

Fusion Garage, the company behind the JooJoo tablet and the Grid 10, has gone silent amid rumours that its future is uncertain.

Fusion Garage tried to make a massive fuss over the arrival of the Grid 10 earlier this year by launching a secret marketing campaign under the guise of TabCo.

While there was much rumour and speculation over which company was behind TabCo, it emerged that it was Fusion Garage, which used the PR stunt to unveil the Grid 10, a new Android tablet with a decidedly different user interface.

This announcement, however, was drowned out by the news that Google had acquired Motorola Mobility.

Tablet talk

The company had hoped to begin selling the Grid 10 in the US in October, but Engadget is reporting that many customers who placed orders still haven't received their tablets.

Then the company's website went down over the weekend, with many speculating that this could be down to the stock problems Fusion Garage has been having.

The site is now back up with a note on the product page explaining: "We are running out of stock. Thank you."

Fusion Garage's silence has been compounded by the fact that it has parted ways with its PR firm, which said of the split: "Unfortunately, none of our efforts have resulted in any communication from the company to the customers. Given all of this, we don't have any other choice but to cease working with FG effective tomorrow."

Both Fusion Garage's Twitter and Facebook pages are full of comments about the company's failure to respond over the non-arrival of the Grid 10.

Fusion Garage had hoped the Grid 10 would bring consumers back to a company which had burned them with the disappointing JooJoo tablet.

In our Hands on: Fusion Garage Grid 10 review we felt it wasn't a bad tablet at all, and one that, if released, might actually kick-start the budget Android tablet market.

That's if it is ever released at all.



Monday, 28 November 2011

Scorsese could make all future films in 3D

Martin Scorsese has admitted that he would consider shooting all of his future films in 3D, following his experience directing Hugo.

The legendary filmmaker, whose new 3D fairytale adventure opens in the UK this weekend, says that his 1970s classic Taxi Driver could have benefited from being shot in 3D.

When asked by Deadline whether he'd consider going 3D-only, he said: "Quite honestly, I would."

"I don't think there's a subject matter that can't absorb 3D; that can't tolerate the addition of depth as a storytelling technique. We view everyday life with depth."

Stark turnaround

The admission marks a stark turnaround for Scorsese, who said just two years ago that he had no interest in making a 3D film.

So what changed his mind?

"Well, the story of Hugo," he said. "The climate of what Jim Cameron did with Avatar and 3D seemed right and the subject matter was just perfect for it. And it was time to take a chance with it."

"(3D) shouldn't be limited to fantasy or sci-fi. Look at (Werner) Herzog's use of it (in Cave of Forgotten Dreams), Wim Wenders with Pina.

"It should be considered a serious narrative element and tool, especially when telling a story with depth as narrative."

Frightening presence

When asked which of his previous films might have benefited from the 3D medium, he said that The Aviator and Taxi Driver sprang to mind.

"Taxi Driver, because of the intimidation of the main character, his presence is everywhere, a frightening kind of presence."

Scorsese's eventual embrace of 3D should be considered a landmark for the medium.

Critics are calling Hugo the most important 3D movie since Avatar, and are almost universal in their praise for the film.



Wednesday, 2 November 2011

Explained: The future of PC graphics

What's next for graphics? Why, Graphics Core Next, of course. Thanks, AMD, for that nicely palindromic way to start off a feature.

And also for talking about the successor to the current generation of Radeon graphics cards, which is due sometime next year.

The unveiling of GCN took place at June's Fusion Developer Summit. It's the first complete overhaul of its GPU architecture that AMD has risked since the launch of Vista.

That also, incidentally, makes it the first totally new graphics card design from AMD that isn't based on work ATI started before its purchase.

Vista, and specifically DirectX 10, called for graphics cards to support a fully programmable shader pipeline.

That meant doing away with traditional bits of circuitry that dealt with specific elements of graphics processing – like pixel shaders and vertex shaders – and replacing them with something more flexible that could do it all: the unified shader.

Schism

During the birth of DX10 class graphics, there was something of a schism between Nvidia and AMD.

To simplify: the former opted for an interpretation of unified shader theory in its G80 GeForce chips that was quite flexible. Place a few hundred very simple processors in a large array, and send them one calculation (or, in some circumstances, two) apiece to work on until all the work is done.

It's a method that creates a bit of a nightmare for the set-up engine, but it's very flexible and, for well-written code that takes advantage of the way processors are bunched together on the board, dynamite.

In designing the G80 and its successors, Nvidia had its eye on applications beyond graphics. Developers could create GPGPU applications for GeForce cards written in C and more recently C++.
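
To give a flavour of what that C-style GPGPU programming looks like, here's a minimal sketch written as a CUDA kernel. The function name, data and launch parameters are invented for illustration; each thread stands in for one of those simple scalar processors, handling one element of the array.

    __global__ void scale(float *data, float factor, int n)
    {
        // One thread per array element: the "one calculation
        // apiece" model described above.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    // Host side: launch one thread per element, 256 to a block.
    // scale<<<(n + 255) / 256, 256>>>(device_data, 2.0f, n);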

AMD/ATI, meanwhile, focused on the traditional requirements for a graphics card. Its unified shaders worked by combining operations into 'very long instruction words' (VLIW) and sending them off to be processed in batches.

The basic unit in an early Nvidia DX10 card was a single 'scalar' processor, arranged in batches of 16 for parallel processing.

Inside an AMD one, it was a four-way 'vector' processor, with a fifth unit for special functions. Hence one name for the Radeon architecture: VLIW5. While the set-up sounds horrendous, it was actually designed to be more efficient.

The important point is that a pixel colour is defined by mixing red, green, blue and alpha (transparency) channels. So the R600 processor – which was the basis of the HD2xxx and HD3xxx series of cards – was designed to be incredibly efficient at working out those four values over and over again.
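
To make that concrete, here's a hedged sketch of an alpha blend in C-style code (the struct and function are invented for the example). The four channel calculations are identical and independent, which is exactly the shape of work a four-way vector unit can pack into a single VLIW bundle.

    struct rgba { float r, g, b, a; };

    // Blend src over dst with opacity t (0.0 to 1.0).
    struct rgba blend(struct rgba src, struct rgba dst, float t)
    {
        struct rgba out;
        out.r = src.r * t + dst.r * (1.0f - t);  // lane 0
        out.g = src.g * t + dst.g * (1.0f - t);  // lane 1
        out.b = src.b * t + dst.b * (1.0f - t);  // lane 2
        out.a = src.a * t + dst.a * (1.0f - t);  // lane 3
        return out;
    }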

Sadly, those early R600 cards weren't great, but with time and tweaking AMD made the design work, and work well.

The HD4xxx, HD5xxx and HD6xxx cards were superlative, putting out better performance and requiring less power than their Nvidia peers. Often cheaper, too. But despite refinements over the last four years, the current generation of GeForce and Radeon chips are still recognisable as part of the same families as those first G80 and R600 designs.

There have been changes to the memory interface (goodbye, power-hungry Radeon ring bus) and vast increases in the number of execution cores (1,536 on a single Radeon HD6970 compared to 320 on an HD2900XT), but the major change over time has been separating out the special functions unit from the processor cores.

Graphics Core Next, however, is a completely new design. According to AMD, its existing architecture is no longer the most efficient for the tasks that graphics cards are called to do.

New approach

FUTURE GRAPHICS: VLIW5 has four vector processing units, one each for R, G, B and alpha, plus a fifth for special functions

Proportionally, the number of routines for physics and geometry being run on the graphics card has increased dramatically in a typical piece of game code, calling for a more flexible processor design than one geared up primarily for colouring in pixels.

As a result, the VLIW design is being abandoned in favour of one that can be programmed in C and C++.

The basic unit of GCN is a 16-wide array of execution units arranged for SIMD (single instruction, multiple data) operations. If all that sounds familiar from G80 onwards, it's because it is.
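
If SIMD is an unfamiliar idea, the sketch below shows it in plain C-style code; the loop is the scalar picture of what a 16-wide unit does in a single hardware step, and the function is purely illustrative.

    #define LANES 16

    // One instruction (a multiply) applied to sixteen data
    // elements. The hardware issues this once; the loop is
    // just the software view of the same operation.
    void simd_mul(float out[LANES], const float a[LANES], const float b[LANES])
    {
        for (int lane = 0; lane < LANES; lane++)
            out[lane] = a[lane] * b[lane];
    }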

Cynically, this could be seen as a tacit acknowledgement that Nvidia had it right all along, and there's no doubt that AMD is looking at GPGPU applications for its next generation of chips. But there's more to it than that.

Inside GCN, these SIMD processors are batched together in groups of four to create a 'compute unit', or CU. They are, functionally, still four-way vector units (perfect for RGBA instructions), but are also coupled to a scalar processor for one-off calculations that can't be completed efficiently on the SIMD units.

Each CU has all the circuitry it needs to be virtually autonomous, too, with an L1 cache, Instruction Fetch Arbitration controller, Branch & MSG unit and so on.

There's more than the CU to GCN, though. The new architecture also supports x86 virtual memory spaces, meaning large datasets – like the megatextures id Software is employing for Rage – can be addressed even when they're only partially resident in on-board memory.

And while it's not – as other observers have pointed out – an out-of-order processor, it is capable of using its transistors very efficiently by working on multiple threads simultaneously and switching between them if one is paused and waiting for a set of values to be returned. In other words, it's an enormously versatile chip.
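
A rough sketch of that latency hiding, assuming a simple round-robin scheduler (the real hardware scheduler is considerably more sophisticated, and every name below is invented for illustration):

    enum state { READY, WAITING };

    struct thread { enum state st; };

    // Pick the next thread that isn't stalled on a memory fetch,
    // scanning round-robin from the one that just paused.
    int pick_next(struct thread t[], int n, int cur)
    {
        for (int i = 1; i <= n; i++) {
            int cand = (cur + i) % n;
            if (t[cand].st == READY)
                return cand;
        }
        return cur;  // all threads are waiting on memory
    }

The point is that the execution units never sit idle while there is any ready work: a stalled thread simply loses its turn until its data arrives.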

After an early preview of the design, some have noted certain similarities with Intel's defunct Larrabee concepts, and also with the Atom and ARM Cortex-A8 chips, except much more geared up for parallel processing.

INVENTIVE NAMING: GCN will still work with RGBA data, but boasts greater flexibility

"Graphics is still our primary focus," said AMD's Eric Demers during his keynote presentation on GCN, "But we are making significant optimisations for compute... What is compute and what is graphics is blurred."

The big question now is whether or not AMD can make this ambitious chip work. Its first VLIW5 chips were a disappointment, running hotter and slower than expected. So were Nvidia's first-generation Fermi-based GPUs.

Will GCN nail it in one? We've got a while to wait to find out. The first chips based on GCN are codenamed Southern Islands and will probably be officially branded as Radeon HD7xxx. They were originally planned for this year, but now aren't expected until 2012.