-
How to use your PS4 as a media streamer without DLNA
http://www.extremetech.com/wp-conten...S4-640x353.png
As it stands, the PS4 doesn’t ship with DLNA streaming capability, ironically making the PS3 a better media center device. It’s certainly possible that we’ll see support for the standard patched in eventually, but how are we supposed to play our media library on the PS4 right now? Thankfully, there is a way, and it’s all thanks to an app called Plex.
With this handy little app, you can stream just about any video from your computer or NAS directly to your PS4. It only takes a few minutes to get going, so let’s jump right in.
http://www.extremetech.com/wp-conten...ad-640x392.png
First off, you need to install the Plex Media Server. Download it, install it, and then launch the executable. It’s simple enough, and it’s available on Windows, OS X, Linux, and FreeBSD (FreeNAS). While you’re at it, sign up for a free Plex account if you haven’t done so already.
http://www.extremetech.com/wp-conten...ig-640x422.png
Configure server settings
Once the application is running, you can configure your settings as you please. Choose your server’s name, add your media folders to the Plex library, and tweak your networking options as you see fit. If you need to change the port configuration, you’ll need to toggle on the advanced mode by clicking the “Show Advanced” button in the upper right. Most people shouldn’t need to tinker too much, but the options are available.
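If you’d like to confirm the server is reachable from elsewhere on your network before involving the PS4 at all, a quick port check does the trick. The short Python sketch below assumes Plex’s default port of 32400 and uses a placeholder LAN address; swap in your own server’s IP.

import socket

PLEX_HOST = "192.168.1.50"   # placeholder -- substitute your Plex server's LAN IP
PLEX_PORT = 32400            # Plex Media Server's default port

def plex_reachable(host, port, timeout=3.0):
    """Return True if something is accepting connections on the Plex port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    state = "reachable" if plex_reachable(PLEX_HOST, PLEX_PORT) else "not reachable"
    print(f"Plex Media Server at {PLEX_HOST}:{PLEX_PORT} is {state}")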
http://www.extremetech.com/wp-conten.../Plex-Pass.png
Purchase a Plex Pass
For the time being, the Plex App on PS4 is only available for Plex members with paid accounts. Eventually, you’ll be able to separately buy access to the PS4 app without the Plex Pass, but the subscription is mandatory for now. So if you want the streamlined experience, head on over to the Plex website, and buy a Plex Pass.
http://www.extremetech.com/wp-conten..._o-640x360.jpg
Download the Plex app
Now that your account is properly configured, go into the PlayStation store, and navigate to the “Apps” section. You’ll find the Plex app itself is free, so initiate the download. Once it’s done installing, you’ll find the Plex app under the “TV & Video” section of the PS4’s main menu. Alternately, you can always go to the “Library” menu, and navigate to “Applications.”
http://www.extremetech.com/wp-conten.../Plex-Pin.jpeg
Generate a PIN
Launch the Plex app on your PS4, and you’ll be greeted with four alphanumeric characters. You’ll need this code to pair your account with your PS4.
http://www.extremetech.com/wp-conten...Active-PIN.png
Pair your PS4 to your account
Now, head on over to the PIN login page on the Plex website, sign in with your premium account, and enter the four characters being displayed on the PS4. Press the “Connect” button, and you’ll be greeted with a message. If it tells you that the PIN was activated, you’re ready to rock. If you get an error, go back to your PS4, and generate a new code in the Plex app.
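Under the hood, this is a simple claim-and-poll handshake: the console displays a short code, and the app waits for a signed-in account to claim it. The sketch below illustrates that generic pattern in Python; the endpoint URL and response fields are placeholders of our own, not Plex’s actual API.

import time
import requests   # third-party: pip install requests

# Hypothetical status endpoint -- stands in for whatever service issued the PIN.
STATUS_URL = "https://example.com/api/pins/{pin_id}"

def wait_for_pin_claim(pin_id, timeout=120, interval=5):
    """Poll until the PIN has been claimed by a signed-in account, or time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(STATUS_URL.format(pin_id=pin_id), timeout=10)
        resp.raise_for_status()
        data = resp.json()
        if data.get("claimed"):              # hypothetical response field
            return data.get("auth_token")    # hypothetical response field
        time.sleep(interval)
    raise TimeoutError("PIN expired before it was claimed -- generate a new code")

That timeout is also why a stale code sometimes fails: if nobody claims the PIN within the window, the app simply issues a new one.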
http://www.extremetech.com/wp-conten...Interface.jpeg
Enjoy yourself
Finally, you’ll be able to stream movies and TV shows on your PS4 quickly and easily. Music and channel support isn’t implemented in the PS4 app just yet, but that functionality will be added in at a later date.
Is there another way?
More...
-
Investigating the GTX 970: Does Nvidia’s GPU have a memory problem?
http://www.extremetech.com/wp-conten...ga-640x353.jpg
Late last week, we covered claims that the GTX 970 had a major memory flaw that didn’t affect Nvidia’s top-end GPUs, like the GTX 980. According to memory bandwidth tests, the GTX 970’s performance drops above 3.2GB of memory use and craters above 3.5GB. Meanwhile, many users have published claims that the GTX 970 fights to keep RAM usage at or slightly below 3.5GB of total VRAM whereas the GTX 980 will fill the entire 4GB framebuffer.
There are three separate questions in play here, though they’ve often been conflated in the back-and-forth in various forum threads. First, does the small memory bandwidth benchmark by Nai actually test anything, or is it simply badly coded?
http://www.extremetech.com/wp-conten...ry-640x392.png
We’ve verified that this issue occurs as described.
Second, does the GTX 970 actually hold memory use to the 3.5GB limit, and if it does, is this the result of a hardware bug or other flaw? Third, does this 3.5GB limit (if it exists) result in erroneous performance degradation against the GTX 980?
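To make the first question concrete, a benchmark of this type allocates VRAM in fixed-size blocks and times copies into each one, so a slow final segment shows up as a bandwidth cliff on the last few allocations. Below is a minimal sketch of that technique using CuPy; it’s our own illustration of the approach, not Nai’s actual code, and the 128MB block size and ten-copy loop are arbitrary choices.

import cupy as cp

BLOCK_MB = 128
elems = (BLOCK_MB * 1024 * 1024) // 4            # float32 elements per block

src = cp.zeros(elems, dtype=cp.float32)          # staging block to copy from
blocks = []                                      # hold references so VRAM stays allocated

try:
    while True:
        dst = cp.empty(elems, dtype=cp.float32)  # claim another 128MB of VRAM
        start, stop = cp.cuda.Event(), cp.cuda.Event()
        start.record()
        for _ in range(10):                      # device-to-device copies into this block
            cp.copyto(dst, src)
        stop.record()
        stop.synchronize()
        ms = cp.cuda.get_elapsed_time(start, stop)
        gbps = (10 * BLOCK_MB / 1024) / (ms / 1000.0)   # rough copy throughput
        blocks.append(dst)
        print(f"block {len(blocks):2d} (~{len(blocks) * BLOCK_MB} MB allocated): {gbps:6.1f} GB/s")
except cp.cuda.memory.OutOfMemoryError:
    pass   # card is full; the interesting readings are the last few blocks

On a healthy card the per-block numbers stay roughly flat until the buffer is exhausted; a sharp drop on the final allocations is the pattern the forum reports describe.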
Memory bandwidth and allocation on the GTX 970 vs. the GTX 980
The GTX 970, like a number of other GPUs from Nvidia (and, historically, a few from AMD) uses an asymmetric memory layout. What this means, in practice, is that the GPU has a faster access path to some of its main memory than others. We reached out to Bryan Del Rizzo at Nvidia, who described the configuration as follows:
“[T]he 970 has a different configuration of SMs than the 980, and fewer crossbar resources to the memory system. To optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section. The GPU has higher priority access to the 3.5GB section. When a game needs less than 3.5GB of video memory per draw command then it will only access the first partition, and 3rd party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands. When a game requires more than 3.5GB of memory then we use both segments.”
In other words, the answer to the first question of “Does this memory benchmark test something accurately?” is that yes, it does. But does this limit actually impact game performance? Nvidia says that the difference in real-world applications is minimal, even at 4K with maximum details turned on.
Nvidia’s response also confirms that gamers who saw a gap between the 3.5GB of utilization on the GTX 970 and the 4GB on the GTX 980 were seeing a real difference. We can confirm that this gap indeed exists. It’s not an illusion or a configuration problem — the GTX 970 is designed to split its memory buffer in a way that minimizes the performance impact of using an asymmetric design.
We went looking for a problem with the GTX 970 vs. the 980 in two ways. First, we reconsidered our own data sets from the GTX 970 review, as well as reviews published on other sites. Even in 4K, and with all detail levels cranked, our original review shows no problematic issues. The GTX 970 may take a slightly larger hit in certain circumstances (Nvidia’s information suggests that the impact can be on the order of around 4%), but we don’t see a larger problem in terms of frame rates.
The next step was to benchmark a few additional titles. We tested the MSI program Kombustor and its RAM burner, as well as the games Dragon Age: Inquisition and Shadow of Mordor. Both Dragon Age: Inquisition and Shadow of Mordor were tested at absolute maximum detail with all features and settings maxed out, at 1080p and 4K.
More...
-
Most DirectX 12 features won’t require a new graphics card
http://www.extremetech.com/wp-conten...en-640x354.jpg
The last few updates to Microsoft’s DirectX platform have come with the requirement that you buy new hardware to enjoy the benefits, but that’s not going to be the case with DirectX 12. According to Microsoft, DirectX 12 will work with most existing gaming hardware. Some DX12 features will still need updated GPUs, but all the basic features should work.
Microsoft announced DirectX 12 at GDC last year, and it’s still not fully baked. Because it’s not technically done, Microsoft has been cautious about explaining exactly what will and won’t work on current GPUs. What we do know is that the basic feature set will work on all Intel fourth-gen and newer Core processors, as well as AMD’s Graphics Core Next (GCN) architecture. On the Nvidia side, DirectX 12 will support Maxwell, Kepler, and even Fermi. Basically, a DX 11.1 card will be compatible with most of the new API’s features. Note that Maxwell is actually the first GPU with full DX12 support, although DX12 graphics are currently only making appearances in demos.
That makes some sense when you look at what DirectX 12 is designed to do. While past updates to DirectX have focused on new rendering effects like tessellation and more realistic shaders, DirectX 12 is an attempt to dramatically reduce driver overhead and get PC gaming closer to console levels of efficiency by learning some lessons from AMD’s Mantle API. Consoles have very narrow hardware profiles, but the hardware abstraction layer in DirectX slows things down.
Some of the more significant aspects of DirectX 12, including power efficiency and frame rate improvements, will be part of that basic feature set. That’s really all the detail Microsoft is willing to go into on the record right now.
Redmond is probably referring to the improved threading of command lists from the CPU to the GPU. The workload is shared across threads in DirectX 12, whereas it is far more dependent on a single thread in DirectX 11. Splitting it up more efficiently means higher frame rates. DX12 will also support bundled commands within command lists that can be recorded once and reused instead of being re-sent every time. That can decrease power use and further increase frame rates.
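A rough way to picture the change (plain Python standing in for the pattern, not actual Direct3D calls): under DX11-style submission one thread records and submits everything, while DX12-style command lists let worker threads record chunks of the scene in parallel, and bundles let frequently reused lists be recorded once and replayed.

from concurrent.futures import ThreadPoolExecutor

def record_commands(scene_chunk):
    # Stand-in for recording a command list for one chunk of the scene.
    return [("draw", obj) for obj in scene_chunk]

def submit(queue, command_list):
    queue.extend(command_list)          # stand-in for executing a command list

scene = [[f"object_{i}_{j}" for j in range(1000)] for i in range(8)]
gpu_queue = []

# DX11-style: a single thread records and submits the whole frame.
for chunk in scene:
    submit(gpu_queue, record_commands(chunk))

# DX12-style: worker threads record command lists in parallel; the main
# thread only submits the finished lists.
gpu_queue.clear()
with ThreadPoolExecutor(max_workers=4) as pool:
    for cmd_list in pool.map(record_commands, scene):
        submit(gpu_queue, cmd_list)

# A "bundle" amounts to recording once and replaying many times.
hud_bundle = record_commands(["hud_healthbar", "hud_minimap"])
for _frame in range(3):
    submit(gpu_queue, hud_bundle)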
We know from some of the benchmarks released last year that reducing the CPU overhead can boost frame rates by as much as 60%. That could be the difference between a game that’s unplayably laggy and one that’s completely smooth.
http://www.extremetech.com/wp-conten...ew-640x377.jpg
Microsoft is even more vague about what features of DirectX 12 will need new hardware. According to company reps, there are several rendering pipeline features that will only be supported on new cards, but those won’t be detailed until GDC in a few months. The updated APIs should ship with Windows 10 later this year, and games utilizing the new version of DirectX will be around for the 2015 holiday season. Now that you know whether or not you’ll need new hardware, you can plan your splurging accordingly.
More...
-
Xbox One catching up to PS4, but the numbers don’t add up
http://www.extremetech.com/wp-conten...ip-640x353.jpg
For the uninitiated, the eighth-generation console war has followed roughly the same path since it began — Sony’s PlayStation 4 capitalizes not only on its technically superior hardware and content exclusives, but also on Microsoft’s constant stream of public relations fumbles leading up to and following the Xbox One launch. Since launch, the PS4 has soundly led the Xbox One in both hype and sales — Sony’s console was even more fun to try to break. During its quarterly earnings statement today, Microsoft revealed that it sold 6.6 million Xbox consoles during the holiday quarter, around 2.5 million more than the number of PS4s Sony moved during the same period. Did Microsoft turn the Xbox One around?
Microsoft posted $26.5 billion in revenue, but both Xbox sales and net income were down year-over-year. Back during Microsoft’s second quarter, it revealed that it had sold 7.4 million Xbox consoles, and divulged that the number was split between 3.9 million Xbox Ones and 3.5 million Xbox 360s. This time around, Microsoft used the same vague wording of “Xbox consoles,” but didn’t provide a detailed split. It did, however, claim that the Xbox One outsold the PS4 during that time period, so it’s safe to assume the Xbox One managed to sell at least around 4.2 million units. The remainder of that 6.6 million figure was likely filled in with Xbox 360s.
http://www.extremetech.com/wp-conten...oxonetruck.jpg
Sales are sales, but aside from the mysterious lack of delineation between Xbox One and Xbox 360 sales, a few other factors are at play. Microsoft has twice dropped the price of the Xbox One from $399 to $349 in the past few months, and $399 was itself already down from the original unit’s $499 price tag. Meanwhile, console agnostics who were part of the PS4 sales rush may now be settling in and picking up the rival platform to make sure they don’t miss any exclusives.
On the software side of things, Microsoft is doing just fine — in no small part thanks to the $2.5 billion acquisition of Minecraft developer Mojang, as well as the release of the nostalgia-laden Halo: The Master Chief Collection and Forza Horizon 2. Despite Microsoft falling somewhere behind Sony in the console war numbers — almost by half, if you believe certain tallies — its numbers aren’t bad by any stretch; they only look weak next to the competition. With enormous new markets opening potential doors for Microsoft, and with its (mostly) beloved Xbox Live Gold service possibly arriving on other platforms, there is still much room for the Xbone to grow, despite that loving-yet-telling moniker.
More...
-
Will the PS4 and Xbox One receive 4K support this year?
http://www.extremetech.com/wp-conten...ts-640x353.jpg
Consumer electronics companies have begun the 4K push, and now it seems everyone is scrambling to get their houses in order. The PS4 and Xbox One are already technically capable of outputting 4K video, but considering how much these consoles struggle to reach 1080p, is 4K really feasible with the existing hardware? Netflix seems to think we’re in for PS4 and Xbox One hardware revisions this year, but are Sony and Microsoft willing to burn their early adopters?
Back in January, Netflix’s Neil Hunt said publicly that Sony had supposedly promised a PS4 hardware revision with improved 4K support in mind. Earlier this week, Forbes followed up with Hunt, and he maintains that both the Xbox One and PS4 will see hardware refreshes at around the two-year mark. Specifically, he believes that they’ll include updated internals aimed at supporting 4K video playback.
More...
-
DirectX 12 confirmed as Windows 10 exclusive, AMD and Nvidia go head-to-head
http://www.extremetech.com/wp-conten...11-640x353.jpg
It’s been just over a year since AMD launched its next-generation Mantle API, with the promise that low-overhead gaming would dramatically boost frame rates and lead to fundamentally new types of game engines. One of the demos that Sunnyvale used to show off its new API was Star Swarm, a tech demo and next-generation engine from Oxide Games. Now, a new comparison puts AMD and Nvidia head-to-head in that same test — only this time, they’re both running under DirectX 12.
The performance data in Anandtech’s comparison should be taken with a significant grain of salt. D3D12 support is baked into Windows 10, but the code is early. The drivers from AMD and Nvidia are pre-production, obviously, and the underlying OS is itself in a pre-RTM state. Windows 10 uses version 2.0 of the Windows Display Driver Model (WDDM), which means that a great deal of under-the-hood work has changed between Windows 8.1 and the latest version of the operating system. The preview is quite extensive (they test the GTX 980 against multiple AMD cards in multiple CPU and GPU configurations), and I don’t want to steal their thunder. At the Extreme preset we see several interesting results:
http://www.extremetech.com/wp-conten...50-640x640.png
The first thing people are going to notice is that the GTX 980 is far faster than the R9 290X in a benchmark that was (rightly) believed to favor AMD as a matter of course when the company released it last year. I’ll reiterate what I said then — Star Swarm is a tech demo, not a final shipping product. While Oxide Games does have plans to build a shipping game around their engine, this particular version is still designed to highlight very specific areas where low-latency APIs can offer huge performance gains.
As impressive as the GTX 980’s performance is, I’m going to recommend that nobody take this as proof that Nvidia’s current GPU will blow the doors off AMD when D3D12 is shipping and games start to appear late this year or early next.
The second thing that some users will note is that the R9 cards offer very similar performance in Mantle vs. DirectX 12, at least for now. There was always some discussion over whether or not Mantle and D3D would offer similar performance capabilities, and at least for now, it looks as though they may — though again, that should be taken as a tentative conclusion.
AT steps through multiple benchmarks and comparisons between the two GPU families, as well as simulated performance on dual- and quad-core configurations. There’s no comparison of AMD CPU hardware, which makes sense on the one hand — AMD CPUs are not widely used for enthusiast gaming these days — but is unfortunate on the other. Mantle has always had its best showing when used to accelerate the performance of AMD CPUs and APUs, and it would’ve been interesting to see whether Direct3D 12 benefits AMD’s hardware as much as its own native API does.
Microsoft confirms: DirectX 12 will be a Windows 10 exclusive
One additional point that Anandtech disclosed is that Windows 10 and DirectX 12 will be bundled together — D3D 12 will not come to Windows 7, 8, or 8.1. The free upgrade offer on Windows 10 will doubtless blunt a great deal of the criticism that MS would otherwise have come in for, but users who can’t upgrade, or simply don’t want to, won’t be happy.
Whether or not this will breathe life into AMD’s Mantle is an interesting question. In theory, Mantle could see increased adoption if the MS userbase digs in its heels over Windows 10 the way it did over Windows 8. On the other hand, it’s possible that we’ll see increased support for the next-generation OpenGL standard (dubbed GLNext) as an alternative to DX12 and Windows 10.
More details on both DX12 and GLNext will be released at GDC this year, which kicks off in early March.
More...
-
Google, Mattel team up to offer View-Master VR in kid-friendly package
http://www.extremetech.com/wp-conten...er-640x353.png
If you grew up from the 1960s through the 1980s, chances are you or someone you knew had a distinctive red image viewer and a stack of flimsy cardboard reels. The classic View-Master reels could depict scenery, movies, TV shows, or any other visual content in stereoscopic 3D, with some models even incorporating an audio track. Now, Mattel has announced a partnership with Google to bring a Cardboard version of the View-Master to life with 3D animated reels that introduce kids to the concept of VR.
The new View-Master will be available for roughly $30, while new “reels” of content will cost roughly $15 in packs of four and offer a new VR experience that’s tailor-made for children. The device won’t be wearable as such; it’ll maintain the interactive elements that’ve made the View-Master unique, with application availability across Android, iOS, and Windows. The device will apparently fit most smartphones (compatibility has yet to be detailed) and uses an uprated version of Google Cardboard made from plastic. Content can apparently be purchased on plastic reels or downloaded via the application (exactly how this works hasn’t been disclosed yet).
http://www.extremetech.com/wp-conten...er-640x559.jpg
The new Googlefied version is clearly based on the classic styling of the original
The Verge wasn’t impressed with the initial run of viewer apps, claiming that the environments look like crude video games and that the informative captions “don’t do much to help.” The photos are getting better ratings, and the entire idea of updating a physical, reel-based system is both nostalgic for existing adults and possibly a cool idea for kids as well, if the content can be brought up to snuff.
One issue that the Verge brings up indirectly in its coverage is the simple fact that VR content will live and die on the strength of its material. This has been brought up in coverage at Ars Technica and from time to time in other areas — games that want to include VR options have to be explicitly designed for it. Standard video effects that work perfectly well on a monitor aren’t well suited to a head-mounted display.
Done properly, Mattel’s View-Master could be an amazing toy that blends old-school physical hardware with brand-new content in resolutions and quality levels that children in the 1960s could only dream of. If the content is lackluster, however, the Mattel View-Master will go down as a failed kludge — a device that tried to bridge the gap between real-world toys and virtual entertainment and fell squarely into the hole instead.
More...
-
Nvidia kills mobile GPU overclocking in latest driver update, irate customers up in arms
http://www.extremetech.com/wp-conten...re-640x353.jpg
Nvidia’s mobile Maxwell parts have won significant enthusiast acclaim since launch thanks to excellent performance and relatively low power consumption. Boutique builders and enthusiasts alike also tend to enjoy pushing the envelope, and Maxwell’s manufacturing characteristics make it eminently suited to overclocking. Now Nvidia is cracking down on these options with a driver update that removes overclocking features that some vendors apparently sold to customers.
As DailyTech points out, part of what makes this driver update problematic is that system manufacturers actively advertise their hardware as having overclock support baked in to mobile products. Asus, MSI, Dell (Alienware) and Sager have apparently all sold models with overclocking as a core feature, as shown in the copy below.
http://www.extremetech.com/wp-conten...X980M-Asus.jpg
Nvidia apparently cut off the overclocking feature with its 347.09 driver and kept it off with the 347.52 driver released last week. Mobile customers have been demanding answers in the company forums, with Nvidia finally weighing in to tell its users that this feature had previously only been available because of a “bug” and that its removal constituted a return to proper function rather than any removal of capability.
Under normal circumstances, I’d call this a simple case of Nvidia adjusting a capability whether users like it or not, but the fact that multiple vendors explicitly advertised and sold hardware based on overclocking complicates matters. It’s not clear if Asus or the other manufacturers charged extra for factory-overclocked hardware or if they simply shipped the systems with higher stock speeds, but we know that OEMs typically do put a price premium on the feature.
To date, Nvidia has not responded formally or indicated whether it will reconsider its stance on overclocking. The company isn’t currently under much competitive pressure to do so — it dominates the high-end GPU market, and while AMD is rumored to have a new set of cards coming in 2015, it’s not clear when those cards will launch or what the mobile flavors will look like. For now, mobile Maxwell has a lock on the enthusiast space. Some customers are claiming that they’re angry enough to quit using Team Green, but performance has a persuasive siren song all its own, and the performance impact of disabling overclocking is going to be in the 5-10% range for the majority of users. If customers can prove they paid extra for the feature, that could open the door to potential claims against the OEMs themselves.
For Nvidia, this surge of attention on its mobile overclocking is a likely-unwelcome follow-up to concerns about the GTX 970’s memory allocation and the confusion and allegations swirling around mobile G-Sync. While none of these are knock-out blows, they continue to rile segments of the enthusiast community.
More...
-
PS4 will continue to outsell the Xbox One through 2018, report says
http://www.extremetech.com/wp-conten...ts-640x353.jpg
Microsoft has made massive changes to its console strategy over the last two years. The Xbox One is cheaper, less restrictive, and more feature-rich than it once was. But in spite of these strides in the right direction, the PS4 still remains dominant. And if a recent analyst report is to be believed, Microsoft’s console is doomed to play second fiddle to the PS4 well into 2018.
Recently, Strategy Analytics released a report on the sales trends of the current console generation. By the end of 2018, this analyst predicts that Sony will have sold 80 million PS4s. On the other hand, Microsoft will have sold just 57 million Xbox Ones. Neither number is anything to sneeze at, but that estimation puts Sony way out in front.
http://www.extremetech.com/wp-conten...ntrollers.jpeg
More than anything, this report just adds credence to the idea that the PS4 is Sony’s return to the glory days of the PS2. Perhaps the sales gap won’t be quite as steep as it was two generations ago, but the Xbox team must be upset that they’ve burned so much good will with early mistakes. Hot off the Xbox 360, this was Microsoft’s generation to lose, and lose it did.
What about the Wii U? Well, it looks like Nintendo’s platform will be hovering between 15 and 20 million if this report is to be believed. Even with last year’s top-notch first-party showing, hardware sales were weak. The Legend of Zelda is supposed to ship sometime in 2015, but even that won’t be able to pull the Wii U out of the gutter. I’m certainly not saying that it will sink Nintendo, but the Wii U continues to reek of failure.
Meanwhile, PS4 exclusives have been few and far between. Even worse, Sony’s The Order: 1886 has received shockingly harsh review scores. At this point, Bloodborne and Uncharted 4 are pretty much the only major exclusives on the horizon. The PS4’s hardware is superior, and Sony’s messaging has been more consistent, but the severe lack of compelling titles could hurt the PS4 in the long run.
So, can Microsoft change its fate? It’s definitely in the realm of possibilities. Redmond has been surprisingly proactive with massive shifts in strategy, and the Xbox One even managed to outsell the PS4 for a number of months in 2014. At this point, what these consoles need more than anything is content. And since Microsoft has Halo, Gears of War, and Tomb Raider on lockdown, the future looks bright for Xbox One owners even if the sales gap never shrinks.
More...
-
Can this Arduino box stop online cheating in video games?
http://www.extremetech.com/wp-conten...1-640x353.jpeg
Cheating has always existed in multiplayer games, and for the most part it’s just a minor annoyance. But now that there are millions of dollars up for grabs at eSports tournaments, cheating has become a problem with much larger stakes. So, how do we fix it? Well, a man by the name of David Titarenco thinks he’s solved part of the problem with a tiny little Arduino box he calls “Game:ref.”
A few weeks ago, Titarenco wrote a lengthy blog post about his hardware anti-cheat solution for Counter-Strike: Global Offensive. It got a fair bit of attention on Reddit, and Titarenco is now working to get this device into the hands of tournament organizers and gamers alike. It has since been dubbed Game:ref, and unsurprisingly, a Kickstarter project is in the works.
http://www.extremetech.com/wp-conten.../Game-ref.jpeg
At its core, this Arduino-based solution is designed to detect discrepancies between user input and what’s happening in the game. You simply pass the user’s input through the Game:ref on the way into the PC, and then compare those results with the data on the server side of things. If the two are drastically out of step, there’s reason to believe that there’s cheating software running on the user’s PC. It’s certainly not a silver bullet for every single method of cheating, but it might end up being a useful puzzle piece in the eSports scene.
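The core comparison can be as simple as lining up two event streams, one captured at the hardware pass-through and one reported by the game server, and flagging any action the game registered that never passed through the device. The toy Python sketch below is our own illustration of that idea, not Game:ref’s implementation; the timestamps and 20ms tolerance are made up.

def find_suspect_events(device_log, server_log, tolerance_ms=20):
    """Flag server-side actions with no matching hardware input close in time.

    Each log entry is a (timestamp_ms, action) pair, e.g. (10512, "fire").
    """
    suspects = []
    for ts, action in server_log:
        matched = any(
            dev_action == action and abs(dev_ts - ts) <= tolerance_ms
            for dev_ts, dev_action in device_log
        )
        if not matched:
            suspects.append((ts, action))
    return suspects

# The server registered a "fire" at 10700 ms that never passed through the device.
device_log = [(10500, "fire"), (10650, "move_left")]
server_log = [(10512, "fire"), (10655, "move_left"), (10700, "fire")]
print(find_suspect_events(device_log, server_log))   # -> [(10700, 'fire')]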
Keep in mind, this concept isn’t entirely new. In fact, Titarenco himself credits Intel’s “Fair Online Gaming” concept with inspiring this implementation. Earlier this week, he told Polygon that the devices themselves will be made for under $100 each, so this relatively cheap solution could gain traction where Intel’s never did.
So, can this really stop cheaters completely? Certainly not. As soon as someone has physical access to the device itself, all bets are off, and Titarenco seems aware of that. He’s going after input-based software cheating exclusively here, but there’s no real guarantee that will work perfectly either. Given enough time and financial incentive, it’s conceivable that cheaters could target this specific detection method and find a workaround. At best, I can see this working as an additional layer of protection in a tournament setting, but that’s about it.
Frankly, I find it hard to believe that a perfect anti-cheat solution will ever exist — especially with so much money on the line. The best we can do is gather as much data as possible, implement strict regulations in tournaments, and keep our ear to the ground for the latest advancements in online cheating.
More...
-
Nvidia slapped with class-action lawsuit over GTX 970 memory issues [UPDATED]
http://www.extremetech.com/wp-conten...re-348x196.jpg
Nvidia’s CEO, Jen-Hsun Huang, hasn’t directly responded to the class-action lawsuit allegations, but he has written a blog post responding to the larger situation. Said post isn’t likely to win many converts to Nvidia’s way of thinking, since Huang refers to the memory issue as a “feature,” noting that “Instead of being excited that we invented a way to increase memory of the GTX 970 from 3GB to 4GB, some were disappointed that we didn’t better describe the segmented nature of the architecture for that last 1GB of memory.”
He then claims that this was done because games “are using more memory than ever.”
The problem with Jen-Hsun’s statement is that it’s nearly impossible to test. There is evidence that the GTX 970 takes a performance hit in SLI mode compared with the GTX 980 at high resolutions, and that the impact might be related to the memory segmentation. In order to argue that the GTX 970 benefits from this alternate arrangement, Nvidia would have to demonstrate that a GTX 970 with 3-3.5GB of RAM is slower than a GTX 970 with 4GB of RAM in a segmented configuration. No such evidence has been given, which makes the CEO’s statement sound like a claim that unhappy users are ungrateful and mis-evaluating the product. That’s not going to sit well with the small-but-vocal group of people who just dropped $700 on GTX 970s in SLI.
To his credit, the Nvidia CEO repeatedly pledges to do better and to communicate more clearly, but the entire tone of the blog post suggests he doesn’t understand precisely what people are unhappy about.
Original story follows:
Last month, we detailed how Nvidia’s GTX 970 has a memory design that limits its access to the last 512MB of RAM on the card. Now the company is facing a class-action lawsuit alleging that it deliberately misled consumers about the capabilities of the GTX 970 in multiple respects.
Nvidia has acknowledged that it “miscommunicated” the number of ROPS (Render Output units) and the L2 cache capacity of the GTX 970 (1792KB, not 2048KB), but insists that these issues were inadvertent and not a deliberate attempt to mislead customers. There’s probably some truth to this — Nvidia adopted a new approach to blocking off bad L2 blocks with Maxwell to allow the company to retain more performance. It’s possible that some of the technical ramifications of this approach weren’t properly communicated to the PR team, and thus never passed on to reviewers.
http://www.extremetech.com/wp-conten...89-640x494.jpg
The memory arrangement on the GTX 970.
Nvidia’s decision to divide the GTX 970’s RAM into partitions is a logical extension of this die-saving mechanism, but it means that the GPU core has effective access to just seven of its eight memory controllers. Accessing that eighth controller has to flow through a crossbar, and is as much as 80% slower than the other accesses. Nvidia’s solution to this problem was to tell the GPU to use just 3.5GB of its available memory pool and to avoid the last 512MB whenever possible. In practice, this means that the GTX 970 flushes old data out of memory more aggressively than the GTX 980, which will fill its entire 4GB buffer.
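Some back-of-the-envelope arithmetic shows why the split matters. If you assume the GTX 970’s advertised ~224GB/s of peak bandwidth is spread evenly across its eight 32-bit memory controllers (an approximation of ours, not a figure Nvidia has published for the segments), the fast 3.5GB partition gets seven controllers’ worth of bandwidth while the final 512MB sits behind a single one:

total_bandwidth_gbs = 224        # GTX 970 launch spec: 256-bit GDDR5 at 7Gbps (approximate)
controllers = 8

per_controller = total_bandwidth_gbs / controllers   # ~28 GB/s per 32-bit controller
fast_segment = 7 * per_controller                    # 3.5GB partition: ~196 GB/s
slow_segment = 1 * per_controller                    # final 512MB: ~28 GB/s

print(f"3.5GB segment: ~{fast_segment:.0f} GB/s")
print(f"0.5GB segment: ~{slow_segment:.0f} GB/s, "
      f"roughly {(1 - slow_segment / fast_segment) * 100:.0f}% slower than the fast partition")

That simple division lands in the same ballpark as the “as much as 80% slower” figure above; the real penalty also depends on crossbar contention, which this estimate ignores.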
Of principles, practices, and performance
Nvidia has maintained that the performance impact from disabling the last 512MB of memory in single-GPU configurations is quite small, at around 4%. Our own performance tests found little reason to disagree with this at playable resolutions; at 4K we saw signs that the GTX 980 might be superior to the 970, but frame rates had already fallen to the point where the games weren’t very playable. At the time, I theorized that SLI performance might take a hit where single GPUs didn’t. Not only is there a certain amount of memory overhead associated with multi-GPU rendering, but two graphics cards can drive playable 4K frame rates where a single card chokes. We’ve been recommending that serious 4K gamers explore multi-GPU configurations, and the GTX 970 appeared to be an ideal match for SLI when it first came out.
Testing by PC Perspective confirmed that in at least some cases, the GTX 970’s SLI performance does appear to take a larger-than-expected hit compared to the GTX 980. Still, they note that the issues only manifest at the highest detail levels and graphics settings — the vast majority of users are simply unlikely to encounter them.
http://www.extremetech.com/wp-conten...x1440_STUT.png
Graph courtesy of PC Perspective
One thing that makes the complaint against Nvidia interesting is that the facts of the case aren’t really in dispute. Nvidia did miscommunicate the specifications of its products, and it did misrepresent those figures to the public (advertising 4GB of RAM when only 3.5GB is available in the majority of cases). The question is whether or not those failings were deliberate and if they resulted in significant harm to end users.
The vast majority of customers who bought a GTX 970 will not be materially impacted by the limits on the card’s performance — but people who bought a pair of them in SLI configurations may have a solid argument for how Nvidia’s failure to market the card properly led them to purchase the wrong product. It’s also fair to note that this issue could change competitive multi-GPU standings. AMD’s R9 290X starts $10 below the GTX 970 at Newegg, but has none of the same memory limitations. The GTX 970 is still a potent card, even with these limitations and caveats, but it’s not quite as strong as it seemed on launch day — and there are obviously some Nvidia customers who feel misled.
More...
-
PowerVR goes 4K with GT7900 for game consoles
http://www.extremetech.com/wp-conten...R3-348x196.jpg
PowerVR is announcing its new high-end GPU architecture today, in preparation for both Mobile World Congress and the Game Developers Conference (MWC and GDC, respectively). Imagination Technologies has lost some ground to companies like Qualcomm in recent years, but its cores continue to power devices from Samsung, Intel, MediaTek, and of course, Apple. The new GT7900 is meant to crown the Series 7 product family with a GPU beefy enough for high-end products — including 4K support at 60fps — as well as what Imagination is classifying as “affordable” game consoles.
http://www.extremetech.com/wp-conten...00-640x516.png
First, some specifics: The Series 7 family is based on the Series 6 “Rogue” GPUs already shipping in a number of devices, but it adds support for hardware fixed-function tessellation (via the Tessellation Co-Processor), a stronger geometry front-end, and an improved Compute Data Master that PowerVR claims can schedule wavefronts much more quickly. OpenGL ES 3.1 plus the Android Extension Pack is also supported. The new GT7900 targets 14nm and 16nm processes, and can offer up to 800 GFLOPS in FP32 mode (what we’d typically call single-precision) and up to 1.6 TFLOPS in FP16 mode.
One of the more interesting features of the Series 7 family is its support for what PowerVR calls PowerGearing. The PowerVR cores can shut down sections of the design in power-constrained scenarios, in order to ensure only areas of the die that need to be active are actually powered up. The end result should be a GPU that doesn’t throttle nearly as badly as competing solutions.
http://www.extremetech.com/wp-conten...11-640x331.jpg
On paper, the GT7900 is a beast, with 512 ALU cores and enough horsepower to even challenge the low-end integrated GPU market if the drivers were capable enough. Imagination Technologies has even created an HPC edition of the Series 7 family — its first modest foray into high-end GPU-powered supercomputing. We don’t know much about the chip’s render outputs (ROPs) or its memory support, but the older Series 6 chips had up to 12 ROPS. The GT7900 could sport 32, with presumed support for at least dual-channel LPDDR4.
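Those headline FLOPS figures and the 512-ALU count also let you back out the clock speed Imagination is assuming. Treating each ALU as capable of one fused multiply-add (two FP32 operations) per clock, which is a standard convention on our part rather than something Imagination has specified here, 800 GFLOPS implies a core clock of roughly 780MHz:

alus = 512
flops_per_alu_per_clock = 2       # one FMA = 2 FP32 operations (our assumption)
fp32_gflops = 800                 # Imagination's quoted FP32 figure

clock_mhz = fp32_gflops * 1e9 / (alus * flops_per_alu_per_clock) / 1e6
print(f"Implied core clock: ~{clock_mhz:.0f} MHz")   # ~781 MHz

# The 1.6 TFLOPS FP16 figure is exactly double, consistent with packing two
# half-precision operations into each FP32 ALU per clock.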
Quad-channel memory configurations (if they exist) could actually give this chip enough clout to rightly call itself a competitor for last-generation consoles, if it were equipped in a set-top box with a significant thermal envelope. Imagination is also looking to push the boundaries of gaming in other ways — last year the company unveiled an architecture that would incorporate a ray tracing hardware block directly into a GPU core.
The problem with targeting the affordable console market is that every previous attempt to do this has died. From Ouya to Nvidia’s Shield, anyone who attempted to capitalize on the idea of a premium Android gaming market has either withered or been forced to drastically shift focus. Nvidia may have built two successive Shield devices, but the company chose to lead with automotive designs at CES 2015 — its powerful successor to the Tegra K1, the Tegra X1, has only been talked about as a vehicle processor. I suppose Nvidia could still announce a Shield update built around the X1, but the fact that the company didn’t even mention it at CES, where Tegra was previously launched as a premium mobile gaming part, speaks volumes about where Nvidia expects its revenue to come from in this space.
For its part, Imagination Technologies expects the GT7900 to land in micro-servers, full-size notebooks, and game consoles. It’s an impressive potential resume, but we’ll see if the ecosystem exists to support such lofty goals. If I had to guess, I’d wager this first chip is the proof of concept that will demonstrate the company can compete outside its traditional smartphone and tablet markets. Future cores, possibly built with support for Samsung’s nascent Wide I/O standard, will be more likely to succeed.
More...
-
Report claims DirectX 12 will enable AMD and Nvidia GPUs to work side-by-side
http://www.extremetech.com/wp-conten...en-348x196.jpg
With the Game Developers Conference right around the corner, we’ve started to see more gaming and technology announcements cropping up, but a new report on DirectX 12 is certain to raise the eyebrows of any PC gamer. It’s been reported that DirectX 12 — Microsoft’s upcoming, low-latency, close(r)-to-metal API that replaces DirectX 11 — will be capable of running across AMD and Nvidia GPUs at the same time.
A “source with knowledge of the matter” told Tom’s Hardware that DirectX 12 will support asynchronous workloads across multiple GPUs, and that one extension of this support is that a task could theoretically be split between two different video cards from different manufacturers.
For many users, this kind of claim is the stuff of legend. One of the factors that distinguishes the AMD – Nvidia competition from the AMD – Intel battle is that Teams Red and Green regularly switch positions. It’s not unusual for one vendor to have the absolute performance crown while the other has a strong price/performance position at the $200 mark, or for one company to lead for several years until leapfrogged by the other.
The other advantage of combining GPU technologies is that it could allow for multi-GPU performance on Intel-Nvidia systems or even systems with an AMD CPU / APU and an Nvidia GPU. We took this question to several developers we know to find out if the initial report was accurate or not. Based on what we heard, it’s true — DirectX 12 will allow developers to combine GPUs from different vendors and render to all of them simultaneously.
The future of multi-GPU support
We’re using Mantle as a jumping-off point for this conversation based on its high-level similarity to DirectX 12. The two APIs may be different at a programming level, but they’re both built to accomplish many of the same tasks. One feature of both is that developers can control GPU workloads with much more precision than they could with DX11.
http://www.extremetech.com/wp-conten...50-640x325.jpg
Mantle and DirectX 12 have similar capabilities in this regard
There are several benefits to this. For the past ten years, multi-GPU configurations from both AMD and Nvidia have been handicapped by the need to duplicate all texture and geometry data across both video cards. If you have two GPUs with 4GB of RAM each, you don’t have 8GB of VRAM — you have 2x4GB.
http://www.extremetech.com/wp-conten.../QuadSLI-2.jpg
Nvidia and AMD used to support both AFR and SFR, but DX11 was AFR-only
One of the implications of DirectX 12’s new multi-GPU capabilities is that the current method of rendering via Alternate Frame Rendering, where one GPU handles the odd frames and the other handles the even frames, may be superseded in some cases by superior methods. We examined the performance impact of Split Frame Rendering in Civilization: Beyond Earth last year, and found that SFR offered vastly improved frame times compared to traditional AFR.
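A toy model makes the distinction clear (plain Python, not a real rendering API): AFR hands whole frames to GPUs in alternation, so each individual frame still takes as long as a single card needs, while SFR divides every frame so both cards contribute to each frame’s latency. The 16ms-per-frame figure below is arbitrary.

FRAME_WORK_MS = 16.0     # time for one GPU to render a whole frame (arbitrary)
GPUS = 2

def afr_frame_latency(frame_work_ms, gpus):
    # Alternate Frame Rendering: one GPU renders the entire frame.
    return frame_work_ms

def sfr_frame_latency(frame_work_ms, gpus):
    # Split Frame Rendering: the frame is divided across all GPUs.
    return frame_work_ms / gpus

print(f"AFR: ~{afr_frame_latency(FRAME_WORK_MS, GPUS):.0f} ms per frame "
      f"(throughput scales, but individual frame latency doesn't)")
print(f"SFR: ~{sfr_frame_latency(FRAME_WORK_MS, GPUS):.0f} ms per frame "
      f"(both GPUs work on the same frame, which helps smooth frame times)")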
http://www.extremetech.com/wp-conten...x2-640x352.png
The R9 295X2 in SFR (Mantle) vs. AFR (D3D) in Civilization: Beyond Earth. Smoother lines = better performance.
We expect DirectX 12 to offer the same capabilities as Mantle at a high level, but unlike Mantle, it’s explicitly designed to support multiple GPUs from Intel, AMD, and Nvidia. Let’s take a simple example — an Intel CPU with integrated graphics and an AMD or Nvidia GPU. Each GPU is exposed to the 3D application, which means the workload can theoretically be run across both GPUs simultaneously. It’s not clear which GPU would drive the monitor or how output would be handled, but LucidLogix (which actually tried its hand at providing a hardware solution for multi-vendor GPU support once upon a time) later made its name with a virtualized monitor driver that served exactly this purpose.
http://www.extremetech.com/wp-conten...or-640x353.jpg
AMD has talked up this capability for its products for quite some time.
The developers we spoke to indicated that AMD and Nvidia wouldn’t necessarily need to support the feature in-driver — there are certain kinds of rendering tasks that could be split between graphics cards by the API itself. That’s encouraging news, since features that require significant driver support tend to be less popular, but it’s not the only potential issue. The advantage of DX12 is that it gives the developer more control over how multi-GPU support is implemented, but that also means the driver handles less of the work. Support for these features will be up to developers, and that’s assuming AMD and Nvidia don’t take steps to discourage such cross-compatibility in their drivers. Historically, Nvidia has been less friendly to multi-vendor GPU configurations than AMD, but DirectX 12 could hit reset on the feature.
In an ideal world, this kind of capability could be used to improve gaming performance on nearly all devices. The vast majority of Intel and AMD CPUs now include GPUs onboard — the ability to tap those cores for specialized processing or just a further performance boost would be a welcome capability. DirectX 12 is expected to cut power consumption and boost performance in at least some cases, though which GPUs will offer “full” DX12 support isn’t entirely clear yet. DX12’s multiple-vendor rendering mode wouldn’t allow other features, like PhysX, to automatically operate in such configurations. Nvidia has historically cracked down on this kind of hybrid support, and the company would have to change its policies to allow it to operate.
More...
-
Next-generation Vulkan API could be Valve’s killer advantage in battling Microsoft
http://www.extremetech.com/wp-conten...ve-348x196.jpg
Last week, we covered the announcement of the Khronos Group’s Vulkan API, as well as information on how AMD’s Mantle formed the fundamental basis of the new standard. Now that some additional information on Vulkan has become available, it seems likely that this new API will form the basis of Valve’s SteamOS push, while Direct3D 12 remains the default option for Microsoft’s PC and Xbox gaming initiatives. At first glance, this doesn’t seem much different from the current status quo. But there are reasons to think that Vulkan and D3D12 do more than hit reset on the long-standing OpenGL vs. D3D battles of yesteryear.
One critical distinction between the old API battles and the current situation is that no one seems to be arguing that either Vulkan or Direct3D has any critical, API-specific advantage that the other lacks. All of the features that AMD first debuted with Mantle are baked into Vulkan, and if Direct3D 12 offers any must-have capabilities, Microsoft has yet to say so. The big questions in play here have less to do with which API you feel is technically superior, and more to do with what you think the future of computer gaming should look like.
http://www.extremetech.com/wp-conten...ay-640x369.jpg
For more than a decade, at least on the PC side, the answer to that question has been simple: It looks like Direct3D. OpenGL support may never have technically gone away, but the overwhelming majority of PC games have shipped with Direct3D by default, and OpenGL implemented either as a secondary option or not at all. Valve’s SteamOS may have arrived with great sound and fury before fading away into Valve Time, but developers ExtremeTech spoke to say that Valve has been very active behind the scenes. A recent report at Ars Technica on the state of Linux gaming underscores this point, noting that Valve’s steady commitment to offering a Linux distro has increased the size of the market and driven interest in Linux as a gaming alternative.
http://www.extremetech.com/wp-conten...ay-640x351.jpg
Valve, moreover, doesn’t need to push SteamOS to encourage developers to use Vulkan. At the unveil last week, Valve was already showing off DOTA 2 running on Vulkan, as shown below.
If the Source 2 engine treats Vulkan as a preferred API, or if Valve simply encourages devs to adopt it over D3D for Steam titles, it can drive API adoption without requiring developers to support a new operating system at the same time — while also making it much easier to port to a new OS if Valve decides to go that route.
It’s funny, in a way, to look back at how far we’ve come. SteamOS was reportedly born out of Gabe Newell’s anger and frustration with Microsoft Windows. Back in 2012, Newell told VentureBeat, “I think that Windows 8 is kind of a catastrophe for everybody in the PC space. I think that we’re going to lose some of the top-tier PC [original equipment manufacturers].” Valve’s decision to develop its own operating system was likely driven at least in part by the specter of the Windows Store, which had the power (in theory) to steal Steam’s users and slash its market share. In reality, of course, this didn’t happen — but then, SteamOS remains more a phantom than a shipping product. As the market turns toward Windows 10, Valve continues to have an arguably stronger hold than Microsoft over PC gaming.
http://www.extremetech.com/wp-conten...am-640x269.jpg
One could argue, though, that Microsoft’s failure to capitalize on the Windows Store or to move PC gamers to Windows 8 merely gave Valve an extended window to get its own OS and API implementations right. Windows 10 represents the real battleground, and a fresh opportunity for MS to disrupt Valve’s domination of PC game distribution. If you’re Valve — and keep in mind that Steam is a staggering revenue generator for the company, given that Valve gets a cut of every game sold — then a rejuvenated Windows Store, with a new API and an OS given away for free, is a potential threat.
If this seems far-fetched, consider the chain of logic. Valve knows that gaming is a key revenue source on both iOS and Android, and that Microsoft, which plans to give Windows 10 away for free to millions of qualifying customers, is going to be looking for ways to replace that revenue. The Windows Store is the most obvious choice, and it also dovetails with Microsoft’s plans to unify PC and Xbox gaming as well as Windows product development. If you’re Valve, the Windows Store is still a threat.
Valve can’t force gamers to adopt SteamOS en masse, but it can at least hedge its bets by encouraging developers to optimize for an API besides Direct3D. Using Vulkan should make cross-platform games easier to develop, which in turn encourages the creation of Linux and OS X versions. The more games are supported under alternative operating systems, the easier it is (in theory) to migrate users towards those OSes, and the bigger the backstop against Direct3D and Microsoft. SteamOS might be a minor project now, but the Steam platform, as a whole, is a juggernaut. Valve’s effort to deliver a Vulkan driver specifically for Intel graphics under SteamOS is an example of how it could boost development for its own platform and improve performance across third-party GPUs.
Since Direct3D 12 and Vulkan reportedly perform many of the same tasks and allow for the same types of fine-grained control, which we see adopted more widely may come down to programmer familiarity and the degree to which a developer is dependent on either Microsoft’s good graces or Valve’s. The end results for consumers should still be vastly improved multi-threading, better power consumption, and superior multi-GPU support. But the Vulkan-versus-D3D12 question could easily become a war for the future of PC gaming and its associated revenues depending on whether Valve and Microsoft make nice or not.
More...
-
Potent Penguinistas: Steam for Linux crosses 1,000-game threshold
http://www.extremetech.com/wp-conten...am-348x196.jpg
When Valve announced that it would begin porting games to Linux as part of its SteamOS initiative, the move was greeted with skepticism in many quarters. Could Valve move the industry back towards cross-platform gaming when Windows had locked it down for so long? The answer clearly seems to be yes — the Linux side has crossed a significant milestone, with more than a thousand actual games available (including software, demos, and videos, the total stands at more than 2,000 items). Mac OS and Windows still have more games in total (1,613 for Mac and 4,814 for PC), but crossing the 1,000 mark is a significant achievement and a clear psychological milestone.
That said, there’s a definite difference between the types of games available on Linux and those available for Windows. New releases for Linux include Cities: Skylines, and Hotline Miami 2: Wrong Number, but the vast majority of AAA titles are still Windows-centric.
The simplest way to check this is to sort the store by price, high to low. The Linux SteamOS store has two games at $59.95, and by the end of the first page (25 results) prices have dropped to $29.99. On the PC side, there are 29 titles at $59 or above, and more than 150 titles sell for $34.99 or higher.
http://www.extremetech.com/wp-conten...ng-640x527.jpg
We’re not suggesting that game price is an indicator of game quality, but the significant difference in game prices indicates that relatively few studios see Linux or SteamOS as a good return on investment for now. That’s not unusual or unexpected — Valve has been working with developers and game designers to change those perceptions one game and one gamer at a time. There are also early quality issues to be ironed out — when SteamOS launched, the graphical differences between the Direct3D and OpenGL versions of a title ranged from nonexistent to a clear win for the Windows platform.
The more developers sign on to bring titles over to SteamOS, the smaller the quality gap will be, particularly if more developers move to the next-generation Vulkan API. As for the long-term chances of Valve’s SteamOS gaining significant market share, I’ll admit that it seems unlikely — but then, not many years ago, the very idea of gaming on a Linux box was nearly a contradiction in terms. Outside of a dedicated handful of devs and some limited compatibility from Wine, if you used Linux, you did your gaming elsewhere.
That’s finally starting to change. And while it may not revolutionize the game industry or break Microsoft’s grip, it’s still a marked departure from the status quo of the past 15 years.
More...
-
Sony’s firmware 2.50 will finally bring suspend and resume to the PS4
http://www.extremetech.com/wp-conten...S4-640x353.png
Finally, suspend and resume is headed for the PS4. Sony officially confirmed that the long-awaited feature is coming in the next major firmware revision. Other convenience and performance tweaks will be added to firmware 2.50 (dubbed “Yukimura”) as well, so PS4 users will have an even better gaming experience going forward.
In a blog post, Sony’s Scott McCarthy confirmed that game suspension is being added to the PS4’s repertoire. After the update, rest mode will merely pause the game instead of closing it completely. And when you exit rest mode, your game will be exactly where you left it. This feature was promised long before the PS4 ever shipped, so it’s about time that Sony finally delivers this functionality — especially since Xbox One and Vita users already enjoy suspend and resume.
http://www.extremetech.com/wp-conten...ep-640x353.png
In addition, this firmware update will bring a number of important features. Interestingly, Remote Play and Share Play will be upgraded to allow for both 30fps and 60fps streaming. If your connection and game of choice can handle 60fps, you’ll no longer be hamstrung by Sony’s streaming technology. It might be less feasible over the internet or using WiFi, but I frequently use Remote Play over ethernet on my PlayStation TV. For a small (but vocal) subset of PS4 users, this is a big deal.
Other features: Sub-accounts created for children can now be upgraded to master accounts when the child comes of age. And if you’ve linked your Facebook account to your PS4, you’ll be able to search for Facebook friends with PSN accounts. Those are relatively small changes, but it’s always nice to see Sony smoothing out the rough edges.
If Facebook and YouTube weren’t enough, Dailymotion is now built into the video sharing functionality at the hardware level. Of course, you can always save your videos to a USB drive, and then post them wherever you please from your computer.
More...
-
Nvidia’s 2016 roadmap shows huge performance gains from upcoming Pascal architecture
http://www.extremetech.com/wp-conten...un-348x196.jpg
At Nvidia’s keynote today to kick off GTC, CEO Jen-Hsun Huang spent most of his time discussing Nvidia’s various deep learning initiatives and pushing the idea of Tegra as integral to the self-driving car. He did, however, take time to introduce a new Titan X GPU — and to discuss the future of Nvidia’s roadmap.
When Nvidia’s next-generation GPU architecture arrives next year, codenamed Pascal, it’s going to pack a variety of performance improvements for scientific computing — though their impact on the gaming world is less clear.
Let’s start at the beginning:
http://www.extremetech.com/wp-conten...l1-640x223.png
Pascal is Nvidia’s follow-up to Maxwell, and the first desktop chip to use TSMC’s 16nmFF+ (FinFET+) process. This is the second-generation follow-up to TSMC’s first FinFET technology — the first generation is expected to be available this year, while FF+ won’t ship until sometime next year. This confirms that Nvidia chose to skip 20nm — something we predicted nearly three years ago.
Jen-Hsun claims that Pascal will achieve over 2x the performance per watt of Maxwell in single-precision general matrix multiplication (SGEMM). But there are two caveats to this claim, as far as gamers are concerned. First, recall that improvements to performance per watt, while certainly vital and important, are not the same thing as improvements to top-line performance. The second thing to keep in mind is that boosting the card’s SGEMM performance doesn’t necessarily tell us much about gaming.
http://www.extremetech.com/wp-conten...MM-640x193.png
The graph above, drawn from Nvidia’s own data on Fermi-based Tesla cards compared with the K20 (GK110), makes the point. While the K20X was much, much faster than Fermi, it was rarely 3x faster in actual gaming tests, as this comparison from Anandtech makes clear, despite being 3.2x faster than Fermi in SGEMM calculations.
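For reference, SGEMM throughput is derived straight from the matrix dimensions: multiplying an MxK matrix by a KxN matrix takes roughly 2*M*N*K floating-point operations, and the GFLOPS figure is simply that operation count divided by the measured time. The quick sketch below uses NumPy on the CPU purely to show the arithmetic, not to benchmark any GPU.

import time
import numpy as np

M = N = K = 2048
a = np.random.rand(M, K).astype(np.float32)
b = np.random.rand(K, N).astype(np.float32)

start = time.perf_counter()
c = a @ b                               # single-precision matrix multiply (SGEMM)
elapsed = time.perf_counter() - start

flops = 2 * M * N * K                   # one multiply and one add per inner-loop step
print(f"SGEMM: {flops / elapsed / 1e9:.1f} GFLOPS in {elapsed * 1000:.1f} ms")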
Pascal’s next improvement will be its use of HBM, or High Bandwidth Memory. Nvidia is claiming it will offer up to 32GB of RAM per GPU at 3x the memory bandwidth. That would put Pascal at close to 1TB of theoretical bandwidth depending on RAM clock — a huge leap forward for all GPUs.
http://www.extremetech.com/wp-conten...l2-640x220.png
Jen-Hsun’s napkin math claims Pascal will offer up to 10x Maxwell performance “in extremely rough estimates.”
Note: Nvidia might roll out that much memory bandwidth to its consumer products, but 32GB frame buffers are unlikely to jump to the mainstream next generation. Even the most optimistic developers would be hard-pressed to use that much RAM when the majority of the market is still using GPUs with 2GB or less.
Pascal will be the first Nvidia product to debut with variable precision capability. If this sounds familiar, it’s because AMD appears to have debuted a similar capability last year.
http://www.extremetech.com/wp-conten...SA-640x360.png
It’s not clear yet how Nvidia’s lower-precision capabilities dovetail with AMDs, but Jen-Hsun referred to 4x the FP16 performance in mixed mode compared with standard (he might have been referencing single or double-precision).
Finally, Pascal will be the first Nvidia GPU to use NVLink, a custom high-bandwidth solution for Nvidia GPUs. Again, for now, NVLink is aimed at enterprise customers — last year, Jen-Hsun noted that the implementations for ARM and IBM CPUs had been finished, but that x86 chips faced non-technical issues (likely licensing problems). Nvidia could still use NVLink in a consumer dual-GPU card, however.
Pascal seems likely to deliver a huge uptick in Nvidia’s performance and efficiency. And given that the company managed to eke the equivalent of a generation’s worth of performance out of Maxwell while sticking with 28nm, there’s no reason to think it won’t pull it off. In the scientific market, at least, Nvidia is gunning for Xeon Phi — AMD has very little presence in this space, and that seems unlikely to change. If Sunnyvale does launch a new architecture in the next few months, we could actually see some of these features debuting first on Team Red, but the fabled Fiji’s capabilities remain more rumor than fact.
More...
-
Nvidia GeForce GTX Titan X reviewed: Crushing the single-GPU market
http://www.extremetech.com/wp-conten...nX-348x196.jpg
Today, Nvidia is launching its new, ultra-high-end luxury GPU, the GeForce GTX Titan X. This is the fourth GPU to carry the Titan brand, but only the second architecture to do so. When Nvidia launched the first Titan, it used a cut-down version of its workstation and HPC processor, the GK110, with just 14 of its 15 SMX units enabled. Later cards, like the Titan Black, added RAM and enabled the last SMX unit, while the dual-GPU Titan Z packed two Titan Black GPUs onto a single card, with mixed results.
http://www.extremetech.com/wp-conten...00-640x457.png
GM200, full fat edition
The Titan X is based on Nvidia’s GM200 processor and ships with all 24 of its SMMs enabled (that’s the current term for Nvidia’s compute units). The chip has 3072 CUDA cores, and a whopping 12GB of GDDR5 memory. To those of you concerned about a GTX 970-style problem, rest assured: There are no bifurcated memory issues here.
http://www.extremetech.com/wp-conten...itan-Specs.png
Many of the Titan X’s specifications have landed as predicted. The card has a 384-bit memory bus, 192 texture units (TMUs), 96 render outputs (ROPS) a base clock of 1GHz, and a memory clock of ~1750MHz. Nvidia is also claiming that this GPU can overclock like gangbusters, with clock speeds of up to 1.4GHz on air cooling theoretically possible. We’ll be covering overclocking performance in a separate piece in the very near future.
Unlike the first Titan, this new card doesn't offer full-speed double-precision floating point, but it does support the same voxel global illumination (VXGI) capabilities and improved H.265 decode capabilities that have already shipped in previous GTX 900-family cards.
The first 4K single-GPU?
One of Nvidia’s major talking points for the Titan X is that this new card is designed and intended for 4K play. The way the GPU is balanced tends to bear this out. The GTX 680, released three years ago, had just 32 render outputs, which are the units responsible for the output of finished pixels that are then drawn on-screen. The GTX 780 Ti, Kepler’s full workstation implementation, increased this to 48 ROPs.
http://www.extremetech.com/wp-conten...nX-640x291.jpg
The GTX 980 increased this still further, to 64 ROPs, and now the Titan X has pushed it even further — all the way to 96 render outputs. The end result of all of this pixel-pushing power is that the Titan X is meant to push 4K more effectively than any single GPU before it. Whether that's "enough" for 4K will depend, to some extent, on what kind of image quality you consider acceptable.
Competitive positioning
If you follow the GPU market with any regularity, you’re likely aware that Nvidia has been in the driver’s seat for the past six months. AMD’s Hawaii-based R9 290 and 290X may have played merry hell with the GTX 780 family back in 2013, but Nvidia’s GTX 970 and 980 reversed that situation neatly. Given the Titan X’s price point, however, there’s only one AMD GPU that even potentially competes — the dual-GPU R9 295X2.
http://www.extremetech.com/wp-conten...er-640x427.jpg
The AMD R9 295X2 has a massive 500W TDP, but it's still fairly quiet thanks to its watercooling solution.
Dual-vs-single GPU comparisons are intrinsically tricky. First, the doubled-up card is almost always the overall winner — it’s exceptionally rare for AMD or Nvidia to have such an advantage over the other that two cards can’t outpace one.
The reason dual GPUs don’t automatically sweep such comparisons is twofold: First, not all games support more than one graphics card, which leaves the second GPU effectively sitting idle. Second, even when a game does support multiple cards, it typically takes driver optimizations to fully enable it. AMD has historically lagged behind in this department compared with Nvidia — we’ll examine how Team Red has done on this front in the next few pages, and fold the results into our overall recommendations.
More...
-
Nintendo’s new plan for mobile — and what it means for the company’s consoles
http://www.extremetech.com/wp-conten...endomobile.jpg
Yesterday, Nintendo dropped a pair of bombshells on the gaming world. First, the company announced that it had partnered with the Japanese mobile game developer DeNA (pronounced "DNA") and would bring its major franchises — all of them — to mobile gaming. Second, it has begun work on a next-generation console, codenamed the Nintendo "NX."
Both of these announcements are huge shifts for the Japanese company, even if it took pains to emphasize that Nintendo remains committed to its first party franchises and its own game development efforts.
http://www.extremetech.com/wp-conten...nA-640x359.jpg
Nintendo’s partnership with DeNA
Partnering with an established company like DeNA theoretically gets Nintendo the best of both worlds. Nintendo has only barely dipped its toes into free-to-play content, while DeNA has shipped a number of games using that formula. Nintendo has no experience developing franchised titles for smartphones or tablets, whereas DeNA has plenty. But partnering with a third party gives Nintendo another potential advantage — it'll let the company effectively field-test new gaming concepts and paradigms on hardware that's at least as powerful as its own shipping systems.
Revisiting the “console quality” graphics question
One of the more annoying trends in mobile gaming over the last few years has been the tendency of hardware companies to trumpet "console quality graphics" as a selling point of mobile hardware. Multiple manufacturers have done this, but head-to-head match-ups tend to shed harsh light on mobile promises.
When it comes to Nintendo’s various consoles, however, the various mobile chips would be on much better turf. First off, there’s the Nintendo 3DS. Even the “New” 3DS is a quad-core ARMv11 architecture clocked at 268MHz with 256MB of FCRAM, 6MB of VRAM with 4MB of additional memory within the SoC, and an embedded GPU from 2005, the PICA200. Any modern smartphone SoC can absolutely slaughter that feature set, both in terms of raw GPU horsepower and supported APIs.
What about the Wii U? Here, things get a little trickier. The Wii U is built on an older process node, but its hardware is a little stranger than we used to think. The CPU is a triple-core IBM 750CL with some modifications to the cache subsystem to improve SMP, and an unusual arrangement with 2MB of L2 on one core and 512K on the other two. The GPU, meanwhile, has been identified as being derived from AMD’s HD 4000 family, but it’s not identical to anything AMD ever shipped on the PC side of the business.
http://www.extremetech.com/wp-conten...ll-640x360.png
The Wii U’s structure, with triple-core CPU
By next year, the 16nm-to-14nm SoCs we see should be more than capable of matching the Wii U's CPU performance, at least in tablet or set-top form factors. If I had to bet on a GPU that could match the HD 4000-era core in the Wii U, I'd put money on Nvidia's Tegra X1, with 256 GPU cores, 16 TMUs, and 16 ROPs, plus support for LPDDR4. It should match whatever the Wii U can do, and by 2016, we should see more mobile cores capable of the same.
http://www.extremetech.com/wp-conten...03/TegraX1.jpg
Nintendo isn’t going to want to trade off perceived quality for short-term profit. The company has always been extremely protective of its franchises —*ensuring*mobile devices (at least on the high end) are capable of maintaining Nintendo-level quality will be key to the overall effort. At the same time, adapting those franchises to tablets and smartphones gives Nintendo a hands-on look at what kinds of games people*want to play, and the ways they use modern hardware to play them.
What impact will this have on Nintendo’s business?
Make no mistake: I think Nintendo wants to remain in the dedicated console business, and the “NX” next-generation console tease from yesterday supports that. Waiting several years to jump into the mobile market meant that mobile SoCs had that much more time to mature and improve, and offer something closer to the experience Nintendo prizes for its titles.
The question of whether Nintendo can balance these two equations, however, is very much open for discussion. Compared with the Wii, the Wii U has been a disaster. As this chart from VGChartz shows, aligned by month, the Wii had sold 38 million more consoles at this point in its life than the Wii U has. The chasm between the Wii and Wii U is larger than the number of Xbox Ones and PS4s sold combined.
http://www.extremetech.com/wp-conten.../WiivsWiiU.png
Without more information, it’s difficult to predict what the Nintendo NX might look like. Nintendo could have a console ready to roll in 18-24 months, which would be well within the expected lifetimes of the PlayStation 4 and Xbox One — or, of course, it could double-down on handheld gaming and build a successor to the 3DS. Either way, the next-generation system will be significantly more powerful than anything Nintendo is currently shipping.
Pushing into mobile now gives Nintendo a way to leverage hardware more powerful than its own, and some additional freedom to experiment with game design on touch-screen hardware. But it could also signal a sea change in development focus. If the F2P model takes off and begins generating most of the company’s revenue, it’ll undoubtedly change how its handheld and console games are built — and possibly in ways that the existing player base won’t like.
Balancing these demands is going to be difficult for the company — a bunch of poorly designed F2P games might still generate short-term revenue, but could ruin Nintendo's reputation as a careful guardian of its own franchises. Failing to exploit the mechanics of the F2P market, on the other hand, could rob the company of capital it needs to transition to its new console.
More...
-
AMD-backed FreeSync monitors finally shipping
http://www.extremetech.com/wp-conten...re-348x196.jpg
For the past few years, both AMD and Nvidia have been talking up their respective solutions for improving gaming display quality. Nvidia calls its proprietary implementation G-Sync, and has been shipping G-Sync displays with partner manufacturers for over a year, while AMD worked with VESA (Video Electronics Standard Association) to build support for Adaptive Sync (aka FreeSync) into the DisplayPort 1.2a standard. Now, with FreeSync displays finally shipping as of today, it’ll soon be possible to do a head-to-head comparison between them.
Introducing FreeSync
AMD first demonstrated what it calls FreeSync back in 2013. Modern Graphics Core Next (GCN) graphics cards from AMD already had the ability to control display refresh rates — that technology is baked into the embedded DisplayPort (eDP) standard. Bringing it over to the desktop, however, was a trickier proposition. It's taken a long time for monitors that support DisplayPort 1.2a to come to market, which gave Nvidia a significant time-to-market advantage. Now that FreeSync displays are finally shipping, let's review how the technology works.
Traditional 3D displays suffer from two broad problems — stuttering and tearing. Tearing occurs when Vertical Sync (V-Sync) is disabled — the monitor draws each frame as soon as it arrives, but this can leave the screen looking fractured, as two different frames of animation are displayed simultaneously.
Turning on V-Sync solves the tearing problem, but can lead to stuttering. Because the GPU and monitor don't communicate directly, the GPU doesn't "know" when the monitor is ready to display its next frame. If a new frame isn't ready when the monitor refreshes, the previous frame is displayed again, and the result is an animation stutter, as shown below:
http://www.extremetech.com/wp-conten...c1-640x323.jpg
FreeSync solves this problem by allowing the GPU and monitor to communicate directly with each other, and adjusting the refresh rate of the display on the fly to match what’s being shown on screen.
http://www.extremetech.com/wp-conten...c2-640x327.jpg
The result is a smoother, faster gaming experience. We haven’t had a chance to play with FreeSync yet, since the monitors have only just started shipping. But if the experience is analogous to what G-Sync offers, the end result will be gorgeous.
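To make the timing difference concrete, here's a toy simulation. With V-Sync on a fixed 60Hz panel, any frame that misses a 16.7ms refresh window waits for the next one; with adaptive refresh, the panel simply updates when the frame is ready. The frame times below are made up purely for illustration:

    import math

    frame_times_ms = [14, 22, 15, 31, 16]   # hypothetical GPU render times
    refresh_ms = 1000 / 60                   # fixed 60Hz panel: one refresh every ~16.7ms

    t_gpu = 0.0
    for ft in frame_times_ms:
        t_gpu += ft
        # V-Sync: the finished frame waits for the next fixed refresh boundary
        t_vsync = math.ceil(t_gpu / refresh_ms) * refresh_ms
        # Adaptive sync: the panel refreshes as soon as the frame is ready
        print(f"ready {t_gpu:5.1f}ms | V-Sync {t_vsync:5.1f}ms | adaptive {t_gpu:5.1f}ms")

Frames that just miss a refresh boundary get held for most of an extra 16.7ms under V-Sync; with adaptive refresh they appear as soon as they're rendered, which is exactly the judder FreeSync and G-Sync are designed to remove.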
As of today, multiple displays from Acer, BenQ, LG, Nixeus, Samsung, and ViewSonic are shipping with support for FreeSync (aka DisplayPort 1.2a). A graph of those displays and their various capabilities is shown below. AMD is claiming that FreeSync monitors are coming in at least $50 cheaper than their G-Sync counterparts, which we’ll verify as soon as these monitors are widely available in-market.
Which is better — FreeSync or G-Sync?
One of the things that’s genuinely surprised me over the past year has been how ardently fans of AMD and Nvidia have defended or attacked FreeSync and G-Sync, despite the fact that it was literally impossible to compare the two standards, because nobody had hardware yet. Now that shipping hardware does exist, we’ll be taking the hardware head-to-head.
AMD, of course, is claiming that FreeSync has multiple advantages over G-Sync, including its status as an open standard (as opposed to a proprietary solution) and the fact that G-Sync can incur a small performance penalty of 3-5%. (Nvidia has previously stated the 3-5% figure; AMD's graph actually shows a much smaller performance hit.)
http://www.extremetech.com/wp-conten...it-640x248.jpg
AMD is claiming that FreeSync has no such issues, and again, we’ll check that once we have displays in hand.
There’s one broader issue of “better” we can address immediately, however, which is this: Which standard is better for consumers as a whole? Right now, FreeSync is practically a closed standard, even if AMD and VESA don’t intend that to be the case. If you want the smooth frame delivery that Nvidia offers, you buy G-Sync. If you want the AMD flavor, you buy FreeSync. There’s currently no overlap between the two. To be clear, that lack of overlap only applies to the G-Sync and FreeSync technologies themselves. A FreeSync display will function perfectly as a standard monitor if you hook it up to an Nvidia GPU, and a G-Sync monitor works just fine when attached to an AMD graphics card.
The best outcome for consumers is for AMD, Nvidia, and Intel to collectively standardize on a single specification that delivers all of these capabilities. For now, that seems unlikely. Adaptive Sync has been defined as an optional feature of both DisplayPort 1.2a and 1.3, which means manufacturers won't be forced to integrate support, and may treat the capability as a value-added luxury feature for the foreseeable future.
How this situation evolves will depend on which standard enthusiasts and mainstream customers embrace, and whether Intel chooses to add support for DP 1.2a or stay out of the fight. For now, if you buy a display with either technology, you’re locking yourself to a corresponding GPU family.
More...
-
The uncertain future of the Xbox One
http://www.extremetech.com/wp-conten...ne-640x353.jpg
It’s difficult to know what to make of the Xbox One. When Microsoft first debuted the console nearly two years ago, its vision of the future of gaming slammed face first into the rock wall of consumer expectations. Microsoft offered a second-generation motion tracker with voice commands and an “always on” capability — but consumers didn’t want it. The company declared that online and retail disc purchases would be treated the same, only to find that customers valued the ability to trade in games at a local store. It promised a future in which families and friends could share games out of a common library — but at the cost of offline play.
Two years later, many of these initial failings have been fixed. But rumors that Microsoft would like out of the Xbox business continue to swirl, prompted partly by stealthy executive departures and ongoing legal issues surrounding the Xbox 360’s disc-scratching problems.
Discussions of whether Microsoft wants to keep the Xbox One business for itself tend to devolve into arguments over whether the console is profitable (or profitable "enough"), or assume that any divestment would, by necessity, mean the end of the Xbox One as we know it. The former is inaccurate and the latter improbable. Microsoft is actually well positioned to spin the Xbox One division off to another company — Redmond has decades of experience in providing software tools that other businesses use and rely on. A spin-off might change the branding and the long-term vision, but the hardware would remain fundamentally reliant on Microsoft operating systems, APIs, and development tools, at least through the end of this generation. Integration with another major company's core services or software products could be layered on top of the existing Xbox One OS — since the box already runs a modified version of Windows, this would be fairly simple to arrange.
The argument for selling the Xbox One relies less on proclamations of doom and gloom and more on the question of where Satya Nadella wants to focus. Despite some departures and changes, I think Microsoft’s own roadmap for the Xbox One — and its integration with both Windows 10 and DirectX 12 — tell us most of what we need to know about the future of the platform.
The impact of DirectX 12, Windows 10 streaming
Microsoft made multiple high-profile announcements around the Xbox One earlier this year, when it declared Windows 10 would have the native ability to stream Xbox One games anywhere across the home network. We've advocated for this kind of feature for years, and are thrilled to see it happening — game streaming is a category that Microsoft should have owned already, thanks to its huge share of the PC market. You could even argue that giving away Windows 10 is a way to further sweeten the deal, since it increases the chance that more users will upgrade.
http://www.extremetech.com/wp-conten...10-640x328.jpg
DirectX 12 is another interesting feature that could improve the Xbox One. At GDC, Stardock's Brad Wardell argued that Microsoft, AMD, Nvidia, and Intel have all been lying about the benefits of DX12 because they don't want to admit just how badly DirectX 11 was broken. Admitting the benefits of DX12 would, according to Wardell, "mean acknowledging that for the past several years, only one of your cores was talking to the GPU, and no one wants to go, 'You know by the way, you know that multi-core CPU? It was useless for your games.' Alright? No one wants to be that guy. People wonder, saying 'Gosh, doesn't it seem like PC games have stalled? I wonder why that is?'"
http://www.extremetech.com/wp-conten...re-640x450.png
CPU performance in DirectX 12
If D3D12 offers the same performance improvements as Mantle, we’ll see it boosting gameplay in titles where the CPU, rather than the GPU, is the primary bottleneck. So far, this doesn’t appear to be the case in many games — the Sony PS4 is often somewhat faster than the Xbox One, despite the fact that the Xbox One has a higher CPU clock speed. Whether this is the result of some other programming issue is undoubtedly game-dependent, but DX12 simply doesn’t look like an automatic speed boost for the Xbox One — it’s going to depend on the game and the developer.
Taken as a whole, however, the Windows 10 integration and D3D12 work mean that two of Microsoft’s largest core businesses — its PC OS and its gaming platform — are now separated almost entirely by function rather than any kind of intrinsic capability.
Microsoft’s last, best hope
As a recent GamesIndustry.biz piece points out, Microsoft may be stuck with Xbox One for a very simple reason: There aren’t many companies that have both the capital and the interest in gaming to buy the segment at anything like a fair price. It’s entirely possible that Nadella would prefer to be out of gaming, but he’s not willing to defund and destroy the segment if he can’t find a buyer.
Regardless of whether Microsoft has explored selling the unit, the company is finally taking the kinds of steps that its customers are likely to value — steps that could allow it to leverage the strengths of PC and Xbox gaming side-by-side, rather than simply walling off the two groups of customers in separate gardens. It's not hard to see how Microsoft could eventually extend things in the other direction as well, offering PC game buyers with Xbox Ones the ability to stream PC titles to the television. True, this would compete more closely with some of Steam's features, but Microsoft has to be aware that a company other than itself controls the keys to PC gaming — and doubtless has ideas about how it could change that situation. The fact that it didn't play out this generation doesn't mean it won't, long term.
The Xbox One may have had one of the most disastrous debuts in the history of modern marketing, and it has a great deal of ground to make up, but Microsoft has proven willing to adapt the console to better suit the needs of its target audience. Taking the long view, it's hard to argue that Microsoft's system is at a greater disadvantage than the PS3 was at launch, with terrible sales, few games, and a huge price tag. The Xbox 360 led the PS3 in total sales for most of last generation, even after the RROD debacle, but in the final analysis the two platforms ended up selling virtually the same number of units.
If Microsoft’s gambits work, the Xbox One’s Windows 10 streaming and future cross-play opportunities could take it from also-ran to preferred-platform status.
More...
-
Hands on: PS4 firmware 2.50 with suspend and resume, 60fps Remote Play
http://www.extremetech.com/wp-conten...de-640x353.png
Earlier this week, Sony released PS4 firmware 2.50 dubbed “Yukimura.” There are numerous changes with this latest version, but the biggest two are definitely the addition of suspend/resume and 60fps Remote Play. Of course, we knew that this update was coming, but I wanted to know how well these features actually worked. Is the process seamless? Does the higher frame rate cause any problems?
More...
-
DirectX 12, LiquidVR may breathe fresh life into AMD GPUs thanks to asynchronous shading
http://www.extremetech.com/wp-conten...rd-640x353.jpg
With DirectX 12 coming soon with Windows 10, VR technology ramping up from multiple vendors, and the Vulkan API already debuted, it’s an exceedingly interesting time to be in PC gaming. AMD’s GCN architecture is three years old at this point, but certain features baked into the chips at launch (and expanded with Hawaii in 2013) are only now coming into their own, thanks to the improvements ushered in by next-generation APIs.
One of the critical technologies underpinning this argument is the set of Asynchronous Compute Engines (ACEs) that are part of every GCN-class video card. The original HD 7900 family had two ACEs per GPU, while AMD's Hawaii-class hardware bumped that even further, to eight.
http://www.extremetech.com/wp-conten...ne-640x359.jpg
AMD’s Hawaii, Kaveri, and at least the PS4 have eight ACE’s. The Xbox One may be limited to just two, but does retain the capability.
AMD’s Graphics Core Next (GCN) GPUs are capable of asynchronous execution to some degree, as are Nvidia GPUs based on the GTX 900 “Maxwell” family. Previous Nvidia cards like Kepler and even the GTX Titan were not.
What’s an Asynchronous Command Engine?
The ACE units inside AMD's GCN architecture are designed for flexibility. The chart below explains the difference — instead of being forced to execute a single queue in a pre-determined order, even when it makes no sense to do so, tasks from different queues can be scheduled and completed independently. This gives the GPU some limited ability to execute tasks out of order — if the GPU knows that a time-sensitive operation that only needs 10ns of compute time is in the queue alongside a long memory copy that isn't particularly time-sensitive, but will take 100,000ns, it can pull the short task, complete it, and then run the longer operation.
http://www.extremetech.com/wp-conten...on-640x197.jpg
Asynchronous vs. synchronous threading
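The 10ns-versus-100,000ns example above is easy to model. Here's a minimal sketch of the queuing logic being described — not AMD's scheduler, and the task names are made up:

    # Two pending tasks: a long, latency-tolerant copy and a short, time-sensitive compute job.
    tasks = [("memory_copy", 100_000), ("urgent_compute", 10)]   # (name, duration in ns)

    def finish_times(order):
        t, out = 0, {}
        for name, duration_ns in order:
            t += duration_ns
            out[name] = t
        return out

    in_order = finish_times(tasks)                                  # single queue, fixed order
    out_of_order = finish_times(sorted(tasks, key=lambda x: x[1]))  # short, urgent work first

    print("in-order:    ", in_order)      # compute job stuck behind the copy: done at 100,010ns
    print("out-of-order:", out_of_order)  # compute job done at 10ns; copy still done by 100,010ns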
The point of using ACE’s is that they allow the GPU to process and execute multiple command streams in parallel. In DirectX11, this capability wasn’t really accessible — the API was heavily abstracted, and multiple developers have told us that multi-threading support in DX11 was essentially broken from Day 1. As a result, there’s been no real way to tell the graphics card to handle graphics and compute in the same workload.
http://www.extremetech.com/wp-conten...es-640x333.jpg
GPU pipelines in DX11 vs. DX12
AMD’s original GCN hardware may have debuted with just two ACEs, but AMD claims that it added six ACE units to Hawaii as part of a forward-looking plan, knowing that the hardware would one day be useful. That’s precisely the sort of thing you’d expect a company to say, but there’s some objective evidence that Team Red is being honest. Back when GCN and Nvidia’s Kepler were going head to head, it quickly became apparent that while the two companies were neck and neck in gaming, AMD’s GCN was far more powerful than Nvidia’s GK104 and GK110 in many GPGPU workloads. The comparison was particularly lopsided in cryptocurrency mining, where AMD cards were able to shred Nvidia hardware thanks to a more powerful compute engine and support for some SHA-1 functions in hardware.
When AMD built Kaveri and the SoCs for the PS4 and Xbox One, it included eight ACEs in Kaveri and the PS4 (the Xbox One may have just two). The thinking behind that move was that adding more asynchronous compute capability would allow programmers to use the GPU's computational horsepower more effectively. Physics and certain other types of in-game calculations, including some of the work that's done in virtual reality simulation, can be handled in the background.
http://www.extremetech.com/wp-conten...rf-640x324.jpg
Asynchronous shader performance in a simulated demo.
AMD’s argument is that with DX12 (and Mantle / Vulkan), developers can finally use these engines to their full potential. In the image above, the top pipeline is the DX11 method of doing things, in which work is mostly being handled serially. The bottom image is the DX12 methodology.
Whether programmers will take advantage of these specific AMD capabilities is an open question, but the fact that both the PS4 and Xbox One have a full set of ACEs to work with suggests that they may. If developers are writing code to execute on GCN hardware already, moving that support over to DX12 and Windows 10 is no big deal.
http://www.extremetech.com/wp-conten...ch-640x300.png
A few PS4 titles and just one PC game use asynchronous shaders now, but that could change.
Right now, AMD has only released information on the PS4's use of asynchronous shaders, but that doesn't mean the Xbox One can't use them. It's possible that the DX12 API push that Microsoft is planning for that console will add the capability.
http://www.extremetech.com/wp-conten...nc-640x312.jpg
AMD is also pushing ACE’s as a major feature for its LiquidVR platform — a fundamental capability that it claims will give Radeon cards an edge over their Nvidia counterparts. We’ll need to see final hardware and drivers before making any such conclusions, of course, but the compute capabilities of the company’s cards are well established. It’s worth noting that while AMD did have an advantage in this area over Kepler, which had only one compute and one graphics pipeline, Maxwell has one graphics pipeline and 32 compute pipes, compared to just 8 AMD ACEs. Whether this impacts performance or not in shipping titles is something we’ll only be able to answer once DX12 games that specifically use these features are in-market.
The question, from the end-user perspective, obviously boils down to which company is going to offer better performance (or a better price/performance ratio) under the next-generation DX12 API. It's far too early to make a determination on that front — recent DirectX 12 benchmarks in 3DMark put AMD's R9 290X out in front of Nvidia's GTX 980, while Star Swarm results from earlier this year reversed that result.
What is clear is that DX12 and Vulkan are reinventing 3D APIs and, by extension, game development in ways we haven’t seen in years. The new capabilities of these frameworks are set to improve everything from multi-GPU configurations to VR displays. Toss in features like 4K monitors and FreeSync / G-Sync support, and it’s an exciting time for the PC gaming industry.
More...
-
Microsoft targets Halo Online modders with DMCA takedown
http://www.extremetech.com/wp-conten...ne-640x353.jpg
Nobody likes to be told they can’t have something just because they live in the wrong part of the world. Case in point — Microsoft has earned the ire of gamers across the globe for its decision to make the upcoming free-to-play Halo Online PC title available only in Russia. Modders have gotten their hands on the game, though, and are busy removing the region lock. In response, Microsoft is unleashing the lawyers.
Halo Online is simpler than modern Halo games on the Xbox 360 and Xbox One. It’s based on a heavily modified Halo 3 engine that has been tweaked to run well on low-power PCs. That said, the gameplay videos of Halo Online look perfectly serviceable. The game is played entirely online, so it’s multiplayer only. Microsoft doesn’t plan to create any sort of campaign for Halo Online.
People are not taking kindly to Microsoft's decision to launch Halo Online in closed beta for Russia only, but you can guess at the reasoning. The rates of piracy in Russia are higher than in North America or Europe, but free-to-play games tend to pull in some revenue from people who would otherwise just grab all their games from BitTorrent. The low spec requirements will also expand the user base dramatically. Microsoft would likely offer players the option of buying additional equipment and accessories in the game for real money, but there are no details on what the cost structure will be yet.
No sooner did Microsoft announce Halo Online than a leaked copy of the game showed up online. With access to the game, modders set to work getting around the region lock. It wasn't long before a custom launcher called ElDorito (a joke based on the game's official launcher, called ElDorado) showed up on GitHub. ElDorito is intended to make the game playable everywhere, but Microsoft doesn't want that. And this is how the lawyers came to be involved.
The ElDorito GitHub listing was hit with a DMCA takedown notice by Microsoft's legal team yesterday. In the document, Microsoft asserts a copyright claim to ElDorito and demands it be taken down. GitHub is obliged to comply with any DMCA letters it gets, but the ElDorito team can appeal if they choose. It's important to note that ElDorito isn't the actual game — it's just a launcher created by the community. It's still possible it uses something from the official launcher or game, so Microsoft's claim could be completely valid.
The game files needed to play Halo Online are floating around online in the form of a 2.1GB ZIP file. As this is an online game, Microsoft can probably block leaked versions going forward. Still, modders aren’t going to stop developing workarounds until there’s a legitimate way to play Halo Online. Microsoft has said that any expansion of Halo Online to other markets would require changes to the experience, and it’s not focusing on that right now. Maybe someday, though.
More...
-
Relying on server connections is ruining video games
http://www.extremetech.com/wp-conten...4-640x353.jpeg
As time goes on, more games are relying on online elements. And while that makes perfect sense for multiplayer games, single-player games are being impacted as well. When used sparingly, it's no big deal, but developers and publishers seem totally willing to sacrifice the user experience for online hooks. And unsurprisingly, consumers aren't happy with the situation. So, when are developers going to get the picture, and stop demanding online participation?
At the end of March, 2K Sports shut down the servers for NBA 2K14. While it's not particularly surprising to see a sports game's multiplayer mode shut down after a year or so, the server shutdown left many single-player save files unusable. If your "career mode" save used the online hooks, it stopped working entirely. Users do have the option of going offline exclusively, but that means starting over completely.
The reaction from users was incredibly harsh, and 2K Sports quickly turned the servers back on. Instead of a paltry 16-month window, 2K Sports promised 18 to 27 months of online support. It's better than nothing, but that's little more than a band-aid on a gaping wound. Unless the devs push out a patch to convert online saves to offline saves, players will still be left out in the cold eventually.
http://www.extremetech.com/wp-conten...e-Servers.jpeg
Of course, this problem goes well beyond sports titles. Infamously, SimCity and Diablo III both required Internet connections at launch. After serious performance issues and consumer outcry, offline modes were later patched into both of those titles. Even after these massive failures of the always-on ideology, it seems developers and publishers are still willing to inconvenience the players to push online functionality.
Now, not all online connectivity is bad. For example, I think BioWare’s Dragon Age Keep is a very clever solution to the save game problem. Despite my enthusiasm, it’s still a flawed implementation. If your Internet connection is down or the Keep servers are offline, you can’t customize your world state at all. At some point, there was talk about side-loading world states over USB, but that feature never materialized. And when EA pulls the plug on the Keep someday, the game will be significantly worse off.
Obviously, there are benefits for everyone when online functionality is included as an optional feature. But locking major single-player functionality behind an online gate almost always ends in heartbreak. What we need is a balance between online and offline, but the industry continues to fumble on this issue. How much consumer outrage is it going to take before the developers and publishers wise up?
More...
-
The EFF, ESA go to war over abandonware, multiplayer gaming rights
http://www.extremetech.com/wp-conten...FF-348x196.jpg
One of the murky areas of US copyright law where user rights, corporate ownership, and the modern digital age sometimes collide is the question of abandonware. The term refers to software for which support is no longer available and covers a broad range of circumstances — in some cases, the original company no longer exists, and the rights to the product may or may not have been acquired by another developer, who may or may not intend to do anything with them.
The EFF (Electronic Frontier Foundation) has filed a request with the Library of Congress, asking that body to approve an exception to the DMCA (Digital Millennium Copyright Act) that would allow “users who wish to modify lawfully acquired copies of computer programs” to do so in order to guarantee they can continue to access the programs — and the Entertainment Software Association, or ESA, has filed a counter-comment strongly opposing such a measure.
Let’s unpack that a bit.
The Digital Millennium Copyright Act, or DMCA, is a copyright law in the United States. One of the areas of law that it deals with is specifying what kinds of access end-users and owners of both software and some computer hardware are legally entitled to. One of its most controversial passages establishes that end users have no right to break or bypass any form of copy protection or security without the permission of the rights holder, regardless of how effective that protection actually is. In other words, a company that encrypts a product so poorly that a hacker can bypass it in seconds can still sue the individual for disclosing how broken their system actually is.
The Librarian of Congress has the authority to issue exemptions to this policy, with the instruction that they do so when evidence demonstrates that access-control technology has had an adverse impact on those seeking to make lawful use of protected works. Exemptions expire after three years, and must be resubmitted at that time (this caused problems when the exemption the Librarian granted for jailbreaking cell phones in 2010 was not renewed in 2013).
http://www.extremetech.com/wp-conten...wn-640x431.jpg
The GameSpy multiplayer service shutdown flung many titles into legal limbo and uncertain futures
The EFF has requested that the Library of Congress allow legal owners of video games that require authentication checks with a server — or owners who wish to continue playing multiplayer — to remove such restrictions and operate third-party matchmaking servers in the event that the original publisher ceases to exist or to support the title. The request covers both single-player and multiplayer games, and defines "no longer supported by the developer" as "We mean the developer and its authorized agents have ceased to operate authentication or matchmaking servers."
This is a significant problem across gaming, and the shift to online and cloud-based content has only made the problem worse. The EFF does not propose that this rule apply to MMOs like EVE Online or World of Warcraft, but rather to games with a distinct multiplayer component that were never designed to function as persistent worlds. To support its claim that this is an ongoing issue, the EFF notes that EA typically only supports online play for its sports titles for 1.5 to 2 years, and that more than 150 games lost online support in 2014 across the entire industry.
The ESA staunchly opposes
The ESA — which was on record as supporting SOPA — has filed a joint comment with the MPAA and RIAA, strongly opposing such measures. In its filing, the ESA argues the following:
- The EFF’s request should be rejected out of hand because “circumvention related to videogame consoles inevitably increases piracy.”
- Servers aren’t required for single-player gameplay. In this alternate universe, Diablo III, Starcraft 2, and SimCity don’t exist.
- Video game developers charge separate fees for multiplayer, which means consumers who buy games for multiplayer aren’t harmed when multiplayer is cancelled. Given that the number of games that don’t charge for multiplayer vastly exceeds those that do, I’m not sure who proofed this point.
- The purpose of such practices is to create the same experience the multiplayer mode once provided, which means that it can't be derivative, which means there are no grounds to allow anyone to play a multiplayer game after the initial provider has decided to stop supporting the server infrastructure.
This line of thinking exemplifies the corporation-uber-alles attitude that pervades digital rights policy in the 21st century. The EFF is not asking the Library of Congress to force companies to support an unprofitable infrastructure. It’s not asking the Library to compel the release of source code, or to bless such third-party efforts, or to require developers to support the development of a replacement matchmaking service in any way.
The EFF’s sole argument is this: Users who legally paid for a piece of software should have the right to try to create a replacement server infrastructure in the event that the company who operated the official version quits. The ESA, in contradiction, argues that multiplayer function over the Internet “is not a ‘core’ functionality of the video game, and permitting circumvention to access such functionality would provide the user greater benefits than those bargained and paid for. Under these facts, consumers are not facing any likely, substantial, adverse effects on the ability to play the games they have purchased.”
http://www.extremetech.com/wp-conten...04/SimCity.jpg
The disastrous debut of SimCity demonstrated everything wrong with always-online play.
Try to wrap your head around that one, if you can. If you bought a game to play multiplayer, and EA disables the multiplayer function, the ESA argues that this does not adversely impact your ability to play the game, despite arguing in the same sentence that restoring your ability to play the game’s multiplayer would confer upon you greater benefit than you initially paid for.
It is literally impossible to simultaneously receive greater benefit than you paid for if functionality is restored while losing nothing if that functionality is denied.
The real reasoning is a great deal simpler: If you’re happy playing an old game, you might be less likely to buy a new one — and in an industry that’s fallen prey to kicking out sequels on a yearly basis, regardless of the impact on game quality, the phantom of lost sales is too frightening to ignore.
More...
-
Intel’s erratic Core M performance leaves an opening for AMD
http://www.extremetech.com/wp-conten...-M-348x196.jpg
When Intel announced its 14nm Core M processor, it declared that this would be the chip that eliminated consumer perceptions of an x86 "tax" once and for all. Broadwell, it was said, would bring big-core x86 performance down to the same fanless, thin-and-light form factors that Android tablets used, while simultaneously offering performance no Android tablet could match. It was puzzling, then, to observe that some of the first Core M-equipped laptops, including Lenovo's Yoga 3 Pro, didn't review well and were dinged for being pokey to downright sluggish in some cases.
A new report from Anandtech delves into why this is, and comes away with some sobering conclusions. Ever since Intel built Turbo Mode into its processors, enthusiasts have known that “Turbo” speeds were best-case estimates, not guarantees. If you think about it, the entire concept of Turbo Mode was a brilliant marketing move. Instead of absolutely guaranteeing that a chip will reach a certain speed at a given temperature or power consumption level, simply establish that frequency range as a “maybe” and push the issue off on OEMs or enthusiasts to deal with. It helped a great deal that Intel set its initial clocks quite conservatively. Everyone got used to Turbo Mode effectively functioning as the top-end frequency, with the understanding that frequency stair-stepped down somewhat as the number of threads increased.
Despite these qualifying factors, users have generally been able to expect that a CPU in a Dell laptop will perform identically to that same CPU in an HP laptop. These assumptions aren’t trivial — they’re actually critical to reviewing hardware and to buying it.
The Core M offered OEMs more flexibility in building laptops than ever before, including the ability to track the device's skin temperature and adjust SoC performance accordingly. But those tradeoffs have created distinctly different performance profiles for devices that should be nearly identical to one another. In many tests, the Intel Core M 5Y10 — a chip with an 800MHz base frequency and a 2GHz top clock — is faster than a Core M 5Y71 with a base frequency of 1.2GHz and a max turbo frequency of 2.9GHz. In several cases, the gaps in both CPU and GPU workloads are quite significant — and favor the slower processor.
http://www.extremetech.com/wp-conten...74-640x344.png
While this issue is firmly in the hands of OEMs and doesn’t reflect a problem with Core M as such, it definitely complicates the CPU buying process. The gap between two different laptops configured with a Core M 5Y71 reached as high as 12%, but the gap between the 5Y10 and the 5Y71 was as high as 36% in DOTA 2. The first figure is larger than we like, while the second is ridiculous.
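One way to see how the nominally faster chip loses is a toy throttling model. The base and turbo clocks below are the 5Y10 and 5Y71 figures cited above; the sustained-clock profiles are invented purely for illustration, since actual behavior depends entirely on each OEM's thermal design:

    # Average effective clock over a 5-minute benchmark under two hypothetical throttle profiles.
    def average_clock(profile):
        total = sum(seconds * ghz for seconds, ghz in profile)
        return total / sum(seconds for seconds, _ in profile)

    # Core M 5Y71 (1.2GHz base / 2.9GHz turbo) in a tightly thermally limited chassis:
    hot_5y71 = [(30, 2.9), (270, 1.2)]      # short burst, then pinned near base clock
    # Core M 5Y10 (800MHz base / 2.0GHz turbo) in a design with more thermal headroom:
    cool_5y10 = [(300, 1.8)]                # sustains close to its turbo for the whole run

    print(f"5Y71: {average_clock(hot_5y71):.2f} GHz average")   # ~1.37 GHz
    print(f"5Y10: {average_clock(cool_5y10):.2f} GHz average")  # 1.80 GHz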
None of this means that the Core M is a bad processor as such. But it’s clear that its operation and suitability for any given task is far more temperamental than has historically been the case. Even a 12% difference between two different OEMs is high for our taste — if you can’t trust that the CPU you buy is the same as the core you’d get from a different manufacturer, you can’t trust much about the system.
Is this an opportunity for AMD’s Carrizo?
Officially, AMD’s Carrizo and Intel’s Core M shouldn’t end up fighting over the same space; the Core M is intended for systems that draw up to 6W of power, and Carrizo’s lowest known power envelope is a 12W TDP. That doesn’t mean, however, that AMD can’t wring some marketing and PR margins out of the Core M’s OEM-dependent performance.
http://www.extremetech.com/wp-conten...C6-640x330.jpg
When AMD talked about Carrizo at ISSCC, it didn’t just emphasize new features like skin-temperature monitoring, it also discussed how each chip would use Adaptive Voltage and Frequency Scaling, as opposed to Dynamic Voltage and Frequency Scaling. AVFS allows for much finer-grained power management across the entire die — it requires incorporating more control and logic circuitry, but it can give better power savings and higher frequency headroom as a result.
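The payoff from that extra control circuitry is easy to ballpark. A DVFS table has to carry enough voltage guard-band to cover the slowest silicon at each frequency, while AVFS lets each individual die run at the voltage it actually needs. Since dynamic power scales roughly with V²·f, even a modest guard-band reduction matters. The voltages below are illustrative guesses, not AMD figures:

    # Dynamic power ~ C * V^2 * f. Hold frequency constant and compare voltages.
    def relative_power(voltage, ref_voltage):
        return (voltage / ref_voltage) ** 2

    worst_case_v = 1.00    # DVFS table voltage, set for the slowest parts at this frequency
    this_die_v = 0.93      # what this particular die actually needs (AVFS-measured)

    print(f"Power vs. worst case: {relative_power(this_die_v, worst_case_v):.2f}x")
    # 0.93^2 ~= 0.86 -- roughly a 14% dynamic-power saving at the same clock.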
If AVFS offers OEMs more consistent performance and better characteristics (albeit in a higher overall power envelope), AMD may have a marketing opportunity to work with — assuming, of course, that it can ship Carrizo in the near future and that the chip is competitive in lower power bands to start with. While that’s a tall order, it’s not quite as tall as it might seem — AMD’s Kaveri competed more effectively against Intel at lower power than in higher-power desktop form factors.
Leaving AMD out of the picture, having seen both the Core M and the new Core i5-based Broadwells, I'd have to take a newer Core i5, hands down. Core M may allow for an unprecedented level of thinness, but the achievement of stuffing an x86 core into a form factor this small doesn't outweigh the loss of ports, performance, and battery life — at least, not for me. Feel differently? Sound off below.
More...
-
Leaked details, if true, point to potent AMD Zen CPU
http://www.extremetech.com/wp-conten...er-348x196.jpg
For more than a year, information on AMD's next-generation CPU architecture, codenamed Zen, has tantalized the company's fans — and those who simply want a more effective competitor against Intel. Now, the first concrete details have begun to appear. And if they're accurate, the next-generation chip could pack a wallop.
Bear in mind, this is a single leaked slide of the highest-end part. Not only could details change dramatically between now and launch, but the slide itself might not be accurate. Let’s take a look:
http://www.extremetech.com/wp-conten...en-640x535.jpg
According to Fudzilla, the new CPU will offer up to 16 Zen cores, with each core supporting up to two threads for a total of 32 threads. We've heard rumors that this new core uses Simultaneous Multithreading, as opposed to the Clustered Multi-Threading that AMD debuted in the Bulldozer family and has used for the last four years.
Each CPU core is backed by 512K of L2 cache, with 32MB of L3 cache across the entire chip. Interestingly, the L3 cache is shown as separate 8MB blocks rather than a unified design. This suggests that Zen inherits its L3 structure from Bulldozer, which used a similar approach — though hopefully the cache has been overhauled for improved performance. The integrated GPU also supposedly offers double-precision floating point at 1/2 single-precision speed.
Supposedly the core supports up to 16GB of attached HBM (High Bandwidth Memory) at 512GB/s, plus a quad DDR4 controller with built-in DDR4-3200 capability, PCIe 3.0, and SATA Express support.
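For a sense of scale, here's the conventional bandwidth arithmetic behind those two claims (peak bandwidth = channels × bus width × transfer rate):

    # Quad-channel DDR4-3200: 4 channels * 64-bit bus * 3200 MT/s
    ddr4_gbs = 4 * 64 / 8 * 3200 / 1000
    print(f"Quad-channel DDR4-3200: {ddr4_gbs:.1f} GB/s")   # 102.4 GB/s

    # The claimed HBM interface is quoted directly at 512 GB/s -- five times the DDR4 figure,
    # which is why building it into a server CPU is such an aggressive target.
    print(f"HBM claim: 512 GB/s ({512 / ddr4_gbs:.1f}x the DDR4 bandwidth)")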
Too good to be true?
The CPU layout shown above makes a lot of sense. We’re clearly looking at a modular part, and AMD has defined one Zen “module” as consisting of four CPU cores, eight threads, 2MB of L2, and an undoubtedly-optional L3 cache. But it’s the HBM interface, quad-channel DDR4, and 64 lanes of PCIe 3.0 that raise my eyebrows.
Here’s why: Right now, the highest-end servers you can buy from Intel pack just 32 PCI-Express lanes. Quad-channel DDR4 is certainly available, but again, Intel’s high-end servers support 4x DDR4-2133. Server memory standards typically lag behind desktops by a fair margin. It’s not clear when ECC DDR4-3200 will be ready for prime time. That’s before we get to the HBM figures.
Make no mistake, HBM is coming, and integrating it on the desktop and in servers would make a huge difference — but 16GB of HBM memory is a lot. Furthermore, building a 512GB/s memory interface into a server processor at the chip level is another eyebrow-arching achievement. For all of HBM's potential — and it has a lot of potential — that's an extremely ambitious target for a CPU that's supposed to debut in 12 to 18 months, even in the server space.
Nothing in this slide is impossible, and if AMD actually pulled it off while hitting its needed IPC and power consumption targets, it would have an absolutely mammoth core. But the figures on this slide are so ambitious, it looks as though someone took a chart of all the most optimistic predictions that’ve been made about the computing market in 2016, slapped them together on one deck, and called it good.
I’ll be genuinely surprised if AMD debuts a 16-core chip with a massive integrated graphics processor, and 16GB of HBM memory, and 64 lanes of PCI-Express, and a revamped CPU core, and a new quad-channel DDR4 memory controller,*and*a TDP that doesn’t crack 200W for a socketed processor.
But hey — you never know.
More...
-
US government blocks Intel, Chinese collaboration over nuclear fears, national security
http://www.extremetech.com/wp-conten...-2-640x353.jpg
A major planned upgrade to the most powerful supercomputer in the world, the Chinese-built Tianhe-2, has been effectively canceled by the US government. The original plan was to boost its capability, currently at ~57 petaflops (PFLOPS), up to 110 PFLOPS using faster Xeon processors and Xeon Phi add-in boards. The Department of Commerce has now scuttled that plan. Under US law, the DoC is required to regulate sales of certain products if it has information that indicates the hardware will be used in a manner “contrary to the national security or foreign policy interests of the United States.”
http://www.extremetech.com/wp-conten...sification.png
According to the report, the license agreement granted to China's National University of Defense Technology was provisional and subject to case-by-case agreement. Intel, in other words, never had free rein to sell hardware in China. Its ability to do so was dependent on what the equipment was used for. The phrase "nuclear explosive activities" is defined as including: "Research on or development, design, manufacture, construction, testing, or maintenance of any nuclear explosive device, or components or subsystems of such a device."
In order to levy such a classification, the government is required to certify that it has “more than positive knowledge” that the new equipment would be used in such activities. But the exact details remain secret for obvious reasons. For now, the Tianhe-2 will remain stuck at its existing technology level. Intel, meanwhile, will still have a use for those Xeon Phis — the company just signed an agreement with the US government to deliver the hardware for two supercomputers in 2016 and 2018.
Implications and politics
There are several ways to read this new classification. On the one hand, it's unlikely to harm Intel's finances — Intel had sold hardware to the Tianhe-2 project at a steep discount according to many sources, and such wins are often valued for their prestige and PR rather than their profitability. Nor does this restriction open the door for another company, even if such a substitution were possible — it's incredibly unlikely that IBM, Nvidia, or AMD could step in to offer an alternate solution.
It’s also possible that this classification is a way of subtly raising pressure on the Chinese as regards to political matters in the South China Sea. China has been pumping sand on to coral atolls in the area, in an attempt to*boost its*territorial claim to the Spratly Islands. The Spratly Islands are claimed by a number of countries, including China, which has argued that its borders and territorial sovereignty should extend across the area. Other nations, including the Philippines, Brunei, Vietnam, and the US have taken a dim view of this. Refusing to sell China the parts to upgrade its supercomputer could be a not-so-subtle message about the potential impact of Chinese aggression in this area.
http://www.extremetech.com/wp-conten...on-640x339.jpg
China’s Loongson processor. The latest version is a eight-core chip built on 28nm.
Restricting China’s ability to buy high-end x86 hardware could lead the country to invest more heavily in building its own CPU architectures and investing with alternative companies. But this was almost certainly going to happen, no matter what. China is ambitious and anxious to create its own homegrown OS and CPU alternatives. The Loongson CPU has continued to evolve over the last few years, and is reportedly capable of executing x86 code at up to 70% of the performance of native binaries thanks to hardware-assisted emulation. Tests on the older Loongson 2F core showed that it lagged ARM and Intel in power efficiency, but the current 3B chip is several generations more advanced. These events might spur China to invest even more heavily in that effort, even though the chip was under development long before these issues arose.
More...
-
New Samsung 840 Evo firmware will add ‘periodic refresh’ capability
http://www.extremetech.com/wp-conten...re-348x196.jpg
When Samsung shipped the 840 Evo, it seemed as though the drive struck a perfect balance between affordability and high-speed performance. Those impressions soured somewhat after it became clear that many 840 Evos suffered performance degradation when accessing older data. Samsung released a fix last year that was supposed to solve the problem for good, but a subset of users have begun experiencing issues again. Earlier this year, the company announced that a second fix was in the works.
Tech Report now has some details on how the company’s second attempt to repair the problem will work. Apparently, the upcoming firmware will add a “periodic refresh” function to the drive. When the drive detects that data stored on it has reached a certain age, it will rewrite that data in the background. This fits with what we heard back when the problem was first uncovered — some users were able to solve it by copying the data to a different part of the drive.
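Conceptually, the new firmware behaves something like the background pass sketched below. This is a rough illustration of the "periodic refresh" idea as described, not Samsung's actual controller code; the 90-day threshold and the helper callbacks are invented for the example:

    import time

    REFRESH_AGE_DAYS = 90          # hypothetical threshold for "old" data
    SECONDS_PER_DAY = 86_400

    def refresh_pass(blocks, read_block, rewrite_block):
        """Rewrite any block whose data hasn't been reprogrammed recently.

        `blocks` maps a block id to the timestamp of its last program operation;
        read_block/rewrite_block stand in for the controller's NAND operations.
        """
        now = time.time()
        for block_id, last_programmed in blocks.items():
            age_days = (now - last_programmed) / SECONDS_PER_DAY
            if age_days > REFRESH_AGE_DAYS:
                data = read_block(block_id)      # still readable, but cell voltages drift with age
                rewrite_block(block_id, data)    # reprogramming restores the voltage margins
                blocks[block_id] = now

Because a pass like this can only run while the drive is idle and powered, a machine that sits unplugged for months never gets refreshed — hence Samsung's note about running its Magician software manually in that case.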
http://www.extremetech.com/wp-conten...02/TLCNAND.png
The original problem with the 840 Evo was traced to shifting cell voltage levels. The drive controller expects cell voltages to operate within a specific range. As the NAND flash aged without being refreshed, those voltage levels passed outside their original programmed tolerances, and the SSD had trouble reading data from the affected sectors. The last firmware solution that Samsung released was supposed to solve the problem by reprogramming the range of values that the NAND management algorithms expected and could tolerate.
This solution seems to be of a different order. Instead of patching the problem directly by addressing the corner cases, Samsung is adding a refresh feature to prevent the situations that cause the issue in the first place. While this may be the smarter way of fixing whatever is throwing off the results, it does raise some questions: Does Samsung’s TLC NAND have a long-term problem with data retention — and will this new solution hurt long-term drive longevity?
The good news, at least on the longevity front, is that even TLC-based drives have proved capable of absorbing hundreds of terabytes’ worth of writes, well above their listed parameters. Rewriting a relatively small portion of the drive’s total capacity every few months shouldn’t have a meaningful impact on lifespan. Samsung does note, however, that if you leave the drive powered off for months at a time, you may need to run its Samsung Magician software — the refresh algorithm is designed to run when the system is idle and can’t operate if the machine is powered off.
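As a rough back-of-envelope check, with an assumed 250GB drive, a hypothetical full refresh every three months, and a conservative 300TB endurance figure, refresh traffic alone would take centuries to wear the NAND out:

```python
# Back-of-envelope longevity check. Drive size, refresh interval, and the
# endurance figure are assumptions for illustration only.
drive_size_gb = 250        # assumed drive capacity
refreshes_per_year = 4     # assumed: one full-drive refresh every ~3 months
endurance_tb = 300         # conservative "hundreds of TB" endurance figure

extra_writes_tb_per_year = drive_size_gb * refreshes_per_year / 1000
years_to_exhaust = endurance_tb / extra_writes_tb_per_year

print(f"Refresh overhead: {extra_writes_tb_per_year:.1f} TB of writes per year")
print(f"Years to exhaust endurance on refresh alone: {years_to_exhaust:.0f}")
# -> 1.0 TB/year and roughly 300 years; refresh writes are negligible.
```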
It’s not clear what the future of TLC NAND is at this point. Samsung has introduced the TLC-backed 850 Evo, but that drive uses 3D V-NAND built on a roughly 40nm process node. Larger (older) process geometries are actually better for NAND flash when it comes to reliability and longevity, which should help keep this problem from appearing in future products. To date, very few manufacturers have pushed TLC NAND to cutting-edge 2D (planar) geometries — it may simply not be worth it for most products.
More...
-
Xbox One vs. PS4: How the hardware specs compare (updated for 2015)
http://www.extremetech.com/wp-conten...51-640x352.jpg
It’s been over a year and a half since the Xbox One and PS4 first debuted here in the U.S. In that time, both have earned their keep as the standard-bearers of the current console generation. Microsoft realized some months after release that it needed a $400 version, price-competitive with the PS4, that lacked the Kinect camera, and has since remedied what was once a bit of a lopsided, apples-and-oranges comparison.
Since then, both the Xbox One and PS4 have sold pretty well, with the edge on Sony’s side — although January’s $50 price cut for the Kinect-less Xbox One is helping Microsoft catch up. That said, how do they directly compare with each other in 2015? If you’re thinking about buying one of these two consoles — or just want ammunition for bragging rights — here’s what you need to know.
One note before we get started: Unlike all previous console generations, the PS4 and Xbox One are almost identical hardware-wise. With an x86 AMD APU at the heart of each, the Sony and Microsoft consoles are essentially PCs — and their hardware specs, and thus relative performance, can be compared in the same way you would compare two x86-based laptops or ARM-based Android tablets. Read on for our Xbox One-versus-PS4 hardware specs comparison.
PS4 vs. Xbox One: CPUs compared
http://www.extremetech.com/wp-conten...oc-294x300.jpg
For the PS4 and Xbox One, Microsoft and Sony both opted for a semi-custom AMD APU — a 28nm part fabricated by TSMC that features an eight-core Jaguar CPU, paired with a Radeon 7000-series GPU. (We’ll discuss the GPU in the next section.) The PS4 and Xbox One CPUs are virtually identical, except that the Xbox One’s is clocked at 1.75GHz while the PS4’s runs at 1.6GHz.
The Jaguar CPU core itself isn’t particularly exciting. In PCs, Jaguar is used in AMD’s Kabini and Temash parts, which were aimed at laptops and tablets respectively. If you’re looking for a tangible comparison, CPUs based on the Jaguar core are roughly comparable to Intel’s Bay Trail Atom. With eight cores (as opposed to two or four in a regular Kabini-Temash setup), both the PS4 and Xbox One will have quite a lot of CPU power on tap. The large core count allows both consoles to excel at multitasking — important for modern living room and media center use cases, and doubly so for the Xbox One, which runs two different operating systems side-by-side. Ultimately, though, despite the Xbox One having a slightly faster CPU, it makes little difference to either console’s relative games performance.
http://www.extremetech.com/wp-conten...ds-640x541.jpg
PS4 vs. Xbox One: GPUs compared
Again, by virtue of being an AMD APU, the Xbox One and PS4 GPUs are technologically very similar — with the simple difference that the PS4 GPU is larger. In PC terms, the Xbox One has a GPU that’s similar to the entry-level Bonaire GPU in the older Radeon HD 7790, while the PS4 is outfitted with the midrange Pitcairn that can be found in the HD 7870. In numerical terms, the Xbox One GPU has 12 compute units (768 shader processors), while the PS4 has 18 CUs (1152 shaders). The Xbox One is slightly ahead on GPU clock speed (853MHz vs. 800MHz for the PS4).
In short, the PS4’s GPU is — on paper — 50% more powerful than the Xbox One’s. The Xbox One’s slightly higher GPU clock speed ameliorates some of the difference, but really, the PS4’s 50% higher CU count is a serious advantage for the Sony camp. Furthermore, Microsoft says that 10% of the Xbox One’s GPU is reserved for Kinect. Games on the PS4 will have a lot more graphics power on tap. Beyond clock speeds and core counts, both GPUs are identical. They’re both based on the Graphics Core Next (GCN) architecture, and thus support OpenGL 4.3, OpenCL 1.2, and Direct3D 11.2.
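You can put rough numbers on that on-paper gap with the standard GCN throughput math (shaders x 2 FLOPs per clock x clock speed); the results below are theoretical peaks, not measured game performance.

```python
# Theoretical single-precision throughput for a GCN GPU:
# shaders * 2 FLOPs per clock (fused multiply-add) * clock speed.
def gcn_peak_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000

xbox_one = gcn_peak_tflops(shaders=768, clock_ghz=0.853)   # 12 CUs x 64 shaders
ps4 = gcn_peak_tflops(shaders=1152, clock_ghz=0.800)       # 18 CUs x 64 shaders

print(f"Xbox One: {xbox_one:.2f} TFLOPS")          # ~1.31 TFLOPS
print(f"PS4:      {ps4:.2f} TFLOPS")               # ~1.84 TFLOPS
print(f"PS4 advantage: {ps4 / xbox_one - 1:.0%}")  # ~41% once clocks are factored in
```

In other words, the Xbox One's clock advantage narrows the raw 50% compute-unit deficit to roughly 40% in peak throughput, which is exactly the caveat noted above.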
PS4 vs. Xbox One: RAM subsystem and bandwidth
Once we leave the CPU and GPU, the hardware specs of the Xbox One and PS4 begin to diverge, with the RAM being the most notable difference. While both consoles are outfitted with 8GB of RAM, the PS4 opts for 5500MHz GDDR5 RAM, while the Xbox One uses the more PC-like 2133MHz DDR3 RAM. This leads to an absolutely massive bandwidth advantage in favor of the PS4 — the PS4’s CPU and GPU will have 176GB/sec of bandwidth to system RAM, while the Xbox One will have just 68.3GB/sec.
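Those bandwidth figures fall straight out of the memory math, given the 256-bit bus both consoles use: peak bandwidth is the effective transfer rate multiplied by the bus width in bytes.

```python
# Peak memory bandwidth = effective transfer rate (MT/s) * bus width in bytes.
# Both consoles use a 256-bit (32-byte) memory interface.
def bandwidth_gb_s(transfer_rate_mt_s, bus_bits):
    return transfer_rate_mt_s * (bus_bits // 8) / 1000

ps4_gddr5 = bandwidth_gb_s(5500, 256)   # 176.0 GB/s
xbox_ddr3 = bandwidth_gb_s(2133, 256)   # ~68.3 GB/s

print(f"PS4 GDDR5:     {ps4_gddr5:.1f} GB/s")
print(f"Xbox One DDR3: {xbox_ddr3:.1f} GB/s")
```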
http://www.extremetech.com/wp-conten...-1-640x480.png
More...
-
Minecraft exploit that made it easy to crash servers gets patched
http://www.extremetech.com/wp-conten...42-640x351.png
It turns out that for the past two years, you could crash a Minecraft server pretty easily. A security researcher published the exploit Thursday and said he first discovered it in version 1.6.2 back in July 2013 — almost two years ago. He claims Mojang ignored him and did nothing to fix the problem, despite his repeated attempts at following standard protocol and contacting the company in private.
“This vulnerability exists on almost all previous and current minecraft versions as of 1.8.3; the packets used as attack vectors are the 0x08: Block Placement Packet and 0x10: Creative Inventory Action,” Ammar Askar wrote. The exploit takes advantage of the way a Minecraft server decompresses and parses data, causing it to generate “several million Java objects including ArrayLists,” which runs the server out of memory and pegs CPU load in the process.
“The fix for this vulnerability isn’t exactly that hard, [as] the client should never really send a data structure as complex as NBT of arbitrary size and if it must, some form of recursion and size limits should be implemented. These were the fixes that I recommended to Mojang 2 years ago.” Askar posted a proof of concept of the exploit to GitHub that he says has been tested with Python 2.7. Askar has since updated his blog post twice after finally making contact with Mojang. What he says essentially confirms that the company either didn’t test a claimed fix against his proof of concept, or lied about having one in the first place.
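Askar's suggested mitigations, size limits and recursion limits, are easy to express in code. The sketch below is a hypothetical Python illustration of the general idea, not Mojang's fix or Askar's proof of concept; the limits and the tag structure are made up.

```python
import zlib

# Hypothetical limits -- the general idea, not Mojang's actual values or code.
MAX_DECOMPRESSED_BYTES = 2 * 1024 * 1024  # refuse payloads that inflate too far
MAX_NBT_DEPTH = 16                        # refuse absurdly nested structures

def safe_decompress(payload: bytes) -> bytes:
    """Inflate a packet payload, but stop once the size limit is reached."""
    d = zlib.decompressobj()
    data = d.decompress(payload, MAX_DECOMPRESSED_BYTES)
    if d.unconsumed_tail:
        raise ValueError("payload exceeds decompression limit")
    return data

def check_depth(tag, depth=0):
    """Walk a parsed tag tree (dicts with 'children') and reject deep nesting."""
    if depth > MAX_NBT_DEPTH:
        raise ValueError("structure nested too deeply")
    for child in tag.get("children", []):
        check_depth(child, depth + 1)

# Example: a deeply nested structure is rejected before it can balloon into
# millions of objects.
nested = {"children": []}
tip = nested
for _ in range(50):
    tip["children"].append({"children": []})
    tip = tip["children"][0]

try:
    check_depth(nested)
except ValueError as err:
    print("rejected:", err)
```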
It now looks like Mojang has responded (at least indirectly) to the post with a patch. The company announced today that it is releasing version 1.8.4: “This release fixes a few reported security issues, in addition to some other minor bug fixes & performance tweaks.”
The release notes make no direct mention of the exploit Askar wrote about, and comments are closed on the post. But notably, two of the fixes listed are Bug MC-79079, “Malicious clients can force a server to freeze,” and Bug MC-79612, “Malicious clients can force a server to go out memory [sic]:”
http://www.extremetech.com/wp-conten....05.49-PM1.png
At the time of this writing, Askar has yet to update his blog post a third time acknowledging the patch and/or commenting on whether it fixes the exploit.
Back in September, Microsoft announced it was buying Mojang for $2.5 billion, with company founder Notch moving on to something new. The game is available on all major platforms, including PC, Mac, PS3, PS4, Xbox 360, Xbox One, iOS, Android, Windows Phone, and Amazon Kindle Fire.
More...
-
EA shows off Star Wars Battlefront footage at fan convention
https://www.youtube.com/watch?t=84&v=ZwWLns7-xN8
Nearly ten years after the series went dormant with Star Wars: Battlefront 2, a new shooter set in the Star Wars universe is set to hit the PC, Xbox One, and PS4 on November 17.
At the Star Wars Celebration in Anaheim, members of the DICE development team showed off a pre-alpha trailer for the Frostbite engine-powered game, which is simply being called Star Wars: Battlefront. While the footage didn't show off much direct gameplay, the whole thing was reportedly rendered in real-time on a PS4.
The in-game footage displayed the team's use of photogrammetry to render models directly from pictures of the actual models used in the movies, rather than 3D assets created from whole cloth by digital artists. The development team said they took trips to shooting locations for the original Star Wars films and referenced materials from the libraries at Skywalker Ranch to add further authenticity to the source material. Skywalker Sound will be providing the audio for the game.
As far as gameplay goes, players will be able to take part in story missions, either solo or with a co-op partner, and online multiplayer shootouts with up to 40 people, all from either a first- or third-person viewpoint. On-field power-ups will let players transform into well-known characters like Luke Skywalker, Darth Vader, and Boba Fett. Vehicles like speeder bikes, snowspeeders, X-wings, TIE Fighters, AT-ATs, and even the Millennium Falcon will be pilotable, but those vehicles unfortunately cannot be taken out into space—this is a strictly planet-based affair.
The new Battlefront takes place during the time of the original film trilogy, across familiar locations like Hoth, Endor, and Tatooine, as well as the newly explorable planet Sullust (which is based on the Icelandic countryside). As a tie-in with the upcoming Force Awakens movie, players will also have access to a free bit of DLC detailing the "Battle of Jakku," which takes place decades before the events shown in the film's new trailer. Players that pre-order the game will get that DLC mission on December 1, while everyone else will get it December 8.
The new Battlefront was originally announced at E3 2013 and was shown in slightly more detail at last year's show, but EA and developer DICE have been relatively silent regarding the details of the game until now. Back in December, DICE said it was "not moving onto future projects," including Battlefront, until it fixed nagging server issues with Battlefield 4.
Battlefront is the first game in a previously announced ten-year deal giving Electronic Arts the exclusive rights to all Star Wars-based games.
Source
-
The 20 best free PC games
http://www.extremetech.com/wp-conten...the-Exile.jpeg
Free PC games used to be the realm of quirky flash games or weird indie projects, but the free-to-play phenomenon has really taken off in the last couple of years. Now, the $60 AAA games that once ruled the roost are getting some real competition from games that offer hundreds of hours of gameplay for free.
There are innumerable free-to-play games available for the PC, and with that comes both good and bad. The large selection means that there is something to fit just about any taste, but the signal-to-noise ratio is truly atrocious. Instead of trudging through dozens of clones and half-hearted cash grabs, let me separate the wheat from the chaff for you. Today, I’m highlighting twenty of the very best free games on the PC to help you find something you’ll really love. There’s a lot to cover, so follow along, and something here is bound to strike your fancy.
http://www.extremetech.com/wp-conten...80-640x360.jpg
Dota 2
Based on the popular Warcraft III mod called Defense of the Ancients (DotA for short), Valve’s Dota 2 is a model free-to-play game. Without spending one red cent, you get access to the entire gameplay experience. Of course, Valve makes a tidy profit from selling cosmetic and ancillary items. The Bellevue company is well-versed in the realm of free-to-play games, so don’t be surprised if you find yourself buying new loot for this “free” game once you’re hooked.
http://www.extremetech.com/wp-conten...14/08/LoL.jpeg
League of Legends
Just like Dota 2, League of Legends is a MOBA (multiplayer online battle arena) derived from the same Warcraft III mod. However, the folks at Riot Games have a very different pricing model from Valve’s. You can play a select number of characters for free, but access to additional characters is going to cost you. Regardless of the value proposition compared to other MOBAs, this game remains insanely popular across the globe.
http://www.extremetech.com/wp-conten...5/04/HotS.jpeg
Heroes of the Storm
As if there wasn’t enough competition in the MOBA space, Blizzard is getting in on the action as well. Heroes of the Storm takes elements from all of Blizzard’s various franchises and melds them together into a single DotA-like game. This particular iteration of the MOBA concept has been lauded as more approachable than others in the genre, but it’s still in beta testing. If you want to play it, you’ll need to apply for access. Of course, Blizzard is more than happy to take your money for heroes and skins regardless. Most purchases range between $4 and $20, but there are a few outliers here and there. Anything could change, though, so buy carefully. A nerf is always around the corner.
http://www.extremetech.com/wp-conten...arthstone.jpeg
Hearthstone: Heroes of Warcraft
Based on the artwork and setting of Blizzard’s incredibly popular Warcraft franchise, Hearthstone is a phenomenon in and of itself. This turn-based collectable card game is hugely successful on PC and mobile, and the low barrier to entry is the reason why. All you need is a free Battle.net account, and you can join in on the fun. As it stands, there are two single player campaigns that cost $25 each, and you can spend anywhere from $3 to $70 at a shot on booster packs. But if you just buckle down and play the game on a regular basis, you’ll soon earn enough good cards that you won’t really need to buy boosters to stay competitive.
http://www.extremetech.com/wp-conten...Capitalist.png
AdVenture Capitalist
If you’ve ever played a game like Cookie Clicker or Candy Box, you’ll be very familiar with the way this game works. When broken down into its component parts, you do little more than click buttons and watch numbers grow higher, but there’s something so viscerally satisfying about this style of game. AdVenture Capitalist, fundamentally, is a dopamine machine. You can download it for free on Steam, and you can spend anywhere from $2 to $100 at a shot for currency that will essentially speed up your progress. The problem is… the progress is all this game has to offer. What is even the point of clicking these damned buttons if you don’t enjoy the slow progress of it all? It’s hard to explain the appeal, but since it’s free, you can simply go see for yourself.
More...
-
AMD leaks Microsoft’s plans for upcoming Windows 10 launch
http://www.extremetech.com/wp-conten...-1-348x196.jpg
One last tidbit from AMD’s recent conference call doesn’t have much to do with AMD itself, but it sheds light on Microsoft’s plans for Windows 10. During the call, CEO Lisa Su was asked whether she could clarify how Q2 results were expected to play out for both the semicustom and embedded business (Xbox One and PS4) and the more mainstream APU and GPU business. Su responded by saying that she expected the semicustom business to be up “modestly,” and that AMD would move some inventory in Q2 and begin ramping Carrizo in greater volume. She then went on to say:
“What we also are factoring in is, you know, with the Windows 10 launch at the end of July, we are watching sort of the impact of that on the back-to-school season, and expect that it might have a bit of a delay to the normal back-to-school season inventory build-up.” AMD could be wrong, of course, but it’s unlikely — the company almost certainly knows when Microsoft intends to launch the new operating system, and this isn’t the first time we’ve heard rumors of a summer release date for the OS.
It’s possible that AMD intends to capitalize on the launch with a new push around the GPU architecture it’s currently expected to launch in approximately the same time frame. The company wouldn’t need to launch alongside the operating system, but hitting the streets a few weeks early would build buzz around DX12 and other performance enhancements baked into Windows 10 before the actual OS debuts.
The big question on everyone’s mind in the PC industry is going to be whether Windows 10 lives up to the hype. So far, it seems as if it will. I’m not claiming people will be lining up around the block to grab it, but I think there’s a decent chance that Windows 7 users who gave Windows 8 a pass will investigate Windows 10 — especially given the long-term advantages of DirectX 12.
It’s not clear yet how quickly games will transition to the new API, or how long they’ll support both DX12 and DX11. From past experience, we’d expect to see high-end AAA games popping out in DX12 in short order (the Star Wars Battlefront title that debuted last week will be DX12). Plenty of other games will transition more gradually, however, and it wouldn’t surprise us to see a few DX9 titles still knocking about — indie games and small developers have very different budgets compared with the big game studios.
More...
-
Rounding up GTA V on the PC: How do AMD, Intel, and Nvidia perform?
http://www.extremetech.com/wp-conten...PC-640x353.jpg
Eighteen months after it debuted on the Xbox 360 and PS3, Grand Theft Auto V has made it over to the PC side of the fence. Videos and previews before the launch teased a constant 60 FPS frame rate and enhanced visuals and capabilities that would leave last-gen consoles in the dust. A number of sites have published comprehensive overviews of GTA V’s performance, including a focus on CPUs, GPUs, and the performance impact of various settings. We’ve broken down the big-picture findings, with additional links to specific coverage.
CPU Scaling
Let’s start with CPU scaling, since there are going to be questions there. Whether at 1080p or 1440p, GTA V is playable on quad cores and above from both AMD and Intel, but the Intel chips continue to have a definite advantage overall. At normal detail levels with FXAA (using a GTX Titan to avoid GPU bottlenecks), Techspot reports that the FX-9590, AMD’s highest-end 220W chip, is the only AMD CPU to beat the Core i5-2500K — a midrange Intel chip that’s nearly five years old.
That doesn’t mean AMD CPUs don’t offer a playable frame rate. But most of AMD’s chips land between 55 and 72 FPS, while Intel’s lock down the 70+ FPS range. GamersNexus offers a more detailed look at some CPUs, again using a Titan X to eliminate any chance of a GPU bottleneck. What’s most notable about AMD’s CPU performance, even at 1080p, is the gap in minimum frame rate.
http://www.extremetech.com/wp-conten...ax-640x422.jpg
AMD’s chips hit a minimum frame rate of 30 FPS, compared with 37 FPS for the Core i7-4790K. That gap is significant — Intel’s high-end cores are hitting a 23% higher frame rate. That said, these gaps can be reduced by lowering visual quality from Max to something a bit less strenuous — many of the Advanced graphics features and post-processing effects in GTA V incur a heavy performance hit.
The bottom line is this: While CPU brand matters in GTA V, it’s not the major factor. Every chip, save for the Intel Pentium G3258, can run the game. Low-end AMD owners may have to put up with a significant performance hit, while most users with AMD Athlon 760K-class processors likely aren’t trying to run GTA V in the first place.
GPU scaling
GPU scaling is a more interesting animal. First, the game scales exceptionally well to a wide variety of video cards from both AMD and Nvidia. Nvidia cards from the 900 series generally have the upper hand at the various detail levels, but at 1080p “High,” for example, even the R9 285 returns a 0.01% frame rate of 34 FPS (meaning that 99.99% of frames were faster than this) alongside an average frame rate of 69 FPS. Even the Nvidia GTX 750 Ti is capable of hitting 52 FPS in this mode.
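For reference, the "0.01% frame rate" is simply a very low percentile of the recorded frame rates. A quick sketch of how such figures are derived from a frame-time log, using made-up sample data:

```python
# Compute average and "0.01% low" frame rates from a frame-time log (in ms).
# The sample data here is made up purely for illustration.
frame_times_ms = [14.2, 15.1, 13.8, 16.0, 14.9, 29.4, 14.5, 15.3, 14.1, 15.0]

fps_samples = sorted(1000.0 / t for t in frame_times_ms)

average_fps = sum(fps_samples) / len(fps_samples)

# The "0.01% low" is the frame rate that 99.99% of frames exceed. With a real
# log of tens of thousands of frames this indexes near the very bottom of the
# sorted list; with only ten samples it degenerates to the minimum.
index = int(len(fps_samples) * 0.0001)
low_0_01_fps = fps_samples[index]

print(f"Average: {average_fps:.1f} FPS, 0.01% low: {low_0_01_fps:.1f} FPS")
```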
http://www.extremetech.com/wp-conten...h2-640x430.jpg
The high-end GPUs from AMD and Nvidia can both compete at higher resolutions — at 1440p at max detail, Tweaktown reports that the R9 290X 8GB flavor hits 77 FPS, just barely behind Nvidia’s far more expensive Titan Black. The GTX 900 family continues to dominate overall, with a moderate-but-significant performance edge.
If you want to run at 4K and max detail, however, you’re going to be disappointed. Even the mighty GTX Titan X has a bad day at 4K on Ultra, with frame rates that can’t break 40 FPS. Interestingly enough, however, there’s a disparity between what we’ve seen reported at websites like GamersNexus, which has the GTX Titan X at 40 FPS, vs. Tweaktown, which clocks it at 72 FPS. GamersNexus didn’t use the prerecorded benchmark, while Tweaktown did — this may have impacted the final overall results in the benchmark compared to in-title.
If you prefer video frame rates, Eurogamer’s in-game results also may point to a discrepancy between how AMD and Nvidia perform: Eurogamer reported much stronger minimum frame rates for AMD when using a high-end Core i7 compared to a midrange Core i3; other sites showed no such difference.
Some features, like high-end textures, incur a relatively modest performance hit, while others, like those under the game’s Advanced Graphics Options, exact far steeper penalties. The game will try to keep you from setting visual options that your GPU won’t support, but you can override this from within the Graphics menu. GamersNexus has an exhaustive post on the topic of which game options hit performance the most, along with a number of comparison shots.
Tying it all together
The general consensus is that Grand Theft Auto V scales quite well. Virtually any modern GPU + CPU combo can run the game, though enthusiasts with low-end AMD CPUs may have to make a number of compromises. The game is optimized for certain Nvidia GameWorks features, like Percentage-Closer Soft Shadows (PCSS), but also supports AMD’s Contact Hardening Shadows (CHS).
http://www.extremetech.com/wp-conten...SS-640x360.jpg
Nvidia’s Percentage-Closer Soft Shadows
http://www.extremetech.com/wp-conten...HS-640x359.jpg
AMD’s Contact Hardening Shadows
Some readers may be concerned about the GTX 970’s performance at high memory loads, but evidence for a problem is mixed. Techspot’s review shows the GTX 970 hanging even with the GTX 980 at 1080p with normal textures (88 FPS) and the R9 290X following at 80 FPS. At 4K with Very High Textures and FXAA, the R9 290X had advanced to third place, at 33 FPS, behind the GTX 980 with 36 FPS. The GTX 970, in contrast, had fallen back to 30 FPS. No one has reported unusual stuttering or other problems on the GTX 970, however, at least not yet.
Based on the results we’ve seen to date, we’d say that Rockstar has delivered the definitive and best-looking version of the game for PCs, with a 60 FPS option available for almost any video card and CPU combination. The controls and key mappings are terrible and clearly designed for consoles. But as far as frame rate and eye candy are concerned, GTA V delivers. 4K at max detail, however, remains well beyond the range of even the most powerful Nvidia GPU.
More...
-
AMD details new power efficiency improvements, update on ’25×20
http://www.extremetech.com/wp-conten...MD-640x353.jpg
Energy efficiency is (if you’ll pardon the pun) a hot topic. Foundries and semiconductor manufacturers now trumpet their power-saving initiatives with the same fervor they once reserved for clock speed and performance improvements. AMD is no exception to this trend, and the company has just published a new white paper detailing the work it’s doing as part of its ’25×20’ energy efficiency initiative.
More...
-
General Motors, John Deere want to make tinkering, self-repair illegal
http://www.extremetech.com/wp-conten...D1-348x196.jpg
The ability to modify a vehicle you’ve purchased is, in many ways, a fundamental part of America’s car culture — and, to some extent, embedded in our culture, period. From the Fast & Furious saga to Han Solo’s “She may not look like much, but she’s got it where it counts, kid. I’ve made a lot of special modifications,” we value the right to tinker. More practically, that right can be critically important when it comes to fixing heavy farm equipment. That’s why it’s significant that companies like John Deere and General Motors have joined forces to argue that no, you don’t actually own the equipment you purchase at all.
http://www.extremetech.com/wp-conten...015/04/MF1.png
Without those modifications, what are the chances that the Falcon could make .5 past light speed? I am a huge nerd.
The tractor and farm equipment manufacturer doesn’t mince words. “In the absence of an express written license in conjunction with the purchase of the vehicle [to operate its software], the vehicle owner receives an implied license for the life of the vehicle to operate the vehicle, subject to any warranty limitations, disclaimers or other contractual limitations in the sales contract or documentation.”
GM, meanwhile, alleges that “Proponents incorrectly conflate ownership of a vehicle with ownership of the underlying computer software in a vehicle.” The problem with these arguments is that while existing software laws confirm that individuals are licensing code rather than purchasing it when they buy a license from Adobe or Microsoft, the cases in question did not generally anticipate that the code would be used to artificially create extremely high barriers to repair. As tractors have gone high tech, John Deere has aggressively locked away the critical information needed either to adjust aspects of the vehicle’s timing and performance or to troubleshoot problems. Kyle Wiens wrote about the issue a few months back, noting how John Deere’s own lockouts and high-tech “solutions” have supposedly caused a spike in demand for older, simpler vehicles. Farmers, it seems, don’t like having to pay expensive technicians. As a result, the used tractor business is booming.
Both companies go on to assert that the Copyright Office shouldn’t consider allowing tractor owners or car enthusiasts to make any sort of modifications to their vehicles, because doing so might enable piracy through the vehicle entertainment system. While this is theoretically possible (if improbable) in a modern car, I suppose, it’s downright laughable in a tractor. A new tractor can cost upwards of $100,000 — is anyone seriously going to pirate One Direction CDs while ploughing a field?
The other argument — that users will abuse these capabilities to engage in unsafe or dangerous activities — ignores the fact that Americans have enjoyed this level of tweaking and tuning for decades. True, a modern computer system might make it easier to modify certain characteristics, but it’s not as if the concept of tuning a car was invented alongside OBD-II.
http://www.extremetech.com/wp-conten...or-640x353.jpg
Arr! Raise the Jolly Roger! Ahead ploughing speed!
John Deere makes a number of scare tactic allegations around the very idea of a modifiable tractor, including this gem: “Third-party software developers, pirates, and competing vehicle manufacturers will be encouraged to free-ride off the creativity and significant investment in research and development of innovative and leading vehicle manufacturers, suppliers, and authors of vehicle software. The beneficiaries of the proposed exemption will not be individual vehicle owners who allegedly want to repair, redesign or tinker with vehicle software, but rather third-party software developers or competing vehicle manufacturers who — rather than spending considerable resources to develop software from scratch — instead would be encouraged to circumvent TPMs in order to make unauthorized reproductions of, and derivative works based on, the creativity of others.”
Just in case you’re confused, we’re still talking about tractors — not someone stealing the crown intellectual property of a literary genius.
The long-term risk of locking-out repair
I’ve never modded my car, and I haven’t been near a tractor in decades. But the reason this fight matters is directly rooted in the ability to repair anything. With multiple manufacturers pushing the Internet of Things at full throttle, how long before basic appliance repair becomes something only a licensed technician can perform? It’s not an altogether crazy concept. Device and appliance manufacturers have fought to build repair monopolies for decades in various industries, with generally limited success.
Thanks to the DMCA and the rules of software licensing, those efforts could finally succeed in the 21st century. As the IoT advances, virtually everything can be locked off and limited to only those shops that can afford sophisticated diagnostic equipment — thereby limiting user choice and freedoms.
More...
-
Analyst: Intel will adopt quantum wells, III-V semiconductors at 10nm node
http://www.extremetech.com/wp-conten...r1-348x196.jpg
As the pace of Moore’s Law has slowed and shifted, every process node has become more difficult and complicated to achieve. The old days, when a simple die shrink automatically brought faster chips and lower power consumption, are gone. Today, companies perform a die shrink (which makes most aspects of the chip perform worse, not better), and then find supplementary technologies that can improve performance and bring yields and power consumption back into line. Analyst and RealWorldTech owner David Kanter has published a paper on where he thinks Intel is headed at the 10nm node, and predicts the tech giant will deploy a pair of new technologies there — quantum well FETs and III-V semiconductors.
We’ve talked about III-V semiconductors before, so we’ll start there. Intel has been evaluating next-generation semiconductor materials for years; we first spoke with Mark Bohr about the company’s efforts back in 2012. III-V semiconductors are so called because the materials are drawn from Groups III and V of the periodic table. Many of these materials have superior performance compared with silicon — either they use less power or they allow for drastically higher clock speeds — but they cost more and often require extremely sophisticated manufacturing techniques. Finding materials to replace both the p-channel and n-channel has been difficult, since some of the compounds used for one type of structure don’t work well for the other.
Kanter predicts that Intel will use either InGaAs (indium gallium arsenide) or InSb (indium antimonide) for the n-type channel, and strained germanium or additional III-V materials for the p-type channel. The net effect of this adoption could cut operating voltages to as low as 0.5V. This is critical to further reducing idle and low-use power consumption, because power consumption rises with the square or cube of voltage (depending on leakage characteristics and overall transistor type).
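To see why the voltage reduction matters so much, consider the simplified dynamic-power relationship P ≈ C·V²·f (leakage ignored); the voltages below are illustrative, not Intel's figures.

```python
# Simplified dynamic-power model: P ~ C * V^2 * f (leakage ignored).
# The voltages and frequency here are illustrative, not Intel figures.
def dynamic_power(capacitance, voltage, frequency_ghz):
    return capacitance * voltage ** 2 * frequency_ghz

current_node = dynamic_power(capacitance=1.0, voltage=0.9, frequency_ghz=1.0)
iii_v_node = dynamic_power(capacitance=1.0, voltage=0.5, frequency_ghz=1.0)

print(f"Dynamic power at 0.5V: {iii_v_node / current_node:.0%} of the 0.9V case")
# ~31%; with cube-law leakage effects the savings would be larger still.
```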
The other major advantage that Kanter thinks Intel may deploy is the use of quantum well structures. Quantum wells trap electrons by surrounding them with an insulating structure that leaves a limited number of dimensions for the electrons to move in. This new fin structure and gate are shown in the image below, from an Intel IEDM paper in 2011.
http://www.extremetech.com/wp-conten...aAs_FinFET.jpg
Combined, these new structures would allow Intel to substantially cut power consumption while simultaneously improving other characteristics of the transistor’s performance. It’s not clear if this would enable substantially faster chips. Intel has focused primarily on making its CPUs more efficient over the past five years, as opposed to making them faster. And while a Core i7-4790K is unquestionably quicker than a Core i7-2600K from 2011, the gap isn’t what it would’ve been ten years ago.
It’s possible that these new structures would run at substantially higher clock speeds. But building chips smaller also means decreasing feature sizes, which increases the formation of hot spots on the die. The ability to run chips at 0.5v is great for the Internet of Things, but it’s not necessarily great if the chip needs 1.1V to hit maximum clock.
Uncertainty at 10nm and below
Kanter addresses why he wrote this piece, noting he believes that “industry experts should make insightful, specific, verifiable predictions that have a definite time horizon.” It makes sense that Intel would go for some of these capabilities. The firm is known to be researching III-Vs, and quantum wells have apparently been worked on for a decade or more. Intel likes to talk up its manufacturing capabilities and advantages, and this type of innovation at the 10nm node could give it a huge leg up over the competition.
Samsung and TSMC haven’t revealed much about their own plans, but it’s highly likely that both firms will stick with conventional FinFET designs at 10nm, just as Intel spent two design cycles using its own standard Tri-Gate technology. If TSMC actually closes the gap with Intel at 10nm, III-Vs and QWFETs would give Intel an argument for claiming it’s not just the number of nanometers that makes the difference — it’s the process tech as well.
If Intel adopts these technologies at 10nm, it’ll push the window for EUV back further, to 7nm — and possibly require additional verification and validation that the tool sets are compatible with both the new semiconductor manufacturing equipment and the requirements of EUV itself. Such a move would likely push EUV introduction back into the 2018-2019 timeframe, assuming that 10nm shipments begin in 2016-2017, with 7nm ready two to three years later.
Intel supposedly delayed installation of 10nm equipment until December of this year, but is planning to ramp production at its facilities in Israel.
More...
-
ARM details its upcoming Cortex-A72 microarchitecture
http://www.extremetech.com/wp-conten...72-348x196.jpg
Earlier this year, ARM announced its Cortex-A72 — a new microarchitecture from the CPU designer that builds on and refines the 64-bit Cortex-A57. Ordinarily it takes up to 24 months for new ARM cores to come to market after the company announces a new CPU design, but Qualcomm has told us to expect Cortex-A72 cores by the end of the year. If true, that would make this one of the company’s fastest CPU ramps ever — so what can the new core do?
If ARM hits its targets, quite a lot.
New process, new product
The Cortex-A72 is based on the Cortex-A57, but ARM has painstakingly refined its original implementation of that chip. The company is claiming that the A72 will draw 50% less power than the Cortex-A15 (a notoriously power-hungry processor) at 28nm and 75% less power at its target 16nmFF+ / 14nm process node. Compared to the Cortex-A57 at 28nm, ARM still expects the A72 to draw 20% less power.
http://www.extremetech.com/wp-conten...er-640x365.png
ARM is supposedly aiming for the Cortex-A72 to be capable of sustained operation at its maximum frequency, which is a topic we touched on yesterday when covering the Snapdragon 810’s throttling problem. The CPU targets a clock-for-clock performance improvement of 1.16x to 1.5x over the Cortex-A57. Making this happen required revamping the branch predictor, cutting mispredictions by 50% and reducing speculation-related power consumption by 25%. The chip can also bypass its branch predictor completely in circumstances where it is performing poorly, saving additional power in the process.
The Cortex-A72 is still capable of decoding three instructions per clock cycle, but apparently adds some instruction fusion capability to increase efficiency. Each of these components has been power-optimized as well. AnandTech reports that ARM’s dispatch stage can break fused ops back into micro-ops for increased execution granularity, effectively turning a three-wide decoder into a five-wide machine in some cases.
http://www.extremetech.com/wp-conten...-1-640x357.jpg
ARM is also amping up its game in SIMD execution units. Instruction latencies have been slashed, pipelines shortened, and cache bandwidths boosted. There are no huge changes in organization or capability, but the CPU core should see significant improvements thanks to these adjustments. ARM has even managed to shave off some die size — the Cortex-A72 is supposed to be about 10% smaller than the Cortex-A57, even on the same process.
http://www.extremetech.com/wp-conten...rf-640x353.png
Ars Technica reports that according to ARM, the Cortex-A72 can even beat the Core M in certain circumstances. Such predictions must be taken with a grain of salt — they assume, for example, that the Core M will be thermally limited (we’ve seen that this can vary depending on OEM design). Tests like SPECint and SPECfp tend to be quite dependent on compiler optimizations, and while the multi-threaded comparison is fair as far as it goes, ARM is still assuming that the Cortex-A72 won’t be thermally limited. Given that all smartphones and tablets throttle at present, the company will need to prove the chip doesn’t throttle before such claims can be taken seriously.
All the same, this new chip should be an impressive leap forward by the end of the year. Whether it’ll compete well against Apple’s A9 or Qualcomm’s next-generation CPU architecture is another question.
More...