
Game Tech News

This is a discussion on Game Tech News within the Electronics forums, part of the Non-Related Discussion category.

      
   
  1. #91
    member HiGame

    Nvidia slapped with class-action lawsuit over GTX 970 memory issues [UPDATED]


    Nvidia’s CEO, Jen-Hsun Huang, hasn’t directly responded to the class-action lawsuit allegations, but he has written a blog post responding to the larger situation. Said post isn’t likely to win many converts to Nvidia’s way of thinking, since Huang refers to the memory issue as a “feature,” noting that “Instead of being excited that we invented a way to increase memory of the GTX 970 from 3GB to 4GB, some were disappointed that we didn’t better describe the segmented nature of the architecture for that last 1GB of memory.”

    He then claims that this was done because games “are using more memory than ever.”

    The problem with Jen-Hsun's statement is that it's nearly impossible to test. There is evidence that the GTX 970 takes a performance hit in SLI mode compared to the GTX 980 at high resolutions, and that the impact might be related to the memory segmentation. In order to argue that the GTX 970 benefits from this alternate arrangement, Nvidia would have to demonstrate that a GTX 970 with 3-3.5GB of RAM is slower than a GTX 970 with 4GB of RAM in a segmented configuration. No such evidence has been given, which makes the CEO's statement sound like a claim that unhappy users are ungrateful and mis-evaluating the product. That's not going to sit well with the small-but-vocal group of people who just dropped $700 on GTX 970s in SLI.

    To his credit, the Nvidia CEO repeatedly pledges to do better and to communicate more clearly, but the entire tone of the blog post suggests he doesn’t understand precisely what people are unhappy about.

    Original story follows:


    Last month, we detailed how Nvidia's GTX 970 has a memory design that limits its access to the last 512MB of RAM on the card. Now the company is facing a class-action lawsuit alleging that it deliberately misled consumers about the capabilities of the GTX 970 in multiple respects.

    Nvidia has acknowledged that it “miscommunicated” the number of ROPS (Render Output units) and the L2 cache capacity of the GTX 970 (1792KB, not 2048KB), but insists that these issues were inadvertent and not a deliberate attempt to mislead customers. There’s probably some truth to this — Nvidia adopted a new approach to blocking off bad L2 blocks with Maxwell to allow the company to retain more performance. It’s possible that some of the technical ramifications of this approach weren’t properly communicated to the PR team, and thus never passed on to reviewers.



    The memory arrangement on the GTX 970.

    Nvidia’s decision to divide the GTX 970’s RAM into partitions is a logical extension of this die-saving mechanism, but it means that the GPU core has effective access to just seven of its eight memory controllers. Accessing that eighth controller has to flow through a crossbar, and is as much as 80% slower than the other accesses. Nvidia’s solution to this problem was to tell the GPU to use just 3.5GB of its available memory pool and to avoid the last 512MB whenever possible. In practice, this means that the GTX 970 flushes old data out of memory more aggressively than the GTX 980, which will fill its entire 4GB buffer.
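    To make the segmentation concrete, here is a minimal, hypothetical sketch of the allocation preference described above — the pool sizes come from the article, but the allocator itself is purely illustrative and is not Nvidia's actual driver logic.

```python
# Hypothetical illustration of the GTX 970's segmented memory pool.
# Sizes are from the article; the behavior is simplified, not Nvidia's driver logic.

FAST_POOL_MB = 3584   # 3.5GB served by the seven full-speed memory controllers
SLOW_POOL_MB = 512    # last 512MB reached through the crossbar (up to ~80% slower)

class SegmentedVram:
    def __init__(self):
        self.fast_free = FAST_POOL_MB
        self.slow_free = SLOW_POOL_MB

    def allocate(self, size_mb):
        """Prefer the fast partition; spill to the slow one only when forced."""
        if size_mb <= self.fast_free:
            self.fast_free -= size_mb
            return "fast"
        if size_mb <= self.slow_free:
            self.slow_free -= size_mb
            return "slow"
        # In practice the driver would evict (flush) older resources here,
        # which is why the 970 recycles memory more aggressively than the 980.
        return "evict-and-retry"

vram = SegmentedVram()
for texture_mb in [1024, 1024, 1024, 512, 256]:
    print(texture_mb, "MB ->", vram.allocate(texture_mb))
```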

    Of principles, practices, and performance


    Nvidia has maintained that the performance impact from disabling the last 512MB of memory in single-GPU configurations is quite small, at around 4%. Our own performance tests found little reason to disagree with this at playable resolutions — at 4K resolutions we saw signs that the GTX 980 might be superior to the 970 — but the frame rates had already fallen to the point where the game wasn't very playable. At the time, I theorized that SLI performance might take a hit where single GPUs didn't. Not only is there a certain amount of memory overhead associated with multi-GPU rendering, but two graphics cards can also drive playable 4K resolutions where a single card chokes. We've been recommending that serious 4K gamers explore multi-GPU configurations, and the GTX 970 appeared to be an ideal match for SLI when it first came out.

    Testing by PC Perspective confirmed that in at least some cases, the GTX 970's SLI performance does appear to take a larger-than-expected hit compared to the GTX 980. Still, the site notes that the issues only manifest at the highest detail levels and graphics settings — the vast majority of users are simply unlikely to encounter them.


    Graph courtesy of PC Perspective

    One thing that makes the complaint against Nvidia interesting is that the facts of the case aren't really in dispute. Nvidia did miscommunicate the specifications of its products and it did misrepresent those figures to the public (advertising 4GB of RAM when only 3.5GB is available in the majority of cases). The question is whether or not those failings were deliberate and whether they resulted in significant harm to end users.

    The vast majority of customers who bought a GTX 970 will not be materially impacted by the limits on the card's performance — but people who bought a pair of them in SLI configurations may have a solid argument for how Nvidia's failure to market the card properly led them to purchase the wrong product. It's also fair to note that this issue could change competitive multi-GPU standings. AMD's R9 290X starts $10 below the GTX 970 at NewEgg, but has none of the same memory limitations. The GTX 970 is still a potent card, even with these limitations and caveats, but it's not quite as strong as it seemed on launch day — and there are obviously some Nvidia customers who feel misled.


    More...

  2. #92
    member HiGame

    PowerVR goes 4K with GT7900 for game consoles


    PowerVR is announcing its new high-end GPU architecture today, in preparation for both Mobile World Congress and the Game Developers Conference (MWC and GDC, respectively). Imagination Technologies has lost some ground to companies like Qualcomm in recent years, but its cores continue to power devices from Samsung, Intel, MediaTek, and of course, Apple. The new GT7900 is meant to crown the Series 7 product family with a GPU beefy enough for high-end products — including 4K support at 60fps — as well as what Imagination is classifying as “affordable” game consoles.



    First, some specifics: The Series 7 family is based on the Series 6 "Rogue" GPUs already shipping in a number of devices. But it includes support for hardware fixed-function tessellation (via the Tessellation Co-Processor), a stronger geometry front-end, and an improved Compute Data Master that PowerVR claims can schedule wavefronts much more quickly. OpenGL ES 3.1 plus the Android Extension Pack is also supported. The new GT7900 is targeting 14nm and 16nm processes, and can offer up to 800 GFLOPS in FP32 mode (what we'd typically call single-precision) and up to 1.6 TFLOPS in FP16 mode.
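    As a quick sanity check, those throughput figures imply a clock in the high-700MHz range, assuming the usual two FLOPs per ALU per cycle (one fused multiply-add) and FP16 running at twice the FP32 rate. The implied clock is our own back-of-the-envelope inference, not a figure Imagination has published.

```python
# Back-of-the-envelope check on Imagination's quoted GT7900 throughput.
# Assumes one fused multiply-add (2 FLOPs) per ALU per clock; the implied
# clock is an inference, not a published spec.

alu_cores = 512
fp32_gflops = 800.0

implied_clock_ghz = fp32_gflops / (alu_cores * 2)            # FLOPs = cores * 2 * clock
print(f"Implied clock: {implied_clock_ghz * 1000:.0f} MHz")  # ~781 MHz

# FP16 mode packs two half-precision ops per FP32 lane, doubling throughput.
fp16_tflops = fp32_gflops * 2 / 1000
print(f"FP16 throughput: {fp16_tflops:.1f} TFLOPS")          # 1.6 TFLOPS
```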

    One of the more interesting features of the Series 7 family is its support for what PowerVR calls PowerGearing. The PowerVR cores can shut down sections of the design in power-constrained scenarios, in order to ensure only areas of the die that need to be active are actually powered up. The end result should be a GPU that doesn’t throttle nearly as badly as competing solutions.


    On paper, the GT7900 is a beast, with 512 ALU cores and enough horsepower to even challenge the low-end integrated GPU market if the drivers were capable enough. Imagination Technologies has even created an HPC edition of the Series 7 family — its first modest foray into high-end GPU-powered supercomputing. We don’t know much about the chip’s render outputs (ROPs) or its memory support, but the older Series 6 chips had up to 12 ROPS. The GT7900 could sport 32, with presumed support for at least dual-channel LPDDR4.

    Quad-channel memory configurations (if they exist) could actually give this chip enough clout to rightly call itself a competitor for last-generation consoles, if it were equipped in a set-top box with a significant thermal envelope. Imagination is also looking to push the boundaries of gaming in other ways — last year the company unveiled an architecture that would incorporate a ray tracing hardware block directly into a GPU core.

    The problem with targeting the affordable console market is that every previous attempt to do this has died. From Ouya to Nvidia's Shield, anyone who attempted to capitalize on the idea of a premium Android gaming market has either withered or been forced to drastically shift focus. Nvidia may have built two successive Shield devices, but the company chose to lead with automotive designs at CES 2015 — its powerful successor to the Tegra K1, the Tegra X1, has only been talked about as a vehicle processor. I suppose Nvidia could still announce a Shield update built around the X1, but the fact that the company didn't even mention one at CES, where Tegra was once launched as a premium mobile gaming part, speaks volumes about where Nvidia expects its revenue to come from in this space.

    For its part, Imagination Technologies expects the GT7900 to land in micro-servers, full-size notebooks, and game consoles. It's an impressive potential resume, but we'll see if the ecosystem exists to support such lofty goals. If I had to guess, I'd wager this first chip is the proof-of-concept that will demonstrate the company can compete outside its traditional smartphone and tablet markets. Future cores, possibly built with support for Samsung's nascent Wide I/O standard, will be more likely to succeed.

    More...

  3. #93
    member HiGame

    Report claims DirectX 12 will enable AMD and Nvidia GPUs to work side-by-side



    With the Game Developers Conference right around the corner we've started to see more gaming and technology announcements cropping up, but a new report on DirectX 12 is certain to raise the eyebrows of any PC gamer. It's been reported that DirectX 12 — Microsoft's upcoming, low-latency, close(r)-to-metal API that replaces DirectX 11 — will be capable of running across AMD and Nvidia GPUs at the same time.

    A “source with knowledge of the matter” told Tom’s Hardware that DirectX 12 will support asynchronous workloads across multiple GPUs, and that one extension of this support is that a task could theoretically be split between two different video cards from different manufacturers.

    For many users, this kind of claim is the stuff of legend. One of the factors that distinguishes the AMD – Nvidia competition from the AMD – Intel battle is that Teams Red and Green regularly switch positions. It’s not unusual for one vendor to have the absolute performance crown while the other has a strong price/performance position at the $200 mark, or for one company to lead for several years until leapfrogged by the other.

    The other advantage of combining GPU technologies is that it could allow for multi-GPU performance on Intel-Nvidia systems or even systems with an AMD CPU / APU and an Nvidia GPU. We took this question to several developers we know to find out if the initial report was accurate or not. Based on what we heard, it’s true — DirectX 12 will allow developers to combine GPUs from different vendors and render to all of them simultaneously.

    The future of multi-GPU support


    We’re using Mantle as a jumping-off point for this conversation based on its high-level similarity to DirectX 12. The two APIs may be different at a programming level, but they’re both built to accomplish many of the same tasks. One feature of both is that developers can control GPU workloads with much more precision than they could with DX11.



    Mantle and DirectX 12 have similar capabilities in this regard

    There are several benefits to this. For the past ten years, multi-GPU configurations from both AMD and Nvidia have been handicapped by the need to duplicate all texture and geometry data across both video cards. If you have two GPUs with 4GB of RAM each, you don’t have 8GB of VRAM — you have 2x4GB.



    Nvidia and AMD used to support both AFR and SFR, but DX11 was AFR-only

    One of the implications of DirectX 12's new multi-GPU capabilities is that the current method of rendering via Alternate Frame Rendering (AFR), where one GPU handles the odd frames and the other handles the even frames, may be superseded in some cases by superior methods. We examined the performance impact of Split Frame Rendering (SFR) in Civilization: Beyond Earth last year, and found that SFR offered vastly improved frame times compared to traditional AFR.
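    For readers unfamiliar with the two modes, here is a deliberately simplified sketch of how AFR and SFR divide work between two GPUs. Real drivers and DX12 engines are far more sophisticated; this only shows the basic partitioning idea.

```python
# Simplified illustration of AFR vs. SFR work assignment across two GPUs.

def afr_assignment(frame_number):
    """Alternate Frame Rendering: whole frames alternate between GPUs."""
    return "GPU0" if frame_number % 2 == 0 else "GPU1"

def sfr_assignment(scanline, screen_height):
    """Split Frame Rendering: each GPU renders a portion of every frame."""
    return "GPU0" if scanline < screen_height // 2 else "GPU1"

print([afr_assignment(f) for f in range(4)])                   # frames ping-pong
print(sfr_assignment(100, 2160), sfr_assignment(2000, 2160))   # top / bottom halves
```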



    The R9 295X2 in SFR (Mantle) vs. AFR (D3D) in Civilization: Beyond Earth. Smoother lines = better performance.

    We expect DirectX 12 to offer the same capabilities as Mantle at a high level, but unlike Mantle, it's explicitly designed to support multiple GPUs from Intel, AMD, and Nvidia. Let's take a simple example — an Intel CPU with integrated graphics and an AMD or Nvidia GPU. Each GPU is exposed to the 3D application, which means the workload can theoretically be run across both GPUs simultaneously. It's not clear which GPU would drive the monitor or how output would be handled, but LucidLogix (which actually tried its hand at providing a hardware solution for multi-vendor GPU support once upon a time) later made its name with a virtualized monitor driver that served exactly this purpose.
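    To illustrate what explicit multi-adapter control might look like in practice, here is a purely conceptual sketch of splitting a frame's tiles between an integrated and a discrete GPU in proportion to their throughput. The adapter names, weights, and load-balancing scheme are hypothetical; this is not actual DirectX 12 or DXGI code.

```python
# Conceptual sketch of explicit multi-adapter load balancing under an API
# like DirectX 12. Adapter names and throughput weights are hypothetical.

adapters = {
    "Intel integrated GPU": 0.4,   # relative throughput (e.g. TFLOPS)
    "Discrete GPU":         3.9,
}

def split_tiles(num_tiles, adapters):
    """Assign screen tiles to each adapter in proportion to its throughput."""
    total = sum(adapters.values())
    names = list(adapters)
    assignment, start = {}, 0
    for i, name in enumerate(names):
        if i == len(names) - 1:
            count = num_tiles - start                       # last adapter takes the rest
        else:
            count = round(num_tiles * adapters[name] / total)
        assignment[name] = list(range(start, start + count))
        start += count
    return assignment

for name, tiles in split_tiles(16, adapters).items():
    print(f"{name}: tiles {tiles}")
```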



    AMD has talked up this capability for its products for quite some time.

    The developers we spoke to indicated that AMD and Nvidia wouldn't necessarily need to support the feature in-driver — there are certain kinds of rendering tasks that could be split between graphics cards by the API itself. That's encouraging news, since features that require significant driver support tend to be less popular, but it's not the only potential issue. The advantage of DX12 is that it gives the developer more control over how multi-GPU support is implemented, but that also means the driver handles less of the work. Support for these features will be up to developers, and that's assuming AMD and Nvidia don't take steps to discourage such cross-compatibility in their drivers. Historically, Nvidia has been less friendly to multi-vendor GPU configurations than AMD, but DirectX 12 could hit reset on the feature.

    In an ideal world, this kind of capability could be used to improve gaming performance on nearly all devices. The vast majority of Intel and AMD CPUs now include GPUs onboard — the ability to tap those cores for specialized processing, or just a further performance boost, would be a welcome capability. DirectX 12 is expected to cut power consumption and boost performance in at least some cases, though which GPUs will offer "full" DX12 support isn't entirely clear yet. DX12's multiple-vendor rendering mode wouldn't allow other features, like PhysX, to operate automatically in such configurations. Nvidia has historically cracked down on this kind of hybrid support, and the company would have to change its policies to allow it.


    More...

  4. #94
    member HiGame

    Next-generation Vulkan API could be Valve’s killer advantage in battling Microsoft


    Last week, we covered the announcement of the Khronos Group’s Vulkan API, as well as information on how AMD’s Mantle formed the fundamental basis of the new standard. Now that some additional information on Vulkan has become available, it seems likely that this new API will form the basis of Valve’s SteamOS push, while Direct3D 12 remains the default option for Microsoft’s PC and Xbox gaming initiatives. At first glance, this doesn’t seem much different from the current status quo. But there are reasons to think that Vulkan and D3D12 do more than hit reset on the long-standing OpenGL vs. D3D battles of yesteryear.

    One critical distinction between the old API battles and the current situation is that no one seems to be arguing that either Vulkan or Direct3D has any critical, API-specific advantage that the other lacks. All of the features that AMD first debuted with Mantle are baked into Vulkan, and if Direct3D 12 offers any must-have capabilities, Microsoft has yet to say so. The big questions in play here have less to do with which API you feel is technically superior, and more to do with what you think the future of computer gaming should look like.



    For more than a decade, at least on the PC side, the answer to that question has been simple: It looks like Direct3D. OpenGL support may never have technically gone away, but the overwhelming majority of games for PC have shipped with Direct3D by default, and OpenGL implemented either as a secondary option or not at all. Valve's SteamOS may have arrived with a great sound and fury before fading away into Valve Time — but developers ExtremeTech spoke to say that Valve has been very active behind the scenes. A recent report at Ars Technica on the state of Linux gaming underscores this point, noting that Valve's steady commitment to offering a Linux distro has increased the size of the market and driven interest in Linux as a gaming alternative.



    Valve, moreover, doesn’t need to push SteamOS to encourage developers to use Vulkan. At the unveil last week, Valve was already showing off DOTA 2 running on Vulkan, as shown below.

    If the Source 2 engine treats Vulkan as a preferred API, or if Valve simply encourages devs to adopt it over D3D for Steam titles, it can drive API adoption without requiring developers to simultaneously support a new operating system — while simultaneously making it much easier to port to a new OS if it decides to go that route.

    It's funny, in a way, to look back at how far we've come. Steam OS was reportedly born out of Gabe Newell's anger and frustration with Microsoft Windows. Back in 2012, Newell told VentureBeat, "I think that Windows 8 is kind of a catastrophe for everybody in the PC space. I think that we're going to lose some of the top-tier PC [original equipment manufacturers]." Valve's decision to develop its own operating system was likely driven at least in part by the specter of the Windows Store, which had the power (in theory) to steal Steam's users and slash its market share. In reality, of course, this didn't happen — but then, SteamOS remains more a phantom than a shipping product. As the market turns towards Windows 10, Valve continues to have an arguably stronger hold than Microsoft over PC gaming.



    One could argue, though, that Microsoft's failure to capitalize on the Windows Store or to move PC gamers to Windows 8 merely gave Valve an extended window to get its own OS and API implementations right. Windows 10 represents the real battleground, and a fresh potential opportunity for MS to disrupt Valve's domination of PC game distribution. If you're Valve — and keep in mind that Steam is a staggering revenue generator for the company, given that Valve gets a cut of every game sold — then a rejuvenated Windows Store, with a new API and an OS given away for free, is a potential threat.
    If this seems far-fetched, consider the chain of logic. Valve knows that gaming is a key revenue source in both iOS and Android and that Microsoft, which plans to give away its Windows 10 for free to millions of qualifying customers, is going to be looking for ways to replace that revenue. The Windows Store is the most obvious choice, which also dovetails with Microsoft’s plans to unify PC and Xbox gaming as well as Windows product development. If you’re Valve, the Windows Store is still a threat.

    Valve can't force gamers to adopt SteamOS en masse, but it can at least hedge its bets by encouraging developers to optimize for an API besides Direct3D. Using Vulkan should make cross-platform games easier to develop, which in turn encourages the creation of Linux and OS X versions. The more games are supported under alternative operating systems, the easier it is (in theory) to migrate users towards those OSes, and the bigger the backstop against Direct3D and Microsoft. SteamOS might be a minor project now, but the Steam platform, as a whole, is a juggernaut. Valve's efforts to support Vulkan on Intel platforms under SteamOS are an example of how it could boost development for its own platform and improve performance across third-party GPUs.

    Since Direct3D 12 and Vulkan reportedly perform many of the same tasks and allow for the same types of fine-grained control, which one we see adopted more widely may come down to programmer familiarity and the degree to which a developer is dependent on either Microsoft's good graces or Valve's. The end results for consumers should still be vastly improved multi-threading, better power consumption, and superior multi-GPU support. But the Vulkan-versus-D3D12 question could easily become a war for the future of PC gaming and its associated revenues, depending on whether Valve and Microsoft make nice or not.


    More...

  5. #95
    member HiGame

    Potent Penguinistas: Steam for Linux crosses 1,000-game threshold


    When Valve announced that it would begin porting games to Linux as part of its SteamOS initiative, the move was greeted with skepticism in many quarters. Could Valve move the industry back towards cross-platform gaming when Windows had locked it down for so long? The answer clearly seems to be yes — the Linux side has crossed a significant milestone, with more than a thousand actual games available (including software, demos, and videos, the total stands at more than 2,000 items). Mac OS and Windows still have more games in total (1,613 for Mac and 4,814 for PC), but crossing the 1,000 mark is a significant achievement and a clear psychological milestone.

    That said, there’s a definite difference between the types of games available on Linux and those available for Windows. New releases for Linux include Cities: Skylines, and Hotline Miami 2: Wrong Number, but the vast majority of AAA titles are still Windows-centric.

    The simplest way to check this is to sort each store by price, high to low. The Linux/SteamOS store has two games at $59.95, and by the end of the first page (25 results) prices have dropped to $29.99. On the PC side there are 29 titles at $59 or above, and more than 150 titles sell for $34.99 or higher.



    We’re not suggesting that game price is an indicator of game quality, but the significant difference in game prices indicates that relatively few studios see Linux or SteamOS as a good return on investment for now. That’s not unusual or unexpected — Valve has been working with developers and game designers to change those perceptions one game and one gamer at a time. There are also early quality issues to be ironed out — when SteamOS launched, the graphical differences between the Direct3D and OpenGL versions of a title ranged from nonexistent to a clear win for the Windows platform.

    The more developers sign on to bring titles over to SteamOS, the smaller the quality gap will be, particularly if more developers move to the next-generation Vulkan API. As for the long-term chances of Valve's SteamOS gaining significant market share, I'll admit that it seems unlikely — but then, not many years ago, the very idea of gaming on a Linux box was nearly a contradiction in terms. Outside of a dedicated handful of devs and some limited compatibility from Wine, if you used Linux, you did your gaming elsewhere.

    That’s finally starting to change. And while it may not revolutionize the game industry or break Microsoft’s grip, it’s still a marked departure from the status quo of the past 15 years.

    More...

  6. #96
    member HiGame

    Sony’s firmware 2.50 will finally bring suspend and resume to the PS4


    Finally, suspend and resume is headed for the PS4. Sony officially confirmed that the long-awaited feature is coming in the next major firmware revision. Other convenience and performance tweaks will be added to firmware 2.50 (dubbed “Yukimura”) as well, so PS4 users will have an even better gaming experience going forward.

    In a blog post, Sony’s Scott McCarthy confirmed that game suspension is being added to the PS4’s repertoire. After the update, rest mode will merely pause the game instead of closing it completely. And when you exit rest mode, your game will be exactly where you left it. This feature was promised long before the PS4 ever shipped, so it’s about time that Sony finally delivers this functionality — especially since Xbox One and Vita users already enjoy suspend and resume.



    In addition, this firmware update will bring a number of important features. Interestingly, Remote Play and Share Play will be upgraded to allow for both 30fps and 60fps streaming. If your connection and game of choice can handle 60fps, you’ll no longer be hamstrung by Sony’s streaming technology. It might be less feasible over the internet or using WiFi, but I frequently use Remote Play over ethernet on my PlayStation TV. For a small (but vocal) subset of PS4 users, this is a big deal.

    Other features: Sub-accounts created for children can now be upgraded to master accounts when the child comes of age. And if you’ve linked your Facebook account to your PS4, you’ll be able to search for Facebook friends with PSN accounts. Those are relatively small changes, but it’s always nice to see Sony smoothing out the rough edges.
    If Facebook and YouTube weren’t enough, Dailymotion is now built into the video sharing functionality at the hardware level. Of course, you can always save your videos to a USB drive, and then post them wherever you please from your computer.


    More...

  7. #97
    member HiGame

    Nvidia GeForce GTX Titan X reviewed: Crushing the single-GPU market

    Today, Nvidia is launching its new, ultra-high-end luxury GPU, the GeForce GTX Titan X. This is the fourth GPU to carry the Titan brand, but only the second architecture to do so. When Nvidia launched the first Titan, it used a cut-down version of its workstation and HPC processor, the GK110, with just 14 of its 15 SMX units enabled. Later cards, like the Titan Black, added RAM and enabled the last SMX unit, while the dual-GPU Titan Z packed two Titan Black GPUs onto a single card, with mixed results.



    GM200, full fat edition

    The Titan X is based on Nvidia’s GM200 processor and ships with all 24 of its SMMs enabled (that’s the current term for Nvidia’s compute units). The chip has 3072 CUDA cores, and a whopping 12GB of GDDR5 memory. To those of you concerned about a GTX 970-style problem, rest assured: There are no bifurcated memory issues here.



    Many of the Titan X's specifications have landed as predicted. The card has a 384-bit memory bus, 192 texture units (TMUs), 96 render outputs (ROPs), a base clock of 1GHz, and a memory clock of ~1750MHz. Nvidia is also claiming that this GPU can overclock like gangbusters, with clock speeds of up to 1.4GHz on air cooling theoretically possible. We'll be covering overclocking performance in a separate piece in the very near future.
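    Those figures imply a familiar GDDR5 bandwidth number: a 384-bit bus at roughly 7Gbps per pin (GDDR5 transfers data at four times its ~1750MHz command clock) works out to about 336GB/s, as the quick check below shows.

```python
# Rough bandwidth check from the quoted Titan X specs. GDDR5 moves data at
# four times its command clock, so ~1750MHz corresponds to ~7Gbps per pin.

bus_width_bits = 384
memory_clock_mhz = 1750
effective_gbps_per_pin = memory_clock_mhz * 4 / 1000     # ~7 Gbps

bandwidth_gb_s = (bus_width_bits / 8) * effective_gbps_per_pin
print(f"Theoretical memory bandwidth: ~{bandwidth_gb_s:.0f} GB/s")   # ~336 GB/s
```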

    Unlike the first Titan, this new card doesn’t offer full-speed double-precision floating point, but it does support the same voxel global illumination (VXGI) capabilities and improved H.265 decode capabilities that have deployed in previous GTX 900 family cards.

    The first 4K single-GPU?


    One of Nvidia’s major talking points for the Titan X is that this new card is designed and intended for 4K play. The way the GPU is balanced tends to bear this out. The GTX 680, released three years ago, had just 32 render outputs, which are the units responsible for the output of finished pixels that are then drawn on-screen. The GTX 780 Ti, Kepler’s full workstation implementation, increased this to 48 ROPs.


    The GTX 980 increased this still further, to 64 ROPs, and now the Titan X has pushed it even further — all the way to 96 render outputs. The end result of all of this pixel-pushing power is that the Titan X is meant to push 4K more effectively than any single GPU before it. Whether that's "enough" for 4K will depend, to some extent, on what kind of image quality you consider acceptable.

    Competitive positioning


    If you follow the GPU market with any regularity, you’re likely aware that Nvidia has been in the driver’s seat for the past six months. AMD’s Hawaii-based R9 290 and 290X may have played merry hell with the GTX 780 family back in 2013, but Nvidia’s GTX 970 and 980 reversed that situation neatly. Given the Titan X’s price point, however, there’s only one AMD GPU that even potentially competes — the dual-GPU R9 295X2.



    The AMD R9 295X2 has a massive 500W TDP, but it's still fairly quiet thanks to its watercooling solution.

    Dual-vs-single GPU comparisons are intrinsically tricky. The doubled-up card is almost always the overall winner — it's exceptionally rare for AMD or Nvidia to have such an advantage over the other that two cards can't outpace one.
    The reason dual GPUs don’t automatically sweep such comparisons is twofold: First, not all games support more than one graphics card, which leaves the second GPU effectively sitting idle. Second, even when a game does support multiple cards, it typically takes driver optimizations to fully enable it. AMD has historically lagged behind in this department compared with Nvidia — we’ll examine how Team Red has done on this front in the next few pages, and fold the results into our overall recommendations.

    More...

  8. #98
    member HiGame

    Nvidia’s 2016 roadmap shows huge performance gains from upcoming Pascal architecture


    At Nvidia’s keynote today to kick off GTC, CEO Jen-Hsun Huang spent most of his time discussing Nvidia’s various deep learning initiatives and pushing the idea of Tegra as integral to the self-driving car. He did, however, take time to introduce a new Titan X GPU — and to discuss the future of Nvidia’s roadmap.

    When Nvidia’s next-generation GPU architecture arrives next year, codenamed Pascal, it’s going to pack a variety of performance improvements for scientific computing — though their impact on the gaming world is less clear.
    Let’s start at the beginning:



    Pascal is Nvidia’s follow-up to Maxwell, and the first desktop chip to use TSMC’s 16nmFF+ (FinFET+) process. This is the second-generation follow-up to TSMC’s first FinFET technology — the first generation is expected to be available this year, while FF+ won’t ship until sometime next year. This confirms that Nvidia chose to skip 20nm — something we predicted nearly three years ago.

    Jen-Hsun claims that Pascal will achieve over 2x the performance per watt of Maxwell in single-precision general matrix multiplication (SGEMM). But there are two caveats to this claim, as far as gamers are concerned. First, recall that improvements to performance per watt, while certainly vital and important, are not the same thing as improvements to top-line performance. The second thing to keep in mind is that boosting the card's SGEMM performance doesn't necessarily tell us much about gaming.
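    A quick, entirely hypothetical example makes the first distinction clear: if performance per watt doubles but the power envelope also shrinks, top-line throughput rises by much less than 2x. The numbers below are made up purely to illustrate the arithmetic.

```python
# Why a 2x performance-per-watt gain is not the same as 2x performance.
# All figures here are hypothetical, chosen only to illustrate the point.

maxwell_sgemm_gflops = 6100          # hypothetical SGEMM throughput
maxwell_tdp_w = 250                  # hypothetical power envelope

maxwell_perf_per_watt = maxwell_sgemm_gflops / maxwell_tdp_w

pascal_perf_per_watt = 2 * maxwell_perf_per_watt   # the claimed 2x perf/W
pascal_tdp_w = 180                                 # hypothetical lower TDP
pascal_sgemm_gflops = pascal_perf_per_watt * pascal_tdp_w

print(f"Absolute SGEMM gain: {pascal_sgemm_gflops / maxwell_sgemm_gflops:.2f}x")
# ~1.44x here -- perf/W doubled, but top-line throughput did not.
```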


    The graph above, drawn from Nvidia's own files on Fermi-based Tesla cards compared with the K20 (GK110), makes the point. While the K20X was much, much faster than Fermi, it was rarely 3x faster in actual gaming tests, as this comparison from Anandtech makes clear, despite being 3.2x faster than Fermi in SGEMM calculations.

    Pascal's next improvement will be its use of HBM, or High Bandwidth Memory. Nvidia is claiming it will offer up to 32GB of RAM per GPU at 3x the memory bandwidth. That would put Pascal at close to 1TB/s of theoretical bandwidth, depending on RAM clock — a huge leap forward for all GPUs.
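    The "close to 1TB/s" figure follows directly from the claimed multiplier — roughly three times the ~336GB/s a 384-bit GDDR5 card like the Titan X delivers today. Nvidia hasn't published HBM stack counts or clocks for Pascal, so this is just scaling the quoted numbers.

```python
# Sanity check on the "close to 1TB/s" claim: roughly 3x the Titan X's
# GDDR5 bandwidth. HBM stack counts and clocks are unannounced, so this
# only scales the quoted multiplier.

gddr5_titan_x_gb_s = 336     # 384-bit bus at ~7Gbps per pin
hbm_multiplier = 3

print(f"Implied Pascal bandwidth: ~{gddr5_titan_x_gb_s * hbm_multiplier} GB/s")  # ~1008 GB/s
```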



    Jen-Hsun’s napkin math claims Pascal will offer up to 10x Maxwell performance “in extremely rough estimates.”

    Note: Nvidia might roll out that much memory bandwidth to its consumer products, but 32GB frame buffers are unlikely to jump to the mainstream next generation. Even the most optimistic developers would be hard-pressed to use that much RAM when the majority of the market is still using GPUs with 2GB or less.

    Pascal will be the first Nvidia product to debut with variable precision capability. If this sounds familiar, it’s because AMD appears to have debuted a similar capability last year.



    It's not clear yet how Nvidia's lower-precision capabilities dovetail with AMD's, but Jen-Hsun referred to 4x the FP16 performance in mixed mode compared with standard precision (he might have been referencing single or double precision).




    Finally, Pascal will be the first Nvidia GPU to use NVLink, a custom high-bandwidth solution for Nvidia GPUs. Again, for now, NVLink is aimed at enterprise customers — last year, Jen-Hsun noted that the implementations for ARM and IBM CPUs had been finished, but that x86 chips faced non-technical issues (likely licensing problems). Nvidia could still use NVLink in a consumer dual-GPU card, however.

    Pascal seems likely to deliver a huge uptick in Nvidia’s performance and efficiency. And given that the company managed to eke the equivalent of a generation’s worth of performance out of Maxwell while sticking with 28nm, there’s no reason to think it won’t pull it off. In the scientific market, at least, Nvidia is gunning for Xeon Phi — AMD has very little presence in this space, and that seems unlikely to change. If Sunnyvale does launch a new architecture in the next few months, we could actually see some of these features debuting first on Team Red, but the fabled Fiji’s capabilities remain more rumor than fact.

    More...

  9. #99
    member HiGame

    Nintendo’s new plan for mobile — and what it means for the company’s consoles




    Yesterday, Nintendo dropped a pair of bombshells on the gaming world. First, the company announced that it had partnered with Japanese mobile game development company, DeNA (pronounced DNA), and would bring its major franchises — all of them — to mobile gaming. Second, it has begun work on a next-generation console, codenamed the Nintendo “NX.”

    Both of these announcements are huge shifts for the Japanese company, even if it took pains to emphasize that Nintendo remains committed to its first party franchises and its own game development efforts.


    Nintendo’s partnership with DeNA

    Partnering with an established company like DeNA theoretically gets Nintendo the best of both worlds. Nintendo has only barely dipped its toes into free-to-play content, while DeNA has shipped a number of games using that formula; Nintendo has no experience developing franchised titles for smartphones or tablets, whereas DeNA has plenty. But partnering with a third party gives Nintendo another potential advantage — it'll let the company effectively field-test new gaming concepts and paradigms on hardware that's at least as powerful as its own shipping systems.

    Revisiting the “console quality” graphics question


    One of the more annoying trends in mobile gaming the last few years has been the tendency of hardware companies to trumpet "console quality graphics" as a selling point of mobile hardware. Multiple manufacturers have done this, but head-to-head match-ups tend to shed harsh light on mobile promises.

    When it comes to Nintendo's consoles, however, the various mobile chips would be on much better turf. First off, there's the Nintendo 3DS. Even the "New" 3DS is a quad-core ARM11 design clocked at 268MHz with 256MB of FCRAM, 6MB of VRAM with 4MB of additional memory within the SoC, and an embedded GPU from 2005, the PICA200. Any modern smartphone SoC can absolutely slaughter that feature set, both in terms of raw GPU horsepower and supported APIs.

    What about the Wii U? Here, things get a little trickier. The Wii U is built on an older process node, but its hardware is a little stranger than we used to think. The CPU is a triple-core IBM 750CL with some modifications to the cache subsystem to improve SMP, and an unusual arrangement with 2MB of L2 on one core and 512K on the other two. The GPU, meanwhile, has been identified as being derived from AMD’s HD 4000 family, but it’s not identical to anything AMD ever shipped on the PC side of the business.



    The Wii U’s structure, with triple-core CPU

    By next year, the 16nm-to-14nm SoCs we see should be more than capable of matching the Wii U’s CPU performance, at least in tablet or set-top form factors. If I had to bet on a GPU that could match the HD 4000-era core in the Wii U, I’d put money on Nvidia’s Tegra X1, with 256 GPU cores, 16 TMUs, and 16 ROPS, plus support for LPDDR4. It should match whatever the Wii U had, and by 2016, we should see more cores capable of matching it.



    Nintendo isn't going to want to trade off perceived quality for short-term profit. The company has always been extremely protective of its franchises — ensuring mobile devices (at least on the high end) are capable of maintaining Nintendo-level quality will be key to the overall effort. At the same time, adapting those franchises to tablets and smartphones gives Nintendo a hands-on look at what kinds of games people want to play, and the ways they use modern hardware to play them.

    What impact will this have on Nintendo’s business?


    Make no mistake: I think Nintendo wants to remain in the dedicated console business, and the “NX” next-generation console tease from yesterday supports that. Waiting several years to jump into the mobile market meant that mobile SoCs had that much more time to mature and improve, and offer something closer to the experience Nintendo prizes for its titles.

    The question of whether Nintendo can balance these two equations, however, is very much open for discussion. Compared with the Wii, the Wii U has been a disaster. As this chart from VGChartz shows, aligned by month, the Wii had sold 38 million more consoles at this point in its life than the Wii U has. The chasm between the Wii and Wii U is larger than the number of Xbox Ones and PS4s sold combined.




    Without more information, it’s difficult to predict what the Nintendo NX might look like. Nintendo could have a console ready to roll in 18-24 months, which would be well within the expected lifetimes of the PlayStation 4 and Xbox One — or, of course, it could double-down on handheld gaming and build a successor to the 3DS. Either way, the next-generation system will be significantly more powerful than anything Nintendo is currently shipping.

    Pushing into mobile now gives Nintendo a way to leverage hardware more powerful than its own, and some additional freedom to experiment with game design on touch-screen hardware. But it could also signal a sea change in development focus. If the F2P model takes off and begins generating most of the company’s revenue, it’ll undoubtedly change how its handheld and console games are built — and possibly in ways that the existing player base won’t like.

    Balancing this is going to be a difficult achievement for the company — a bunch of poorly designed F2P games might still generate short-term cash, but could ruin Nintendo's reputation as a careful guardian of its own franchises. Failing to exploit the mechanics of the F2P market, on the other hand, could rob the company of capital it needs to transition to its new console.



    More...

  10. #100
    member HiGame

    AMD-backed FreeSync monitors finally shipping



    For the past few years, both AMD and Nvidia have been talking up their respective solutions for improving gaming display quality. Nvidia calls its proprietary implementation G-Sync, and has been shipping G-Sync displays with partner manufacturers for over a year, while AMD worked with VESA (the Video Electronics Standards Association) to build support for Adaptive Sync (aka FreeSync) into the DisplayPort 1.2a standard. Now, with FreeSync displays finally shipping as of today, it'll soon be possible to do a head-to-head comparison between them.

    Introducing FreeSync


    AMD first demonstrated what it calls FreeSync back in 2013. Modern Graphics Core Next (GCN) graphics cards from AMD already had the ability to control display refresh rates — that technology is baked into the embedded DisplayPort (eDP) standard. Bringing it over to the desktop, however, was a trickier proposition. It's taken a long time for monitors that support DisplayPort 1.2a to come to market, which gave Nvidia a significant time-to-market advantage. Now that FreeSync displays are finally shipping, let's review how the technology works.

    Traditional 3D displays suffer from two broad problems — stuttering and tearing. Tearing occurs when Vertical Sync (V-Sync) is disabled — the monitor draws each frame as soon as it arrives, but this can leave the screen looking fractured, as two different frames of animation are displayed simultaneously.
    Turning on V-Sync solves the tearing problem, but can lead to stuttering. Because the GPU and monitor don't communicate directly, the GPU doesn't "know" when the monitor is ready to display its next frame. If the GPU hasn't finished a new frame by the time the monitor refreshes, the previous frame is displayed again, and the result is an animation stutter, as shown below:



    FreeSync solves this problem by allowing the GPU and monitor to communicate directly with each other, and adjusting the refresh rate of the display on the fly to match what’s being shown on screen.
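    A tiny numerical model shows the difference: with fixed-interval V-Sync, any frame that misses the refresh deadline forces the previous frame to be shown again (stutter), while a variable-refresh display simply waits for the frame. The frame times below are made up purely for illustration.

```python
# Tiny model of V-Sync stutter vs. adaptive refresh. Frame times are invented;
# the point is that a fixed ~16.7ms refresh repeats the old frame whenever a
# new one isn't ready, while adaptive sync refreshes when the frame arrives.

import math

frame_times_ms = [14, 18, 15, 22, 16, 19]    # hypothetical GPU render times
refresh_ms = 1000 / 60                        # fixed 60Hz refresh interval

# With V-Sync, a frame that misses the refresh deadline causes the previous
# frame to be shown again for every interval it misses -- visible stutter.
stutters = sum(math.ceil(ft / refresh_ms) - 1 for ft in frame_times_ms)
print("Repeated frames with V-Sync:", stutters)        # 3 in this example

# With FreeSync/G-Sync the display refreshes when the frame arrives, so no
# frame is repeated (within the panel's supported refresh range).
print("Repeated frames with adaptive sync:", 0)
```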



    The result is a smoother, faster gaming experience. We haven’t had a chance to play with FreeSync yet, since the monitors have only just started shipping. But if the experience is analogous to what G-Sync offers, the end result will be gorgeous.

    As of today, multiple displays from Acer, BenQ, LG, Nixeus, Samsung, and ViewSonic are shipping with support for FreeSync (aka DisplayPort 1.2a). A graph of those displays and their various capabilities is shown below. AMD is claiming that FreeSync monitors are coming in at least $50 cheaper than their G-Sync counterparts, which we’ll verify as soon as these monitors are widely available in-market.

    Which is better — FreeSync or G-Sync?


    One of the things that’s genuinely surprised me over the past year has been how ardently fans of AMD and Nvidia have defended or attacked FreeSync and G-Sync, despite the fact that it was literally impossible to compare the two standards, because nobody had hardware yet. Now that shipping hardware does exist, we’ll be taking the hardware head-to-head.

    AMD, of course, is claiming that FreeSync has multiple advantages over G-Sync, including its status as an open standard as compared to a proprietary solution, and the fact that G-Sync can incur a small performance penalty of 3-5%. (Nvidia has previously stated the 3-5% figure; AMD's graph actually shows a much smaller performance hit.)



    AMD is claiming that FreeSync has no such issues, and again, we’ll check that once we have displays in hand.

    There’s one broader issue of “better” we can address immediately, however, which is this: Which standard is better for consumers as a whole? Right now, FreeSync is practically a closed standard, even if AMD and VESA don’t intend that to be the case. If you want the smooth frame delivery that Nvidia offers, you buy G-Sync. If you want the AMD flavor, you buy FreeSync. There’s currently no overlap between the two. To be clear, that lack of overlap only applies to the G-Sync and FreeSync technologies themselves. A FreeSync display will function perfectly as a standard monitor if you hook it up to an Nvidia GPU, and a G-Sync monitor works just fine when attached to an AMD graphics card.

    The best outcome for consumers is for AMD, Nvidia, and Intel to collectively standardize on a single specification that delivers all of these capabilities. For now, that seems unlikely. Adaptive Sync has been defined as an optional feature for both DisplayPort 1.2a and 1.3, which means manufacturers won't be forced to integrate support, and may treat the capability as a value-added luxury feature for the foreseeable future.

    How this situation evolves will depend on which standard enthusiasts and mainstream customers embrace, and whether Intel chooses to add support for DP 1.2a or stay out of the fight. For now, if you buy a display with either technology, you’re locking yourself to a corresponding GPU family.



    More...

