
Game Tech News

This is a discussion on Game Tech News within the Electronics forums, part of the Non-Related Discussion category.

      
   
  1. #101

    The uncertain future of the Xbox One


    It’s difficult to know what to make of the Xbox One. When Microsoft first debuted the console nearly two years ago, its vision of the future of gaming slammed face first into the rock wall of consumer expectations. Microsoft offered a second-generation motion tracker with voice commands and an “always on” capability — but consumers didn’t want it. The company declared that online and retail disc purchases would be treated the same, only to find that customers valued the ability to trade in games at a local store. It promised a future in which families and friends could share games out of a common library — but at the cost of offline play.

    Two years later, many of these initial failings have been fixed. But rumors that Microsoft would like to exit the Xbox business continue to swirl, prompted partly by quiet executive departures and ongoing legal issues surrounding the Xbox 360’s disc-scratching problems.

    Discussions of whether Microsoft wants to keep the Xbox One business for itself tend to devolve into arguments over whether the console is profitable (or profitable “enough”), or assume that any divestment would, by necessity, mean the end of the Xbox One as we know it. The former is inaccurate and the latter improbable. Microsoft is actually well positioned to spin the Xbox One division off to another company — Redmond has decades of experience in providing software tools that other businesses use and rely on. A spin-off might change the branding and the long-term vision, but the hardware would remain fundamentally reliant on Microsoft operating systems, APIs, and development tools, at least through the end of this generation. Integration with another major company’s core services or software products could be layered on top of the existing Xbox One OS — since the box already runs a modified version of Windows, this would be fairly simple to arrange.

    The argument for selling the Xbox One relies less on proclamations of doom and gloom and more on the question of where Satya Nadella wants to focus. Despite some departures and changes, I think Microsoft’s own roadmap for the Xbox One — and its integration with both Windows 10 and DirectX 12 — tell us most of what we need to know about the future of the platform.

    The impact of DirectX 12, Windows 10 streaming


    Microsoft made multiple high-profile announcements around the Xbox One earlier this year, when it declared that Windows 10 would have the native ability to stream Xbox One games to PCs and tablets anywhere on the home network. We’ve advocated for this kind of feature for years, and are thrilled to see it happening — game streaming is a category that Microsoft should have owned already, thanks to its huge share of the PC market. You could even argue that giving away Windows 10 is a way to further sweeten the deal, since it increases the chance that more users will upgrade.



    DirectX 12 is another interesting feature that could improve the Xbox One. At GDC, Stardock’s Brad Wardell argued that Microsoft, AMD, Nvidia, and Intel have all been lying about the benefits of DX12 because they don’t want to admit just how badly DirectX 11 was broken. Admitting the benefits of DX12 would, according to Wardell, “mean acknowledging that for the past several years, only one of your cores was talking to the GPU, and no one wants to go, ‘You know by the way, you know that multi-core CPU? It was useless for your games.’ Alright? No one wants to be that guy. People wonder, saying ‘Gosh, doesn’t it seem like PC games have stalled? I wonder why that is?’”



    CPU performance in DirectX 12

    If D3D12 offers the same performance improvements as Mantle, we’ll see it boosting gameplay in titles where the CPU, rather than the GPU, is the primary bottleneck. So far, this doesn’t appear to be the case in many games — the Sony PS4 is often somewhat faster than the Xbox One, despite the fact that the Xbox One has a higher CPU clock speed. Whether this is the result of some other programming issue is undoubtedly game-dependent, but DX12 simply doesn’t look like an automatic speed boost for the Xbox One — it’s going to depend on the game and the developer.
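
    To make the single-core submission problem concrete, here is a minimal sketch (not from any shipping engine) of the pattern DX12 is built around: several CPU threads each record their own command list, and the results are handed to the GPU in one batch. The device, queue, and pipeline state are assumed to exist already; error handling, per-frame allocator reuse, and fencing are omitted.

        // Sketch: several CPU threads record D3D12 command lists in parallel,
        // then the work is submitted in a single batch. Assumes `device`,
        // `queue`, and `pso` were created elsewhere; error handling omitted.
        #include <d3d12.h>
        #include <wrl/client.h>
        #include <thread>
        #include <vector>

        using Microsoft::WRL::ComPtr;

        void RecordAndSubmitFrame(ID3D12Device* device,
                                  ID3D12CommandQueue* queue,
                                  ID3D12PipelineState* pso,
                                  unsigned threadCount)
        {
            std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
            std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
            std::vector<std::thread>                       workers;

            for (unsigned i = 0; i < threadCount; ++i) {
                device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                               IID_PPV_ARGS(&allocators[i]));
                device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                          allocators[i].Get(), pso,
                                          IID_PPV_ARGS(&lists[i]));

                // Each worker records its own slice of the frame's draw calls.
                workers.emplace_back([&lists, i] {
                    // ... SetGraphicsRootSignature, IASetVertexBuffers, DrawInstanced ...
                    lists[i]->Close();
                });
            }
            for (auto& t : workers) t.join();

            // One submission call covers everything recorded above.
            std::vector<ID3D12CommandList*> raw;
            for (auto& l : lists) raw.push_back(l.Get());
            queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
        }

    Under DX11, the equivalent work would all funnel through a single immediate context on one thread — which is the bottleneck Wardell is describing.
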
    Taken as a whole, however, the Windows 10 integration and D3D12 work mean that two of Microsoft’s largest core businesses — its PC OS and its gaming platform — are now separated almost entirely by function rather than any kind of intrinsic capability.

    Microsoft’s last, best hope


    As a recent GamesIndustry.biz piece points out, Microsoft may be stuck with Xbox One for a very simple reason: There aren’t many companies that have both the capital and the interest in gaming to buy the segment at anything like a fair price. It’s entirely possible that Nadella would prefer to be out of gaming, but he’s not willing to defund and destroy the segment if he can’t find a buyer.

    Regardless of whether Microsoft has explored selling the unit, the company is finally taking the kinds of steps that its customers are likely to value — steps that could allow it to leverage the strengths of PC and Xbox gaming side-by-side, rather than simply walling off the two groups of customers in separate gardens. It’s not hard to see how Microsoft could eventually extend things in the other direction as well, offering PC game buyers who also own an Xbox One the ability to stream PC titles to the television. True, this would compete more closely with some of Steam’s features, but Microsoft has to be aware that a company other than itself controls the keys to PC gaming — and doubtless has ideas about how it could change that situation. The fact that it didn’t play out this generation doesn’t mean it won’t, long term.

    The Xbox One may have had one of the most disastrous debuts in the history of modern marketing, and it has a great deal of ground to make up, but Microsoft has proven willing to adapt the console to better suit the needs of its target audience. Taking the long view, it’s hard to argue that Microsoft’s system is at a greater disadvantage than the PS3 was at launch, with terrible sales, few games, and a huge price tag. The Xbox 360 led the PS3 in total sales for most of last generation, even after the RROD debacle, but in the final analysis the two platforms ended up selling virtually the same.
    If Microsoft’s gambits work, the Xbox One’s Windows 10 streaming and future cross-play opportunities could take it from also-ran to preferred-platform status.


  2. #102

    Hands on: PS4 firmware 2.50 with suspend and resume, 60fps Remote Play


    Earlier this week, Sony released PS4 firmware 2.50 dubbed “Yukimura.” There are numerous changes with this latest version, but the biggest two are definitely the addition of suspend/resume and 60fps Remote Play. Of course, we knew that this update was coming, but I wanted to know how well these features actually worked. Is the process seamless? Does the higher frame rate cause any problems?


  3. #103

    DirectX 12, LiquidVR may breathe fresh life into AMD GPUs thanks to asynchronous shading


    With DirectX 12 coming soon with Windows 10, VR technology ramping up from multiple vendors, and the Vulkan API already debuted, it’s an exceedingly interesting time to be in PC gaming. AMD’s GCN architecture is three years old at this point, but certain features baked into the chips at launch (and expanded with Hawaii in 2013) are only now coming into their own, thanks to the improvements ushered in by next-generation APIs.

    One of the critical technologies underpinning this argument is the set of Asynchronous Compute Engines (ACEs) built into every GCN-class video card. The original HD 7900 family had two ACEs per GPU, while AMD’s Hawaii-class hardware bumped that to eight.



    AMD’s Hawaii, Kaveri, and at least the PS4 have eight ACEs. The Xbox One may be limited to just two, but does retain the capability.

    AMD’s Graphics Core Next (GCN) GPUs are capable of asynchronous execution to some degree, as are Nvidia GPUs based on the GTX 900 “Maxwell” family. Previous Nvidia cards like Kepler and even the GTX Titan were not.

    What’s an Asynchronous Compute Engine?


    The ACE units inside AMD’s GCN architecture are designed for flexibility. The chart below explains the difference — instead of being forced to execute a single queue in a predetermined order, even when that order makes no sense, the GPU can schedule and complete tasks from different queues independently. This gives the GPU some limited ability to execute tasks out of order — if the GPU knows that a time-sensitive operation that only needs 10ns of compute time is in the queue alongside a long memory copy that isn’t particularly time-sensitive, but will take 100,000ns, it can pull the short task, complete it, and then run the longer operation.


    Asynchronous vs. synchronous threading

    The point of using ACEs is that they allow the GPU to process and execute multiple command streams in parallel. In DirectX 11, this capability wasn’t really accessible — the API was heavily abstracted, and multiple developers have told us that multi-threading support in DX11 was essentially broken from day one. As a result, there was no real way to tell the graphics card to handle graphics and compute in the same workload.
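
    For readers who want to see what this looks like at the API level, here is a minimal sketch (our own illustration, not AMD or Microsoft sample code) of the DX12 mechanism that maps onto the ACEs: a compute queue created alongside the usual direct (graphics) queue, with a fence at the point where the two streams must meet. The device is assumed to exist; error handling is omitted.

        // Sketch: a graphics (direct) queue plus an independent compute queue
        // in D3D12. Work on the compute queue can run alongside graphics work;
        // a fence marks the point where the two must synchronize.
        #include <d3d12.h>
        #include <wrl/client.h>

        using Microsoft::WRL::ComPtr;

        struct Queues {
            ComPtr<ID3D12CommandQueue> graphics;
            ComPtr<ID3D12CommandQueue> compute;
            ComPtr<ID3D12Fence>        fence;
            UINT64                     fenceValue = 0;
        };

        Queues CreateQueues(ID3D12Device* device)
        {
            Queues q;

            D3D12_COMMAND_QUEUE_DESC desc = {};
            desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics, copy, and compute
            device->CreateCommandQueue(&desc, IID_PPV_ARGS(&q.graphics));

            desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
            device->CreateCommandQueue(&desc, IID_PPV_ARGS(&q.compute));

            device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&q.fence));
            return q;
        }

        // Example synchronization point: the graphics queue waits for a compute
        // dispatch (say, a physics or culling pass) before consuming its output.
        void GraphicsWaitsForCompute(Queues& q)
        {
            q.compute->Signal(q.fence.Get(), ++q.fenceValue);  // compute marks completion
            q.graphics->Wait(q.fence.Get(), q.fenceValue);     // graphics stalls until then
        }

    How much real-world benefit this buys depends on how much idle compute capacity the GPU has while rendering — which is exactly the question the rest of this piece wrestles with.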



    GPU pipelines in DX11 vs. DX12

    AMD’s original GCN hardware may have debuted with just two ACEs, but AMD claims that it added six more ACE units to Hawaii as part of a forward-looking plan, knowing that the hardware would one day be useful. That’s precisely the sort of thing you’d expect a company to say, but there’s some objective evidence that Team Red is being honest. Back when GCN and Nvidia’s Kepler were going head to head, it quickly became apparent that while the two companies were neck and neck in gaming, AMD’s GCN was far more powerful than Nvidia’s GK104 and GK110 in many GPGPU workloads. The comparison was particularly lopsided in cryptocurrency mining, where AMD cards were able to shred Nvidia hardware thanks to a more powerful compute engine and hardware support for some of the integer operations used in SHA hashing.

    When AMD built Kaveri and the SoCs for the PS4 and Xbox One, it included eight ACEs in Kaveri and the PS4 chip (the Xbox One may have just two). The thinking behind that move was that adding more asynchronous compute capability would allow programmers to use the GPU’s computational horsepower more effectively. Physics and certain other types of in-game calculations, including some of the work that’s done in virtual reality simulation, can be handled in the background.



    Asynchronous shader performance in a simulated demo.

    AMD’s argument is that with DX12 (and Mantle / Vulkan), developers can finally use these engines to their full potential. In the image above, the top pipeline is the DX11 method of doing things, in which work is mostly being handled serially. The bottom image is the DX12 methodology.

    Whether programmers will take advantage of these specific AMD capabilities is an open question, but the fact that both the PS4 and Xbox One expose ACEs to developers suggests that they may. If developers are writing code to execute on GCN hardware already, moving that support over to DX12 and Windows 10 is no big deal.



    A few PS4 titles and just one PC game use asynchronous shaders now, but that could change.

    Right now, AMD has only released information on the PS4’s use of asynchronous shaders, but that doesn’t mean the Xbox One can’t use them. It’s possible that the DX12 API push that Microsoft is planning for that console will add the capability.



    AMD is also pushing ACEs as a major feature of its LiquidVR platform — a fundamental capability that it claims will give Radeon cards an edge over their Nvidia counterparts. We’ll need to see final hardware and drivers before drawing any such conclusions, of course, but the compute capabilities of the company’s cards are well established. It’s worth noting that while AMD did have an advantage in this area over Kepler, which had only one compute and one graphics pipeline, Maxwell has one graphics pipeline and 32 compute pipes, compared with AMD’s eight ACEs. Whether this affects performance in shipping titles is something we’ll only be able to answer once DX12 games that specifically use these features are in-market.

    The question, from the end-user perspective, obviously boils down to which company is going to offer better performance (or price/performance ratio) in the next-generation DX12 API. It’s far too early to make a determination on that front — recent 3DMark DirectX 12 API overhead tests put AMD’s R9 290X out in front of Nvidia’s GTX 980, while Star Swarm results from earlier this year showed the opposite.
    What is clear is that DX12 and Vulkan are reinventing 3D APIs and, by extension, game development in ways we haven’t seen in years. The new capabilities of these frameworks are set to improve everything from multi-GPU configurations to VR displays. Toss in features like 4K monitors and FreeSync / G-Sync support, and it’s an exciting time for the PC gaming industry.


  4. #104

    Microsoft targets Halo Online modders with DMCA takedown


    Nobody likes to be told they can’t have something just because they live in the wrong part of the world. Case in point — Microsoft has earned the ire of gamers across the globe for its decision to make the upcoming free-to-play Halo Online PC title available only in Russia. Modders have gotten their hands on the game, though, and are busy removing the region lock. In response, Microsoft is unleashing the lawyers.

    Halo Online is simpler than modern Halo games on the Xbox 360 and Xbox One. It’s based on a heavily modified Halo 3 engine that has been tweaked to run well on low-power PCs. That said, the gameplay videos of Halo Online look perfectly serviceable. The game is played entirely online, so it’s multiplayer only. Microsoft doesn’t plan to create any sort of campaign for Halo Online.

    People are not taking kindly to Microsoft’s decision to launch Halo Online in closed beta for Russia only, but you can guess at the reasoning. The rates of piracy in Russia are higher than in North America or Europe, but free-to-play games tend to pull in some revenue from people who would otherwise just grab all their games from BitTorrent. The low spec requirements will also expand the user base dramatically. Microsoft will likely offer players the option of buying additional equipment and accessories in the game for real money, but there are no details yet on what the cost structure will be.

    No sooner did Microsoft announce Halo Online than a leaked copy of the game showed up online. With access to the game, modders set to work getting around the region lock. It wasn’t long before a custom launcher called ElDorito (a joke based on the game’s official launcher, ElDorado) showed up on GitHub. ElDorito is intended to make the game playable everywhere, but Microsoft doesn’t want that. And this is how the lawyers came to be involved.

    The ElDorito GitHub listing was hit with a DMCA takedown notice from Microsoft’s legal team yesterday. In the document, Microsoft asserts a copyright claim to ElDorito and demands it be taken down. GitHub is obliged to comply with any DMCA notices it receives, but the ElDorito team can file a counter-notice if they choose. It’s important to note that ElDorito isn’t the actual game — it’s just a launcher created by the community. It’s still possible it uses something from the official launcher or game, though, so Microsoft’s claim could well be valid.

    The game files needed to play Halo Online are floating around online in the form of a 2.1GB ZIP file. As this is an online game, Microsoft can probably block leaked versions going forward. Still, modders aren’t going to stop developing workarounds until there’s a legitimate way to play Halo Online. Microsoft has said that any expansion of Halo Online to other markets would require changes to the experience, and it’s not focusing on that right now. Maybe someday, though.


  5. #105

    Relying on server connections is ruining video games


    As time goes on, more games are relying on online elements. And while that makes perfect sense for multiplayer games, single-player games are being impacted as well. When used sparingly, it’s no big deal, but developers and publishers seem totally willing to sacrifice the user experience for online hooks. And unsurprisingly, consumers aren’t happy with the situation. So, when are developers going to get the picture, and stop demanding online participation?

    At the end of March, 2K Sports shut down the servers for NBA 2K14. While it’s not particularly surprising to see a sports game’s multiplayer mode shut down after a year or so, the server shutdown also left many players’ single-player save files unusable. If your “career mode” save used the online hooks, it stopped working entirely. Users do have the option of going exclusively offline, but that means starting over from scratch.

    The reaction from users was incredibly harsh, and 2K Sports quickly turned the servers back on. Instead of a paltry 16-month window, 2K Sports promised 18 to 27 months of online support. It’s better than nothing, but that’s little more than a bandaid on a gaping wound. Unless the devs push out a patch to convert online saves to offline saves, players will still be left out in the cold eventually.
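
    The fix players are asking for isn’t technically exotic. The sketch below is purely illustrative (it is not 2K’s code, and every type and function name in it is hypothetical): keep a local mirror of each cloud save, refresh it whenever the servers are reachable, and fall back to the mirror rather than invalidating the career when they aren’t.

        // Hypothetical sketch of an offline fallback for server-backed saves.
        // Not 2K's code; names and file layout are invented for illustration.
        #include <fstream>
        #include <optional>
        #include <sstream>
        #include <string>

        struct SaveData { std::string blob; };

        // Stand-in for the real network call; after a shutdown it always fails.
        std::optional<SaveData> FetchFromServer(const std::string&) { return std::nullopt; }

        std::optional<SaveData> LoadLocalMirror(const std::string& slot) {
            std::ifstream f(slot + ".local");
            if (!f) return std::nullopt;
            std::ostringstream s;
            s << f.rdbuf();
            return SaveData{s.str()};
        }

        void StoreLocalMirror(const std::string& slot, const SaveData& d) {
            std::ofstream(slot + ".local") << d.blob;
        }

        SaveData LoadCareerSave(const std::string& slot) {
            if (auto remote = FetchFromServer(slot)) {
                StoreLocalMirror(slot, *remote);     // refresh the offline copy while we can
                return *remote;
            }
            if (auto local = LoadLocalMirror(slot))  // servers gone: fall back rather than
                return *local;                       // rendering the save unusable
            return SaveData{};                       // nothing anywhere: start a new career
        }
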



    Of course, this problem goes well beyond sports titles. Infamously, SimCity and Diablo III both required Internet connections at launch. After serious performance issues and consumer outcry, offline modes were later patched into both of those titles. Even after these massive failures of the always-on ideology, it seems developers and publishers are still willing to inconvenience the players to push online functionality.

    Now, not all online connectivity is bad. For example, I think BioWare’s Dragon Age Keep is a very clever solution to the save game problem. Despite my enthusiasm, it’s still a flawed implementation. If your Internet connection is down or the Keep servers are offline, you can’t customize your world state at all. At some point, there was talk about side-loading world states over USB, but that feature never materialized. And when EA pulls the plug on the Keep someday, the game will be significantly worse off.

    Obviously, there are benefits for everyone when online functionality is included as an optional feature. But locking major single-player functionality behind an online gate almost always ends in heartbreak. What we need is a balance between online and offline, but the industry continues to fumble on this issue. How much consumer outrage is it going to take before the developers and publishers wise up?


  6. #106

    The EFF, ESA go to war over abandonware, multiplayer gaming rights

    One of the murky areas of US copyright law where user rights, corporate ownership, and the modern digital age sometimes collide is the question of abandonware. The term refers to software for which support is no longer available and covers a broad range of circumstances — in some cases, the original company no longer exists, and the rights to the product may or may not have been acquired by another developer, who may or may not intend to do anything with them.

    The EFF (Electronic Frontier Foundation) has filed a request with the Library of Congress, asking that body to approve an exception to the DMCA (Digital Millennium Copyright Act) that would allow “users who wish to modify lawfully acquired copies of computer programs” to do so in order to guarantee they can continue to access the programs — and the Entertainment Software Association, or ESA, has filed a counter-comment strongly opposing such a measure.
    Let’s unpack that a bit.

    The Digital Millennium Copyright Act, or DMCA, is a copyright law in the United States. One of the areas of law that it deals with is specifying what kinds of access end-users and owners of both software and some computer hardware are legally entitled to. One of its most controversial passages establishes that end users have no right to break or bypass any form of copy protection or security without the permission of the rights holder, regardless of how effective that protection actually is. In other words, a company that encrypts a product so poorly that a hacker can bypass it in seconds can still sue the individual for disclosing how broken their system actually is.

    The Librarian of Congress has the authority to issue exemptions to this policy, with the instruction that they do so when evidence demonstrates that access-control technology has had an adverse impact on those seeking to make lawful use of protected works. Exemptions expire after three years, and must be resubmitted at that time (this caused problems when the 2010 exemption authorizing the jailbreaking of cell phones was not renewed in 2013).



    The GameSpy multiplayer service shutdown flung many titles into legal limbo and uncertain futures

    The EFF has requested that the Library of Congress allow the legal owners of video games that require authentication checks with a server, including owners who wish to continue playing multiplayer, to remove such restrictions and operate third-party matchmaking servers in the event that the original publisher ceases to exist or stops supporting the title. The request covers both single-player and multiplayer games, and defines “no longer supported by the developer” as follows: “We mean the developer and its authorized agents have ceased to operate authentication or matchmaking servers.”

    This is a significant problem across gaming, and the shift to online and cloud-based content has only made the problem worse. The EFF does not propose that this rule apply to MMOs like EVE Online or World of Warcraft, but rather to games with a distinct multiplayer component that were never designed to function as persistent worlds. To support its claim that this is an ongoing issue, the EFF notes that EA typically supports online play for its sports titles for only 1.5 to 2 years, and that more than 150 games lost online support in 2014 across the entire industry.

    The ESA staunchly opposes


    The ESA — which was on record as supporting SOPA — has filed a joint comment with the MPAA and RIAA, strongly opposing such measures. In its filing, the ESA argues the following:

    • The EFF’s request should be rejected out of hand because “circumvention related to videogame consoles inevitably increases piracy.”
    • Servers aren’t required for single-player gameplay. In this alternate universe, Diablo III, Starcraft 2, and SimCity don’t exist.
    • Video game developers charge separate fees for multiplayer, which means consumers who buy games for multiplayer aren’t harmed when multiplayer is cancelled. Given that the number of games that don’t charge for multiplayer vastly exceeds those that do, I’m not sure who proofed this point.
    • The purpose of such practices is to create the same experience as the multiplayer once provided, which means that it can’t be derivative, which means there are no grounds to allow anyone to play a multiplayer game after the initial provider has decided to stop supporting the server infrastructure.

    This line of thinking exemplifies the corporation-uber-alles attitude that pervades digital rights policy in the 21st century. The EFF is not asking the Library of Congress to force companies to support an unprofitable infrastructure. It’s not asking the Library to compel the release of source code, or to bless such third-party efforts, or to require developers to support the development of a replacement matchmaking service in any way.

    The EFF’s sole argument is this: Users who legally paid for a piece of software should have the right to try to create a replacement server infrastructure in the event that the company who operated the official version quits. The ESA, in contradiction, argues that multiplayer function over the Internet “is not a ‘core’ functionality of the video game, and permitting circumvention to access such functionality would provide the user greater benefits than those bargained and paid for. Under these facts, consumers are not facing any likely, substantial, adverse effects on the ability to play the games they have purchased.”



    The disastrous debut of SimCity demonstrated everything wrong with always-online play.

    Try to wrap your head around that one, if you can. If you bought a game to play multiplayer, and EA disables the multiplayer function, the ESA argues that this does not adversely impact your ability to play the game, despite arguing in the same sentence that restoring your ability to play the game’s multiplayer would confer upon you greater benefit than you initially paid for.

    It is literally impossible to simultaneously receive greater benefit than you paid for if functionality is restored while losing nothing if that functionality is denied.

    The real reasoning is a great deal simpler: If you’re happy playing an old game, you might be less likely to buy a new one — and in an industry that’s fallen prey to kicking out sequels on a yearly basis, regardless of the impact on game quality, the phantom of lost sales is too frightening to ignore.


  7. #107

    Intel’s erratic Core M performance leaves an opening for AMD



    When Intel announced its 14nm Core M processor, it declared that this would be the chip that eliminated consumer perceptions of an x86 “tax” once and for all. Broadwell, it was said, would bring big-core x86 performance down to the same fanless, thin-and-light form factors that Android tablets used, while simultaneously offering performance no Android tablet could match. It was puzzling, then, to observe that some of the first Core M-equipped laptops, including Lenovo’s Yoga 3 Pro, didn’t review well, and were dinged for performance ranging from pokey to downright sluggish.

    A new report from Anandtech delves into why this is, and comes away with some sobering conclusions. Ever since Intel built Turbo Mode into its processors, enthusiasts have known that “Turbo” speeds were best-case estimates, not guarantees. If you think about it, the entire concept of Turbo Mode was a brilliant marketing move. Instead of absolutely guaranteeing that a chip will reach a certain speed at a given temperature or power consumption level, simply establish that frequency range as a “maybe” and push the issue off on OEMs or enthusiasts to deal with. It helped a great deal that Intel set its initial clocks quite conservatively. Everyone got used to Turbo Mode effectively functioning as the top-end frequency, with the understanding that frequency stair-stepped down somewhat as the number of threads increased.

    Despite these qualifying factors, users have generally been able to expect that a CPU in a Dell laptop will perform identically to that same CPU in an HP laptop. These assumptions aren’t trivial — they’re actually critical to reviewing hardware and to buying it.
    The Core M offered OEMs more flexibility in building laptops than ever before, including the ability to detect the skin temperature of the SoC and adjust performance accordingly. But those tradeoffs have created distinctly different performance profiles for devices that should be nearly identical to one another. In many tests, the Intel Core M 5Y10 — a chip with an 800MHz base frequency and a 2GHz top clock — is faster than a Core M 5Y71 with a base frequency of 1.2GHz and a max turbo frequency of 2.9GHz. In several cases, the gaps in both CPU and GPU workloads are quite significant — and favor the slower processor.
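
    A toy model helps explain how a 2.9GHz-rated part can lose to a 2.0GHz one. This is not Intel’s actual algorithm, and every number below is invented; it only illustrates that the sustained clock is the lower of the chip’s rated turbo and whatever frequency the chassis can actually cool.

        // Toy model (not Intel's algorithm; numbers invented): sustained clock is
        // capped both by the rated turbo and by the frequency the chassis can cool.
        #include <algorithm>
        #include <cstdio>

        // Hypothetical: frequency (GHz) a cooling solution can sustain for a given
        // long-term power limit (W), from a made-up linear fit.
        double SustainableClock(double powerLimitWatts) {
            return 0.4 + 0.45 * powerLimitWatts;      // e.g. 3.5 W -> ~2.0 GHz
        }

        double EffectiveClock(double ratedTurboGHz, double powerLimitWatts) {
            return std::min(ratedTurboGHz, SustainableClock(powerLimitWatts));
        }

        int main() {
            // A thin chassis that throttles at 3.0 W vs. a roomier design at 4.5 W.
            std::printf("5Y71-class (2.9 GHz turbo), thin chassis : %.2f GHz\n",
                        EffectiveClock(2.9, 3.0));
            std::printf("5Y10-class (2.0 GHz turbo), roomy chassis: %.2f GHz\n",
                        EffectiveClock(2.0, 4.5));
        }

    In this toy case the nominally slower chip wins simply because its enclosure lets it hold a higher clock — which is essentially what the review data shows.
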




    While this issue is firmly in the hands of OEMs and doesn’t reflect a problem with Core M as such, it definitely complicates the CPU buying process. The gap between two different laptops configured with a Core M 5Y71 reached as high as 12%, but the gap between the 5Y10 and the 5Y71 was as high as 36% in DOTA 2. The first figure is larger than we like, while the second is ridiculous.
    None of this means that the Core M is a bad processor as such. But it’s clear that its operation and suitability for any given task is far more temperamental than has historically been the case. Even a 12% difference between two different OEMs is high for our taste — if you can’t trust that the CPU you buy is the same as the core you’d get from a different manufacturer, you can’t trust much about the system.

    Is this an opportunity for AMD’s Carrizo?


    Officially, AMD’s Carrizo and Intel’s Core M shouldn’t end up fighting over the same space; the Core M is intended for systems that draw up to 6W of power, and Carrizo’s lowest known power envelope is a 12W TDP. That doesn’t mean, however, that AMD can’t wring some marketing and PR margins out of the Core M’s OEM-dependent performance.



    When AMD talked about Carrizo at ISSCC, it didn’t just emphasize new features like skin-temperature monitoring; it also discussed how each chip would use Adaptive Voltage and Frequency Scaling (AVFS), as opposed to conventional Dynamic Voltage and Frequency Scaling (DVFS). AVFS allows for much finer-grained power management across the entire die — it requires incorporating more control and logic circuitry, but it can give better power savings and higher frequency headroom as a result.
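
    As a rough illustration of the difference (our own toy numbers, not AMD’s figures): classic DVFS picks a voltage from a fixed table that carries worst-case guard-band margin, while AVFS lets on-die sensors trim the voltage down to what that particular die needs at that moment.

        // Toy contrast between DVFS and AVFS; all numbers are invented.
        #include <cstdio>

        struct PState { double freqGHz; double tableVolts; };

        // Hypothetical margin reported by on-die speed/voltage sensors for this die.
        double MeasuredMarginVolts(double /*freqGHz*/) { return 0.05; }

        // DVFS: fixed table voltage with built-in guard band.
        double DvfsVoltage(const PState& p) { return p.tableVolts; }

        // AVFS: same table, minus whatever margin the sensors say is unnecessary.
        double AvfsVoltage(const PState& p) { return p.tableVolts - MeasuredMarginVolts(p.freqGHz); }

        int main() {
            PState p{3.0, 1.10};                     // invented 3.0 GHz state at 1.10 V
            double dvfs = DvfsVoltage(p), avfs = AvfsVoltage(p);
            // Dynamic power scales roughly with V^2 at a fixed frequency.
            std::printf("AVFS runs at %.2f V vs. %.2f V, saving roughly %.0f%% power\n",
                        avfs, dvfs, 100.0 * (1.0 - (avfs * avfs) / (dvfs * dvfs)));
        }
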

    If AVFS offers OEMs more consistent performance and better characteristics (albeit in a higher overall power envelope), AMD may have a marketing opportunity to work with — assuming, of course, that it can ship Carrizo in the near future and that the chip is competitive in lower power bands to start with. While that’s a tall order, it’s not quite as tall as it might seem — AMD’s Kaveri competed more effectively against Intel at lower power than in higher-power desktop form factors.

    Leaving AMD out of the picture, having seen both the Core M and the new Core i5-based Broadwells, I’d have to take a newer Core i5, hands down. Core M may allow for an unprecedented level of thinness, but the loss of ports, performance, and battery life doesn’t outweigh the achievement of stuffing an x86 core into a form factor this small — at least, not for me. Feel differently? Sound off below.


  8. #108

    Leaked details, if true, point to potent AMD Zen CPU

    For more than a year, information on AMD’s next-generation CPU architecture, codenamed Zen, has tantalized the company’s fans — and those who simply want a more effective competitor against Intel. Now, the first concrete details have begun to appear. And if they’re accurate, the next-generation chip could pack a wallop.

    Bear in mind, this is a single leaked slide of the highest-end part. Not only could details change dramatically between now and launch, but the slide itself might not be accurate. Let’s take a look:



    According to Fudzilla, the new CPU will offer up to 16 Zen cores, with each core supporting up to two threads for a total of 32 threads. We’ve heard rumors that this new core uses Simultaneous Multithreading, as opposed to the Clustered Multi-Threading that AMD debuted in the Bulldozer family and has used for the last four years.

    Each CPU core is backed by 512K of L2 cache, with 32MB of L3 cache across the entire chip. Interestingly, the L3 cache is shown as discrete 8MB blocks rather than a unified design. This suggests that Zen inherits its L3 structure from Bulldozer, which used a similar approach — though hopefully the cache has been overhauled for improved performance. The integrated GPU also supposedly offers double-precision floating point at 1/2 single-precision speed.

    Supposedly the core supports up to 16GB of attached HBM (High Bandwidth Memory) at 512GB/s, plus a quad-channel DDR4 controller with built-in DDR4-3200 capability, 64 lanes of PCIe 3.0, and SATA Express support.

    Too good to be true?


    The CPU layout shown above makes a lot of sense. We’re clearly looking at a modular part, and AMD has defined one Zen “module” as consisting of four CPU cores, eight threads, 2MB of L2, and an undoubtedly-optional L3 cache. But it’s the HBM interface, quad-channel DDR4, and 64 lanes of PCIe 3.0 that raise my eyebrows.

    Here’s why: Right now, the highest-end servers you can buy from Intel pack just 32 PCI-Express lanes. Quad-channel DDR4 is certainly available, but again, Intel’s high-end servers support 4x DDR4-2133. Server memory standards typically lag behind desktops by a fair margin. It’s not clear when ECC DDR4-3200 will be ready for prime time. That’s before we get to the HBM figures.
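
    Some quick back-of-the-envelope math (ours, not from the slide) shows why the HBM figure stands out next to the DDR4 controller it would sit beside:

        \begin{aligned}
        \text{Quad-channel DDR4-3200:}\quad & 4 \times 3200\,\text{MT/s} \times 8\,\text{B} = 102.4\ \text{GB/s} \\
        \text{Claimed HBM interface:}\quad  & 512\ \text{GB/s} \approx 5\times \text{the DDR4 figure}
        \end{aligned}
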

    Make no mistake, HBM is coming, and integrating it on the desktop and in servers would make a huge difference — but 16GB of HBM memory is a lot. Furthermore, building a 512GB/s memory interface into a server processor at the chip level is another eyebrow-arching achievement. For all the potential of HBM (and it has a lot of potential), that’s an extremely ambitious target for a CPU that’s supposed to debut in 12 to 18 months, even in the server space.

    Nothing in this slide is impossible, and if AMD actually pulled it off while hitting its needed IPC and power consumption targets, it would have an absolutely mammoth core. But the figures on this slide are so ambitious, it looks as though someone took a chart of all the most optimistic predictions that’ve been made about the computing market in 2016, slapped them together on one deck, and called it good.
    I’ll be genuinely surprised if AMD debuts a 16-core chip with a massive integrated graphics processor, and 16GB of HBM memory, and 64 lanes of PCI-Express, and a revamped CPU core, and a new quad-channel DDR4 memory controller, and a TDP that doesn’t crack 200W for a socketed processor.
    But hey — you never know.

  9. #109

    US government blocks Intel, Chinese collaboration over nuclear fears, national security


    A major planned upgrade to the most powerful supercomputer in the world, the Chinese-built Tianhe-2, has been effectively canceled by the US government. The original plan was to boost its capability, currently at ~57 petaflops (PFLOPS), up to 110 PFLOPS using faster Xeon processors and Xeon Phi add-in boards. The Department of Commerce has now scuttled that plan. Under US law, the DoC is required to regulate sales of certain products if it has information that indicates the hardware will be used in a manner “contrary to the national security or foreign policy interests of the United States.”



    According to the report, the license agreement granted to China’s National University of Defense Technology was provisional and subject to case-by-case agreement. Intel, in other words, never had free rein to sell hardware in China. Its ability to do so was dependent on what the equipment was used for. The phrase “nuclear explosive activities” is defined as including: “Research on or development, design, manufacture, construction, testing, or maintenance of any nuclear explosive device, or components or subsystems of such a device.”

    In order to levy such a classification, the government is required to certify that it has “more than positive knowledge” that the new equipment would be used in such activities. But the exact details remain secret for obvious reasons. For now, the Tianhe-2 will remain stuck at its existing technology level. Intel, meanwhile, will still have a use for those Xeon Phis — the company just signed an agreement with the US government to deliver the hardware for two supercomputers in 2016 and 2018.

    Implications and politics


    There are several ways to read this new classification. On the one hand, it’s unlikely to harm Intel’s finances — Intel had sold hardware to the Tianhe-2 project at a steep discount according to many sources, and such wins are often valued for their prestige and PR rather than their profitability. The restriction also won’t open the door for another company to step in, even if such a substitution were possible — it’s incredibly unlikely that IBM, Nvidia, or AMD could offer an alternate solution.

    It’s also possible that this classification is a way of subtly raising pressure on the Chinese with regard to political matters in the South China Sea. China has been pumping sand onto coral atolls in the area, in an attempt to boost its territorial claim to the Spratly Islands. The Spratly Islands are claimed by a number of countries, including China, which has argued that its borders and territorial sovereignty should extend across the area. Other nations, including the Philippines, Brunei, Vietnam, and the US, have taken a dim view of this. Refusing to sell China the parts to upgrade its supercomputer could be a not-so-subtle message about the potential impact of Chinese aggression in this area.



    China’s Loongson processor. The latest version is an eight-core chip built on 28nm.

    Restricting China’s ability to buy high-end x86 hardware could lead the country to invest more heavily in building its own CPU architectures and in partnering with alternative vendors. But this was almost certainly going to happen no matter what. China is ambitious and anxious to create its own homegrown OS and CPU alternatives. The Loongson CPU has continued to evolve over the last few years, and is reportedly capable of executing x86 code at up to 70% of the performance of native binaries thanks to hardware-assisted emulation. Tests on the older Loongson 2F core showed that it lagged ARM and Intel in power efficiency, but the current 3B chip is several generations more advanced. These events might spur China to invest even more heavily in that effort, even though the chip was under development long before these issues arose.

  10. #110

    New Samsung 840 Evo firmware will add ‘periodic refresh’ capability


    When Samsung shipped the 840 Evo, it seemed as though the drive struck a perfect balance between affordability and high-speed performance. Those impressions soured somewhat after it became clear that many 840 Evos suffered performance degradation when accessing older data. Samsung released a fix last year that was supposed to solve the problem for good, but a subset of users has begun experiencing issues again. Earlier this year, the company announced that a second fix was in the works.

    Tech Report now has some details on how the company’s second attempt to repair the problem will work. Apparently, the upcoming firmware will add a “periodic refresh” function to the drive. When the drive detects that data stored on it has reached a certain age, it will rewrite that data in the background. This fits with what we heard back when the problem was first uncovered — some users were able to solve it by copying the data to a different part of the drive.
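
    Conceptually, the mechanism Tech Report describes looks something like the sketch below. To be clear, this is not Samsung’s firmware; the age threshold, the idle check, and every name here are invented to illustrate the idea of an age-triggered background rewrite.

        // Conceptual sketch of a "periodic refresh" pass. Not Samsung's firmware;
        // the threshold and idle check are invented for illustration.
        #include <chrono>
        #include <cstdint>
        #include <vector>

        using Clock = std::chrono::system_clock;

        struct Block {
            uint32_t          id;
            Clock::time_point lastProgrammed;   // when these cells were last written
            bool              hasValidData;
        };

        // Stand-in: real firmware would check the host command queue instead.
        bool DriveIsIdle() { return true; }

        // Read the block and re-program it, which resets cell voltages (and its age).
        void RewriteInPlace(Block& b) { b.lastProgrammed = Clock::now(); }

        // Walk the block list and re-program anything older than the threshold,
        // but only while the host isn't issuing I/O.
        void PeriodicRefresh(std::vector<Block>& blocks,
                             std::chrono::hours maxAge = std::chrono::hours(24 * 60)) {
            const auto now = Clock::now();
            for (auto& b : blocks) {
                if (!DriveIsIdle())
                    return;                     // defer to host traffic; resume next idle window
                if (b.hasValidData && now - b.lastProgrammed > maxAge)
                    RewriteInPlace(b);          // old data gets rewritten in the background
            }
        }
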



    The original problem with the 840 Evo was traced to shifting cell voltage levels. The drive controller expects cell voltages to operate within a specific range. As the NAND flash aged without being refreshed, those voltage levels passed outside their original programmed tolerances, and the SSD had trouble reading data from the affected sectors. The last firmware solution that Samsung released was supposed to solve the problem by reprogramming the range of values that the NAND management algorithms expected and could tolerate.

    This solution seems to be of a different order. Instead of patching the problem directly by addressing the corner cases, Samsung is adding a refresh feature to prevent the situation that causes the issue in the first place. While this may be the smarter way of fixing whatever is throwing off the results, it does raise some questions: Does Samsung’s TLC NAND have a long-term problem with data retention — and will this new approach hurt drive longevity?

    The good news, at least on the longevity front, is that even TLC-based drives have proven capable of absorbing hundreds of terabytes of writes, well above their rated endurance. Rewriting a relatively small portion of the drive’s total capacity every few months shouldn’t have a meaningful impact on lifespan. Samsung does note, however, that if you leave the drive powered off for months at a time, you may need to run its Samsung Magician software — the refresh algorithm is designed to run when the system is idle and can’t operate if the machine is powered off.
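
    Rough numbers bear that out. If we assume, purely for the sake of argument, 200GB of static data refreshed every two months (Samsung hasn’t published the actual interval or volume), the extra wear is a rounding error next to the endurance figures above:

        \begin{aligned}
        \text{Extra writes:}\quad & 200\ \text{GB} \times 6\ \text{refreshes/yr} = 1.2\ \text{TB/yr} \\
        \text{Against several hundred TB of endurance:}\quad & \frac{1.2\ \text{TB/yr}}{300\ \text{TB}} \approx 0.4\%\ \text{of the budget per year}
        \end{aligned}
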

    It’s not clear what the future of TLC NAND is at this point. Samsung has introduced the TLC-backed 850 Evo, but that drive’s 3D V-NAND is built on a roughly 40nm-class process, and larger (older) process geometries are actually better for NAND flash when it comes to reliability and longevity metrics, which may keep this problem from appearing in future products. To date, very few manufacturers have pushed TLC NAND to the smallest 2D (planar) geometries — it may simply not be worth it for most products.

