
Game Tech News

This is a discussion on Game Tech News within the Electronics forums, part of the Non-Related Discussion category.

  1. #121
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    800

    Microsoft’s Xbox revenues fall, shipments still lag Sony


    Microsoft’s Xbox division took a significant hit in Q1 2015 after holiday price cuts drove unit shipments upward but cut the revenue Redmond earned on each system. The company reports that Xbox division revenues fell by $306 million, or 24%, compared with the first quarter of 2014. Microsoft chalks up 20 percentage points of that decline to price cuts, with the remaining four points due to currency adjustments and a strong US dollar.

    Ars Technica has analyzed the likely split between the Xbox One and Xbox 360, since Microsoft no longer breaks out sales between the two devices, and calculated that the Xbox One likely moved between 1.12 and 1.34 million units over the past three months, roughly in line with the 1.2 million units it moved last year. While flat sales in a seasonally off-peak quarter aren’t bad by any means, they still put the Xbox division well behind Sony, which sold 1.7 million consoles in just January and February.



    In other words, if Sony maintained the same monthly pace through March and Microsoft sold its estimated 1.2 million, the PS4 still outsold the Xbox One by roughly two to one.
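    A quick check makes the math explicit. Sony’s March figure below is an extrapolation from its January and February pace, not a reported number:

```python
# Extrapolating Sony's quarter from the two months it reported, then
# comparing against Ars Technica's midpoint estimate for the Xbox One.

sony_jan_feb = 1.7e6                  # PS4s sold in January and February
sony_q1_est  = sony_jan_feb / 2 * 3   # assume March matched that monthly pace
xbox_q1_est  = 1.2e6                  # Ars Technica's Xbox One estimate

print(sony_q1_est / 1e6)              # ~2.55 million PS4s for the quarter
print(sony_q1_est / xbox_q1_est)      # ~2.1x the Xbox One's estimated sales
```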

    The Xbox hasn’t failed — but is this success?


    Let’s get something out of the way up front: I am not a console fanboy. In my adult life I’ve owned a Wii, a GameCube, and a Sony PS2. I’m a PC gamer first and foremost, which means I don’t have a personal stake in this fight.

    It’s become common for stories like this to quickly turn into mud-slinging festivals between Xbox fanboys convinced Sony is a year or two from bankruptcy and Sony fans who apparently believe the Xbox One packs the game performance of a Commodore 64. The truth is that Microsoft isn’t in an objectively bad position here — the PS3 was in far worse shape against the Xbox 360 at this point in its life cycle than the Xbox One is against the PS4. For one thing, Microsoft isn’t losing hundreds of dollars per unit.



    This graph of the PS3’s life cycle through 2011 shows just how much of a blood bath it was for Sony. Microsoft is far ahead of the curve on this front.

    Microsoft’s biggest problem with the Xbox One, in my opinion, has nothing to do with the fact that it’s somewhat less powerful than the PS4. Developers have demonstrated that they can work within the limits of the platform, and while the PS4 often has an edge in graphics or smoothness, it’s not enormous. The bigger problem for Microsoft is that it clearly had an entire vision of the Xbox One that was written around Kinect and the concept of the Kinected (pun intended) home.
    Sources have told us that Microsoft was internally stunned at the reaction to its Kinect unveiling in 2013, having never anticipated the blowback it got from the press. The problem was simple: Microsoft didn’t unveil a cohesive gaming strategy to show how titles would use Kinect to deliver cutting-edge gameplay that no other company could match. As a result, readers looked for other reasons why MS would deploy an always-on camera and found them in spades. Edward Snowden’s disclosures, which demonstrated that Microsoft was already obligated to share data with the NSA, effectively killed Kinect.
    Microsoft, to its credit, walked away from its own plans for the Xbox One and pivoted the system to match Sony’s “It plays games” philosophy. It has fought back with price cuts and added system features. But without a really killer title — the Halo collection that might have filled that role has proven to be a buggy, half-finished wreck — the console is languishing in Sony’s shadow.

    The next big chance for Microsoft to change its standing will be later this year, when Halo 5 ships. A great Halo could reinvigorate Microsoft’s entire gaming division, just as the original Halo drove sales of the first Xbox and Halo 3 did for the Xbox 360. Until then, or unless Microsoft articulates a new philosophy for why consumers should buy Xboxes instead of PS4s, the console seems stuck in second place.

    More...

  2. #122
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    800

    Former Kickstarter darling Ouya is up for sale: Report


    A “confidential” memo from Ouya CEO Julie Uhrman has leaked to Fortune explaining how dire the situation really is for the company that Kickstarter built. It seems that the investors want their money, and they want it now. “Given our debtholder’s timeline,” Uhrman writes, “the process will be quick.” In fact, she wants interested buyers lined up by the end of this month. If all of this is true, we should expect to see major details surface in the coming weeks.

    Back in September of last year, we discussed the possibility of an Ouya buyout after sales continued to flatline. At the time, Recode was reporting that Ouya was shopping itself around to potential buyers in China and the US. We hadn’t heard much since then, until this most recent report. Now that investors are looking for their cash on the double, the situation seems even worse than it was before.



    While there was loads of hype surrounding the $8.6 million Kickstarter campaign, our very own Joel Hruska referred to the project as “doomed” before the funding was even completed. Since then, the console company has raised well over $25 million in addition to the initial Kickstarter money. Unfortunately for Uhrman and her team, those debts need to be paid back now. Unless there is some kind of miracle, this is the end of the road for the Ouya.

    While some buyers might be interested in grabbing the team and core tech, what value does the Ouya brand name bring to anything? The stink of failure is strong, and wise companies shouldn’t want anything to do with the moniker. Amazon is investing heavily in its Android-based set-top box, and rumor has it that Apple is readying a brand new Apple TV with a third-party SDK. Some underpowered off-brand Android microconsole can’t compete at that level — let alone compete against the Xbox One and PS4.

    Of course, last year’s Ouya-Xiaomi partnership shines a light on what Ouya could become. Perhaps the Ouya team could shift to exclusively offering low-cost Android games on smart TVs. Maybe there’s money to be made in the pay-per-play hotel room gaming market. Whatever the case, the Ouya we all have come to know and begrudgingly accept doesn’t have much of a future at this point. After all, we could just spend a little more, and get a real PC instead of some aging Frankenstein’s monster.


    More...

  3. #123
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    800

    New leak hints at AMD Zen’s architecture, organization


    AMD’s Analyst Day is set for next week and it now appears certain that the company will spend at least some time discussing its Zen architecture. This new slide is designed in AMD’s colors and style — if it’s a fake, it’s a good one. In this case, however, I don’t think it’s fake. The architectural details make sense and are reasonable based on what we might expect AMD to build.




    Let’s start by comparing and contrasting Zen with Excavator. If these slides are accurate, what we’ve got here is a CPU core that eschews the shared resource model that characterized the entire Bulldozer family. Note that the Excavator diagram shows a separate arrow for the two integer pipeline blocks, reflecting the fact that Steamroller added dual decoder blocks to the CPU family. The floating-point scheduler can be addressed by either block.

    Zen, in contrast, has a unified decode block for both its integer and its floating-point schedulers. The dual arrows, in this case, likely signify Simultaneous Multi-Threading — what Intel calls Hyper-Threading. An SMT design would allow AMD to execute components of two different threads simultaneously.

    As for the integer pipelines, we need to know more before we can say anything. The Bulldozer family’s four lanes per core may look impressive, but two of those lanes are Address Generation Units, or AGUs. They aren’t used for executing integer workloads, but for calculating the addresses the CPU uses to access main memory. In other words, simply showing us six pipelines doesn’t tell us what the pipelines do, or how efficient they are. AMD’s older K10 architecture had six integer pipelines, with three ALUs and three AGUs per core.

    We know a bit more about the FPU — it supports 256-bit registers, which puts it on par with Haswell, at least as far as register sizes go. Interestingly, this was a feature previously forecast for Excavator. That CPU apparently packs AVX2 support, but it may have kept 128-bit register sizes.

    Core architecture, cache design


    A second leaked slide puts Zen in more context.



    Last week, I said that the leaked Zen slide might be inaccurate — it looked more like a wish list than a functional processor. This is much more along the lines of what I’d expect AMD to be building. We see four Zen cores per “unit” (AMD is dropping the module terminology) with an L3 cache for every group of cores. AMD retains the ability to build multi-core systems using Multi-Chip Modules (MCMs), which means it can physically link groups of four Zen cores. It may also build these cores using a so-called “native” interface, with 8-16 cores per die — that’s not something we’re privy to at this point.

    The slide notes that AMD may be using a “fully inclusive” cache design for high performance and lower latency. This deserves a bit of additional explanation. There are (generally speaking) three types of cache design — strictly inclusive (data in the L1 is always stored in the L2 cache), strictly exclusive (data is either stored in the L1 or L2 caches, but never in both), and mainly inclusive, where data stored in L1 can be evicted from L2, but typically isn’t.

    Before Bulldozer, AMD historically used an exclusive cache design, which made certain memory operations slower but offered more effective capacity, since data isn’t duplicated between the L1 and L2. This made particular sense in the K7 days, when the chip’s large L1 cache (128KB) would have been extremely expensive to duplicate in L2. Inclusive cache designs typically have lower latency for certain operations and can simplify coherency checks.
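    To make the trade-off concrete, here’s a toy sketch of the two strict policies in Python. The tiny capacities and arbitrary victim selection are simplifications for illustration, not a model of any actual AMD design:

```python
# Toy two-level cache contrasting strictly inclusive and strictly
# exclusive fill policies. Capacities and eviction are deliberately crude.

class TwoLevelCache:
    def __init__(self, l1_lines, l2_lines, inclusive=True):
        self.l1, self.l2 = set(), set()
        self.l1_lines, self.l2_lines = l1_lines, l2_lines
        self.inclusive = inclusive

    def access(self, line):
        if line in self.l1:
            return "L1 hit"
        if line in self.l2:
            if not self.inclusive:
                self.l2.discard(line)        # exclusive: move, never duplicate
            self._fill_l1(line)
            return "L2 hit"
        if self.inclusive:
            self._fill(self.l2, self.l2_lines, line)  # inclusive: copy lives in both
        self._fill_l1(line)
        return "miss"

    def _fill_l1(self, line):
        if len(self.l1) >= self.l1_lines:
            victim = self.l1.pop()           # arbitrary victim stands in for LRU
            if not self.inclusive:
                self._fill(self.l2, self.l2_lines, victim)  # exclusive: L2 catches evictions
        self.l1.add(line)

    @staticmethod
    def _fill(cache, capacity, line):
        if len(cache) >= capacity:
            cache.pop()
        cache.add(line)

# Exclusive: effective capacity is L1 + L2 (AMD's pre-Bulldozer approach).
# Inclusive: L2 mirrors L1, so coherency snoops only need to check L2.
cache = TwoLevelCache(l1_lines=2, l2_lines=4, inclusive=False)
for addr in (1, 2, 3, 1):
    print(addr, cache.access(addr))
```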

    The rest of the implications of this slide are straightforward. AMD has designed a chip that’s clearly meant to deploy in an MCM — the reference to high-speed interconnects between units appears to refer to on-package modules, rather than between-socket interconnects in a 2P system. The 512K of L2 and 8MB L3 per group of four cores also makes sense as an organizational principle.

    None of these details, it must be noted, tell us anything about how Zen will perform, or how Keller and the rest of the AMD team chose their power and performance targets. There are design elements here that could echo Phenom II, but without more information we can’t conclude that yet.

    May 6 is going to be very, very interesting. As always, take these posts with a grain of salt — nothing is official until it’s announced.

    More...

  4. #124
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    800

    AMD’s upcoming Fiji GPU will feature new memory interface


    Over the past few months, there’s been an ongoing question over whether AMD’s upcoming Fiji GPU would use High Bandwidth Memory. Most reports have pointed towards yes, and that’s been our assumption, but official confirmation has been lacking. Now, for the first time, we’ve got unofficial-official confirmation, courtesy of Hot Chips 27.

    Hot Chips is an annual tech symposium sponsored by IEEE and ACM. It tends to feature new and cutting-edge designs, with a mix of talks that focus on products that have already shipped as well as some that discuss upcoming hardware. This year, we’ve got talks scheduled on Xeon-D, Cherry Trail, a new MIPS V CPU, an open-source GPGPU project (MIAOW), AMD’s Carrizo, a new “low-cost” processor from Oracle (Sonoma), and yes: “Fiji: The World’s First Graphics Processor with 2.5D High Bandwidth Memory.”

    Now that it’s official, what can we expect?


    One persistent rumor is that Fiji will launch with 4GB of main memory, a 4096-bit memory bus, and a maximum throughput of roughly 500GB/s. That’s substantially more bandwidth than the old R9 290X — a gain of as much as 60%. The bandwidth gains of HBM are well-known and we’ve discussed them at length: Figures as high as 1TB/s of memory bandwidth on second-generation HBM devices have been tossed around, without exaggeration.
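    The 60% figure checks out with napkin math. The R9 290X’s bus width and effective memory clock are published specs; the HBM per-pin rate below is the commonly rumored first-generation figure, not a confirmed one:

```python
# Peak bandwidth = bus width (bits) * per-pin data rate / 8 bits per byte.

def bandwidth_gb_s(bus_width_bits, rate_gbps):
    return bus_width_bits * rate_gbps / 8

r9_290x = bandwidth_gb_s(512, 5.0)    # 512-bit GDDR5 at 5 Gbps -> 320 GB/s
fiji    = bandwidth_gb_s(4096, 1.0)   # 4096-bit HBM at ~1 Gbps -> 512 GB/s

print(fiji)                           # ~512 GB/s, the "roughly 500GB/s" above
print(fiji / r9_290x - 1)             # ~0.6, i.e. the 60% gain cited
```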



    Bandwidth, however, is just one characteristic of memory performance. Latency is equally important, but data on HBM latency compared with GDDR5 is much harder to come by. The implication, if I’ve read the various slide decks and data sheets correctly, is that HBM latency should be modestly better than GDDR5’s — but possibly not by much. Certainly it won’t improve by anything like the bandwidth jumps we’re going to see.

    This makes historical sense. As the slide below illustrates, we’ve had a much easier time increasing memory density than decreasing latency.



    This chart also explains why CPUs have long relied on sophisticated cache structures to improve performance.

    HBM vastly increases system bandwidth, and it should dramatically reduce power consumption. There will be latency improvements courtesy of the move to through-silicon vias (TSVs), but the fundamental timings shouldn’t change much. GPU workloads, however, aren’t very latency sensitive — and throwing that much bandwidth at a GPU should yield its own set of dividends.

    Fiji is rumored to be dropping within a month or two, so we’ll see what AMD has cooked up with its next-generation memory architecture in the near future. The bandwidth improvements and dramatically reduced power consumption should both be good for any card.

    More...

  5. #125
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    800

    Microsoft sheds light on Windows 10 revenue, future OS pricing plans


    When Microsoft announced Windows 10, it said devices running Windows 7 and Windows 8/8.1 would receive a free upgrade for one year after the OS shipped. Devices upgraded in this fashion wouldn’t just get a one-time update code — Microsoft committed to keeping any upgraded device current “for the supported lifetime of the device.” Exactly what those words meant has never been clear. But new statements out of Redmond may have shed some light on that topic.
    Last week Microsoft announced that it would no longer recognize revenue from Windows 10 consumer licenses at the time those devices are purchased, as Computerworld reports. Instead, it will defer some of the revenue over several quarters, depending on the estimated supported lifetime of the device.



    Deferred revenue is nothing new to Microsoft, as the company already recognizes revenue from its enterprise Software Assurance licenses in this fashion. Office 365 revenue is similarly recognized over all four quarters, rather than when a subscriber signs up. The big question consumers have been asking is whether Windows 10’s “free upgrade” is going to contain some sort of gotcha clause that ropes people into paying a lump sum later, or being marooned on an unsupported OS.
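    As a rough illustration of how such deferral works: straight-line recognition is the simplest model consistent with Microsoft’s description, and the price, deferred share, and device lifetime below are invented numbers, not anything Microsoft has disclosed:

```python
# Straight-line deferral: recognize part of a license sale up front and
# spread the remainder evenly across the device's supported lifetime.
# All inputs are hypothetical, chosen only to show the mechanics.

def revenue_schedule(price, deferred_share, lifetime_quarters):
    upfront = price * (1 - deferred_share)
    per_quarter = price * deferred_share / lifetime_quarters
    return [upfront + per_quarter] + [per_quarter] * (lifetime_quarters - 1)

# A $119 license with 60% deferred over an assumed eight-quarter lifetime:
schedule = revenue_schedule(119.0, 0.60, 8)
print([round(q, 2) for q in schedule])   # roughly $56.5 now, then ~$8.9 per quarter
print(round(sum(schedule), 2))           # still totals $119: nothing lost, only shifted
```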

    Windows 10’s free upgrade: An unlikely stick


    Here’s the good news: Microsoft is incredibly unlikely to try to turn Windows 10’s free upgrade into a perpetual stick. For one thing, any attempt to stick consumers with a gotcha price at the end of the first 12 months would result in the mother of all class-action suits, and Microsoft is savvy enough to know this. Handing users a free product, only to stick them with unexpected continued-use costs 12 months later, after businesses and consumers had already transitioned, would run afoul of consumer protection laws in both the US and the EU.

    With that said, however, there’s a definite question regarding exactly how long the “expected lifetime of the device” actually is. Microsoft has historically provided support for operating systems long past what it considered their prime — Windows XP support lasted 13 years, while Windows 7 Extended Support will run through 2020.

    Microsoft has fought for years to pull users off of old versions of Windows, and the “supported for the lifetime of the device” language is likely designed to allow the company to move to a different support model. That doesn’t mean Microsoft intends to charge outright for future versions of the operating system, however. More likely, Microsoft wants users to treat Windows upgrades the same way that Android, iOS, and browser updates are typically treated, with the majority of users jumping for new versions as soon as they’re available. Businesses or individuals that choose not to do this may have the option of purchasing extended phone or technical support, in much the same way that companies can now.

    One thing I’m not concerned about is whether MS will continue to provide security updates. Regardless of how the company plans OS updates, Microsoft has offered security products even to pirates running illegal copies of its operating system. The chances that Redmond would roll back that critical feature are slim.



    Microsoft’s previous advertising changes haven’t always been well-received.

    Instead, it’s far more likely that Microsoft will make a play to embed post-purchase revenue streams in the Windows ecosystem. From Windows Store applications to Bing integration and in-OS advertising, Microsoft is moving away from deriving its income from a single point of purchase and toward an ecosystem where the initial OS revenue is just the beginning of monetization.

    What this will mean for consumers is unclear. In theory, it could be as simple as ad-supported experiences, similar to what Microsoft has integrated with the Xbox One. Microsoft, however, doesn’t have the best track record when it comes to subtly integrating ad content. The company’s shift towards alternative revenue sources will need to be handled with a very light touch.

    More...

  6. #126
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    800

    The PS4 is being adopted faster than the PS2 in the UK — but does that matter?




    Huzzah, old chap! A report has surfaced showing that Sony has sold over two million PS4s in the UK. Moving that many units in less than 18 months in a nation of 64 million people is impressive. And considering that the UK was one of the few European bastions for the Xbox 360 last generation, this is a non-trivial indicator of which way the wind is blowing.

    In an interview with MCV, Sony’s Fergal Gara seemed ecstatic about the level of success. He even went so far as to boast that the PS4 is “installing faster even than the PlayStation 2.” Even so, Nintendo’s Wii likely remains king of rapid adoption in the United Kingdom.


    The European gaming market is distinct from the US market in a few major ways — the most important of which is its history with consoles. For example, Sega played a bigger role in Europe than it did in the US, so the Dreamcast implosion had a bigger impact.
    After Sega left the hardware market, the UK ended up mostly backing the GameCube and Xbox. So while the PS2 was an unqualified success in the UK (and Europe in general), its adoption rate probably would have been even faster if the ghost of Sonic the Hedgehog hadn’t been looming over the sixth-generation consoles.

    While the PS4 is — without question — selling well in the UK, the comparison with PS2 adoption rates holds relatively little value. It says more about the vastly different market realities than it does about the PS4’s status as a juggernaut. Just like the Wii’s incredible success and eventual fizzling, it’s difficult to draw conclusions about the long-term health of a platform by comparing it with past releases.

    I believe the only comparison that matters at this point is how well the PS4 is selling compared with the Xbox One. Microsoft’s console launched strong in the UK, and the 360 was dominant there during the previous generation, in spite of the PS3 performing better in continental Europe. Unfortunately, we don’t have exact apples-to-apples numbers to compare here. However, VGChartz estimates that only 3.14 million Xbox Ones have been sold throughout all of Europe. The PS4? 8.25 million. Those definitely aren’t firm numbers, but it’s safe to assume that the Xbox One is not keeping pace with the PS4 in jolly old England (and its sister countries).


    More...

  7. #127
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    800

    AMD’s comprehensive roadmap update: 2016 in mobile, desktop, and servers


    One of the key components of AMD’s Analyst Day has been a comprehensive set of updates for its various product roadmaps. We’ve rounded up that information and rolled it into a single post. We’ve already covered AMD’s Zen, K12, and cancellation of Project Skybridge, so we won’t be discussing those projects directly.

    AMD’s desktop and mobile roadmaps


    Under Rory Read, AMD’s desktop CPU roadmap languished. The only meaningful improvement to high-end CPUs under his leadership was the FX-9590 — a 5GHz Piledriver-class core with a 220W TDP. In 2016, that’s finally going to change, thanks to the launch of AMD’s Zen. AMD hasn’t given a date for Zen (scuttlebutt has pointed to the second half of the year, but AMD hasn’t confirmed or denied that estimate).



    We’ve expected that Zen would come to desktops, but CEO Lisa Su confirmed that Zen would actually ship on the desktop first. The CEO noted that this market has proven to be important to AMD’s overall market position and design wins, and AMD is committed to attacking this space.

    It’s assumed that Zen will also power the 7th generation of desktop and mobile APUs shown here, though AMD only explicitly stated that Zen was coming to the desktop. One major change coming in 2016 is that the APU and CPU will occupy a common desktop socket, AM4. In the future, you won’t have to choose between two different motherboard sockets when you pick either an APU or a CPU from AMD — you can move from one to the other.

    If AMD can deliver the 40% IPC uplift it’s claiming, it should be far more capable of challenging Intel in every segment of the market. Even if Zen isn’t capable of going head-to-head with Core i7 (and we aren’t saying it can’t), a 40% IPC jump would still give AMD a much more competitive position against the Core i5 and Core i3. The days of needing two or more AMD cores to match a single Intel core with Hyper-Threading enabled could finally be drawing to a close.
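    Since single-threaded performance is roughly IPC times clock speed, the claimed uplift is easy to put in perspective. The numbers below are illustrative placeholders, not measured or announced figures:

```python
# Performance ~ IPC x clock. A 40% IPC gain at the same clock is a 1.4x
# jump in per-core throughput; real results will vary by workload.

def relative_perf(ipc, clock_ghz):
    return ipc * clock_ghz

excavator = relative_perf(1.00, 4.0)   # normalized baseline core
zen       = relative_perf(1.40, 4.0)   # claimed +40% IPC, same assumed clock

print(zen / excavator)                 # 1.4x per-core throughput
```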



    One point AMD made indirectly is that its FX chips will be CPU-only, while the APUs will continue to include a graphics core. With a common platform, customers should have more freedom to shift to solutions that meet their needs. AMD did confirm, in its subsequent Q&A session, that the era of the “cat” cores is over. The Jaguar / Puma family of chips doesn’t make the jump to FinFET.



    AMD didn’t say much about graphics directly. Fiji is due to ship out in the second quarter, but the company didn’t reveal anything about the GPU core inside the 7th Generation APUs that are set to debut next year. We know that Carrizo’s GPU is based on Tonga and we’ve heard rumors of a Greenland GPU core that would finally overhaul the GCN architecture. We do know that AMD expects to adopt FinFETs for its graphics solutions next year, which means the company will move to either 16nm or 14nm at TSMC or Samsung respectively. Second-generation HBM is also set to debut, which could deliver up to 8192-bit memory interfaces and a further bandwidth increase.

    Also, Lisa Su noted that while AMD had started multiple 20nm designs, those projects aren’t going to come to market, due to minimal profitability on that silicon. Back in 2012, we published slides from Nvidia that collectively illustrated that 20nm wasn’t a useful node for high-performance silicon. While we saw some mobile uptake of that node, it’s worth noting that we were absolutely right — if you’re in the high performance computing business, TSMC’s 20nm was, in fact, “essentially worthless.”

    AMD’s server roadmap


    AMD’s first ARM-based server chip, the A1100 Opteron, will now debut in the second half of 2015, a year later than its original 2014 target. This delay, according to AMD, was driven by the market — ARM microservers simply didn’t explode the way AMD initially thought they would, and it decided to delay its own efforts as a result.

    The K12 is expected to debut next year as a follow-up to the Cortex-A57 based Seattle, with an emphasis on high performance at the upper end of the ARM computing market. Exactly how it will line up against future Zen-based Opterons, and the degree of overlap we may or may not see between them, is still unclear.



    The most interesting statements in AMD’s server presentation are the “Disruptive Memory Bandwidth” and “Transformational Memory Architecture” claims. We’ve seen rumors before that AMD might integrate HBM into APUs. If the company were to do this, it might make the most sense to integrate it into servers first. While HBM has been discussed as a game-changer for integrated graphics — and it truly could change the rules of the game in that segment — it’s important to stair-step new technologies into markets that can afford the additional cost.

    Offering HBM on a server chip would give AMD access to an on-package memory pool, nearly as close as an on-die cache, with vastly improved bandwidth compared with traditional DRAM. AMD likely can’t afford to take Intel’s route of building a 128MB L4 cache onto the package, but an HBM memory segment (backed by a conventional DDR4 main memory) could be a potent alternative.
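    A crude way to see the appeal is to model the HBM segment as a large cache in front of DDR4 and compute the average access cost. Every number below is an invented assumption for illustration, not AMD data:

```python
# Two-tier memory: requests that hit the HBM segment are served fast;
# the rest fall through to DDR4. Hit rate and latencies are hypothetical.

def avg_access_ns(hbm_hit_rate, hbm_ns, ddr4_ns):
    return hbm_hit_rate * hbm_ns + (1 - hbm_hit_rate) * ddr4_ns

print(avg_access_ns(0.90, 60.0, 90.0))   # ~63 ns average vs 90 ns for DDR4 alone
# The bigger win is bandwidth: 90% of traffic now rides the wide HBM bus.
```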

    Putting it all together:


    Not much of what AMD revealed today is going to change forecasts for 2015. Carrizo is expected to offer better battery life than older Kaveri systems, and the R9 3xx updates should put AMD on much better footing against Maxwell. But the bottom line is that for many of AMD’s businesses, 2015 is a bit of a pause. Even in situations where we see new product launches, like the upcoming Seattle ARM core, these are the vanguard launches of product segments where further products, like K12, are expected to make a larger difference.

    It’s clear that 2016 will be the make-or-break year for AMD. The entire future of the company is riding on two premises: One, that it can build a business around K12 and that investing in a custom ARM architecture will pay off over and above the price of continuing to license a standard ARM core. Two, that its upcoming Zen CPU will offer vastly improved performance per watt compared with the Bulldozer-class chips, scale across the entire industry from mobile to server, and offer a compelling alternative to Intel’s own chips in each of these segments.

    It’s not hard to believe that Zen will beat out Bulldozer in terms of instructions-per-clock: AMD could’ve beaten Bulldozer’s IPC by shrinking Phenom II to 14nm and adding a few BD-derived features to the core. Hopefully AMD’s confidence that it can offer a top-to-bottom solution against Intel is well-deserved.

    More...

  8. #128
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    800

    Oculus Rift finally gets consumer version and actual release date


    After several years of speculation — really since the thing first hit Kickstarter in 2012 — Oculus VR has finally announced a consumer version of its Oculus Rift PC-based virtual reality headset. The first units will begin shipping to buyers in the first quarter of 2016, with preorders starting “later this year.” By that point, the Rift will hopefully fulfill the promise that several iterations of developer kits have hinted at: an actual ecosystem, at least one killer app of some kind, and seamlessly integrated hardware and software.

    The company said the new version will “build on the presence, immersion, and comfort of the Crescent Bay prototype,” with an improved tracking system that supports seated and standing experiences. If the accompanying vendor-supplied photos are anything to go by, it will also look considerably cooler than before, and perhaps be more comfortable to wear and use as well.
    Crescent Bay brought a lot to the table last time, including a lower-latency display, 360-degree head tracking with more LED markers, integrated headphones with RealSpace 3D audio support, and a lower hardware weight. As for this new version, the company says to look for announcements “in the weeks ahead” — which we presume has something to do with E3 — covering specifics on the hardware, software, input, and “many of our unannounced made-for-VR games and experiences.” That last item certainly has our interest piqued. And if you’re a developer, you probably know this already, but there’s info on how to get started with the Oculus Rift dev kit over in the company’s Development Center.



    Since the first Oculus Rift dev kit arrived in 2013, it has impressed many people, and the company managed to ensnare gaming veteran John Carmack, $16 million in additional funding, and eventually, a stunning $2 billion Facebook acquisition that rocked Silicon Valley. While the latter sparked a ton of speculation about alternate uses for the Oculus Rift, gaming still seems to be its strongest selling point.

    Back in March, Sony demonstrated its own Project Morpheus headset. Arguably the coolest-looking of the bunch, the latest prototype sports a 5.7-inch, 1080p OLED screen, a 100-degree field of view, and, like the Oculus, a 360-degree view of the world. HTC and Valve also revealed the SteamVR-powered Re Vive, which contains two 1200×1080 displays (one for each eye) running at a fast 90Hz refresh rate. Then there’s the Samsung Gear VR, which is tied to a Galaxy Note 4 phone that you use as a display inside the headset.

    Last week, Microsoft demonstrated a new HoloLens prototype at its Build conference. Unlike the Oculus Rift and others of its ilk, which create full-blown 3D areas you can explore and move around in, the HoloLens focuses on augmenting reality, generating holographic overlays over existing structures and objects in the real world. It’s an entirely different experience, and one that’s less prone to motion sickness in some people than the Oculus Rift is.

    Clearly, the world of virtual reality is picking up steam, but the question still remains whether consumers want it. Thanks to Oculus VR’s announcement, we may finally find out in less than a year.

    More...

  9. #129
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    800

    AMD analyst day sketches big picture, leaves critical details unanswered


    Now that AMD’s first analyst day in three years has wrapped and we’ve covered the major highlights, it’s time to turn to the elephant in the room: Can AMD break into new markets and compete more effectively in established ones, given that it’s often at a severe financial and technological disadvantage compared with its rivals? AMD claims that it can, but some of the evidence it offered yesterday was less conclusive than we might have liked.

    The critical, collapsed PC business


    First, let’s talk about AMD’s PC and mobile businesses. Mark Papermaster and Lisa Su spoke at length on how AMD has chosen to invest in these areas because they represent a core opportunity for products and services, and remain crucial to AMD’s efforts in new markets. Su also talked about investing in gaming, in high-performance compute, and in pushing the envelope for both x86 and ARM design. Many of these markets were highlighted as near-term growth opportunities, as shown below:



    The problem with this narrative is that it ignores the near-collapse of AMD’s business in many of these market segments. We assembled the following chart to illustrate just how hard the company has been hit. AMD changed how it reports revenue after the Xbox One and PS4 began shipping, so these results only show revenue for the computing solutions and graphics divisions from 2011 to 2014.



    AMD has lost more than half its sales in CPUs and GPUs since 2011. Results like this partially explain why ex-CEO Rory Read and now Lisa Su have focused on creating new markets for AMD hardware, including its console design wins and a few more as-yet-unnamed semi-custom designs that will ship in the back half of 2016 and 2017 respectively.

    Since we’re talking about AMD’s core markets specifically, it’s important to chart the course of that business segment to date. One thing AMD made clear yesterday is that Zen is at the heart of its planned revival. There will be no future “cat” cores — Zen’s 14/16nm FinFET deployment will handle AMD’s entire x86 product stack. With its 40% improved IPC and a new generation of APUs arriving in 2016, the stage seems set for an AMD resurgence in desktops and mobile — or, at least, it did until AMD’s interim Chief Financial Officer, Devinder Kumar, dumped cold water on such expectations:



    Note the PC market revenue projections

    Everything AMD said yesterday about its return to profitability and future revenue drivers has to be evaluated through the lens of this slide. That “flat to down” long-term growth model for the traditional PC business puts the kibosh on any argument that AMD expects significant revenue gains in its core businesses. Kumar and Su both made it clear that the company’s 2015 goal is to stabilize its losses while divesting itself of some unprofitable low-end SKUs and markets. The GPU segment offers only a slightly better outlook: an operating margin in the mid-single digits means AMD expects to continue making just 4-6% profit on its GPU, APU, and PC sales. While it’s true that these are projections meant for financial types, AMD simply isn’t forecasting that traditional markets will play a significant role in its return to profitability.

    More...

  10. #130
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    800

    Microsoft’s new rendering research could bring better visuals to Xbox One, mobile devices


    During Microsoft’s initial Xbox One reveal and subsequent E3 demonstrations, it mentioned a capability that we haven’t heard much about since. According to the company, the Xbox One could theoretically be paired with offsite cloud rendering platforms at some point in the future to deliver a game experience superior to anything the console could handle on its own. Since then, Redmond has been silent on what form the technology might take, or whether we’d ever see a version of it in the wild — until now. A new paper, published in collaboration with Duke University and the University of Washington, details a joint rendering system, dubbed Kahawai, that pairs a client and server GPU together for simultaneous rendering.

    Kahawai: Beyond streaming


    The Kahawai rendering platform was actually designed for mobile GPUs, but the principles behind the approach would apply to any joint rendering system between a client and a remote platform. The streaming platforms currently in use (Shield, the now-defunct OnLive, Sony’s PlayStation Now) render a game remotely and send that output via broadband to the target device. The problem with this approach is that it requires a fast Internet connection for peak quality, and it only gets worse in the long term if bandwidth upgrades don’t keep pace with resolution improvements. 4K gaming on consoles may be a pipe dream today, but it could happen in 5-6 years — though not via streaming, unless Internet connections accelerate dramatically. The other problem with streaming is that offline play is, by necessity, unavailable.
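    To put rough numbers on that bandwidth problem (the compression ratio below is a ballpark assumption for real-time video codecs, not a figure from the paper):

```python
# Rough estimate of the stream a 4K60 game feed would require. The
# compression ratio is an assumed ballpark, not a measured value.

width, height, fps = 3840, 2160, 60
bytes_per_pixel = 3                          # 24-bit RGB, uncompressed

raw_gbps = width * height * bytes_per_pixel * fps * 8 / 1e9
compressed_mbps = raw_gbps * 1000 / 100      # assume ~100:1 real-time compression

print(raw_gbps)                              # ~11.9 Gbps uncompressed
print(compressed_mbps)                       # ~120 Mbps, still a fat pipe
```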



    The Kahawai system is designed to address both of these problems, and it can tackle them in two different ways. In the first version, instead of relying entirely on streaming, it tasks the mobile GPU with rendering a lower-detail version of the game engine’s output while the server concurrently renders a high-detail version of the same frame. The server computes the difference between the two outputs and sends only the delta frames — the data needed to turn the client’s low-detail frame into the high-detail one.

    In the second type of rendering, the local device (a mobile GPU in this case) renders high-quality frames, but relatively few frames per second. The server renders the missing frames and then sends them along for integration into the game engine. If this seems confusing, here’s an analogy:



    The Zelder Scrolls lives!

    I’ve borrowed this image from the open-world Zelda game Nintendo is working on, because it incorporates multiple areas of varying detail. If this scene were rendered using the delta frame rendering method, the Wii U’s local GPU would be working on the areas of relatively low detail seen in the distance, while the server would be rendering the high-detail grass we see in the foreground. In the second rendering method, the Wii U might be rendering frames 1-4, while the server is rendering frames 5-20. Because most of the graphics workload remains on the server, the mobile GPU can run a much more detailed game than might otherwise be possible.
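    Here is a minimal sketch of the delta-frame idea, assuming raw RGB frames and skipping the video-codec stage the actual Kahawai pipeline uses to compress the deltas; the function names are ours, not the paper’s:

```python
import numpy as np

# Client and server render the same frame from the same inputs; only the
# detail level differs. The server ships the per-pixel difference, which
# the client adds to its own cheap render to recover the high-detail frame.

def make_delta(server_hi, client_lo):
    """Server side: what must be added to the client's frame."""
    return server_hi.astype(np.int16) - client_lo.astype(np.int16)

def apply_delta(client_lo, delta):
    """Client side: reconstruct the high-detail frame locally."""
    return np.clip(client_lo.astype(np.int16) + delta, 0, 255).astype(np.uint8)

# Stand-in frames: the "high-detail" render is the low-detail one plus noise.
client_lo = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
noise     = np.random.randint(-8, 8, client_lo.shape, dtype=np.int16)
server_hi = np.clip(client_lo.astype(np.int16) + noise, 0, 255).astype(np.uint8)

delta = make_delta(server_hi, client_lo)    # sent server -> client
frame = apply_delta(client_lo, delta)       # what the player actually sees
assert np.array_equal(frame, server_hi)
```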

    According to Microsoft, the total network bandwidth requirements of these methods are about one-sixth what conventional streaming requires, though low latency remains critical.

    So how does it look and play?


    Kahawai doesn’t require an online connection to play the game (at least, not intrinsically), but visual quality drops significantly if one isn’t present. The screenshots below illustrate the difference between low and high quality.



    One of the requirements for the system is that there has to be a way for the game engine to expose “Low” vs. “High” quality settings — if the right options aren’t available, the system can’t work properly. The research team notes that they were able to modify the idTech 4 engine (used in Doom 3), for which source code was available, and build hooks into the Street Fighter IV engine to test that title.

    User tests were quite positive. When the system was deployed over low-latency, high-bandwidth connections, users reported being almost as satisfied with the game experience as when playing on a conventional “thick” client. There are some caveats, of course, given that high-bandwidth, low-latency connections can be erratic in the real world, but Kahawai looks as though it has promise.

    While this is strictly a research paper, this kind of system could work in the long term and be a better fit for consoles than simple game streaming. Not only does the hybrid system use less bandwidth, it would give both Sony and Microsoft a way to continue building differentiated systems that combine higher-powered client hardware with powerful server backends.

    More...

