
Game Tech News

This is a discussion on Game Tech News within the Electronics forums, part of the Non-Related Discussion category; It’s been over a year and a half since the Xbox One and PS4 first debuted here in the U.S. ...

      
   
  1. #111
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    801

    Xbox One vs. PS4: How the hardware specs compare (updated for 2015)

    It’s been over a year and a half since the Xbox One and PS4 first debuted here in the U.S. In that time, they’ve both earned their keep as the standard-bearers of the current console generation. Microsoft realized some months after release that it needed a $400 version, price-competitive with the PS4, that lacked the Kinect camera, and has since remedied what was once a bit of a lopsided, apples-and-oranges comparison.

    Since then, both the Xbox One and PS4 have sold pretty well, with the edge on Sony’s side — although January’s $50 price cut for the Kinect-less Xbox One is helping Microsoft catch up. That said, how do they directly compare with each other in 2015? If you’re thinking about buying one of these two consoles, or just want ammunition for bragging rights, here’s what you need to know.
    One note before we get started: Unlike all previous console generations, the PS4 and Xbox One are almost identical hardware-wise. With an x86 AMD APU at the heart of each, the Sony and Microsoft consoles are essentially PCs — and their hardware specs, and thus relative performance, can be compared in the same way you would compare two x86-based laptops or ARM-based Android tablets. Read on for our Xbox One-versus-PS4 hardware specs comparison.

    PS4 vs. Xbox One: CPUs compared




    For the PS4 and Xbox One, Microsoft and Sony both opted for a semi-custom AMD APU — a 28nm part fabricated by TSMC that features an eight-core Jaguar CPU, paired with a Radeon 7000-series GPU. (We’ll discuss the GPU in the next section.) The PS4 and Xbox One CPUs are virtually identical, except that the Xbox One is clocked at 1.75GHz, while the PS4 runs at 1.6GHz.

    The Jaguar CPU core itself isn’t particularly exciting. In PCs, Jaguar is used in AMD’s Kabini and Temash parts, which were aimed at laptops and tablets respectively. If you’re looking for a tangible comparison, CPUs based on the Jaguar core are roughly comparable to Intel’s Bay Trail Atom. With eight cores (as opposed to two or four in a regular Kabini-Temash setup), both the PS4 and Xbox One have quite a lot of CPU power on tap. The large core count allows both consoles to excel at multitasking — important for modern living room and media center use cases, and doubly so for the Xbox One, which runs two different operating systems side-by-side. Ultimately, though, despite the Xbox One having a slightly faster CPU, it makes little difference to either console’s relative games performance.



    PS4 vs. Xbox One: GPUs compared


    Again, by virtue of being an AMD APU, the Xbox One and PS4 GPUs are technologically very similar — with the simple difference that the PS4 GPU is larger. In PC terms, the Xbox One has a GPU that’s similar to the entry-level Bonaire GPU in the older Radeon HD 7790, while the PS4 is outfitted with the midrange Pitcairn that can be found in the HD 7870. In numerical terms, the Xbox One GPU has 12 compute units (768 shader processors), while the PS4 has 18 CUs (1152 shaders). The Xbox One is slightly ahead on GPU clock speed (853MHz vs. 800MHz for the PS4).

    In short, the PS4’s GPU is — on paper — 50% more powerful than the Xbox One’s. The Xbox One’s slightly higher GPU clock speed ameliorates some of the difference, but really, the PS4’s 50% higher CU count is a serious advantage for the Sony camp. Furthermore, Microsoft says that 10% of the Xbox One’s GPU is reserved for Kinect, so games on the PS4 have considerably more graphics power on tap. Beyond clock speeds and core counts, the two GPUs are identical: both are based on the Graphics Core Next (GCN) architecture, and thus support OpenGL 4.3, OpenCL 1.2, and Direct3D 11.2.
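    Those shader counts and clock speeds translate into theoretical compute throughput via the standard GCN formula (shaders × 2 ops per clock × clock speed). A quick back-of-the-envelope sketch, using only the specs quoted above:

```python
# Theoretical single-precision throughput for each console's GPU, from the
# standard GCN formula: shader count x 2 ops per clock (fused multiply-add)
# x clock speed in GHz. These are paper specs, not measured game performance.

def gcn_gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

ps4 = gcn_gflops(1152, 0.800)      # 18 CUs at 800MHz -> ~1843 GFLOPS
xbox_one = gcn_gflops(768, 0.853)  # 12 CUs at 853MHz -> ~1310 GFLOPS

print(f"PS4 advantage: {ps4 / xbox_one - 1:.0%}")  # ~41%
```

    So once the Xbox One’s higher clock is factored in, the on-paper gap is closer to 40% in raw FLOPS than the bare 50% CU-count difference suggests.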

    PS4 vs. Xbox One: RAM subsystem and bandwidth


    Once we leave the CPU and GPU, the hardware specs of the Xbox One and PS4 begin to diverge, with the RAM being the most notable difference. While both consoles are outfitted with 8GB of RAM, the PS4 opts for 5500MHz GDDR5 RAM, while the Xbox One uses the more PC-like 2133MHz DDR3 RAM. This leads to an absolutely massive bandwidth advantage in favor of the PS4 — the PS4’s CPU and GPU have 176GB/sec of bandwidth to system RAM, while the Xbox One has just 68.3GB/sec.
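    Those bandwidth figures follow directly from each console’s effective memory clock and its memory bus width. A sketch of the arithmetic; note that the 256-bit bus width is an assumption on my part (it’s the commonly reported figure for both consoles), not a number from the article:

```python
# Peak memory bandwidth = effective transfer rate x bus width in bytes.
# The 256-bit bus is an assumption here (widely reported for both consoles),
# and 5500MHz / 2133MHz are effective data rates, so no extra multiplier.

def peak_bandwidth_gb_s(effective_mhz, bus_bits):
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

ps4 = peak_bandwidth_gb_s(5500, 256)      # GDDR5 -> 176.0 GB/s
xbox_one = peak_bandwidth_gb_s(2133, 256) # DDR3  -> ~68.3 GB/s
```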




    More...

  2. #112
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    801

    Minecraft exploit that made it easy to crash servers gets patched


    It turns out that for the past two years, you could crash a Minecraft server pretty easily. A security researcher published the exploit Thursday, saying he first discovered it in version 1.6.2 back in July 2013. He claims Mojang ignored him and did nothing to fix the problem, despite his repeated attempts at following standard protocol and contacting the company in private.

    “This vulnerability exists on almost all previous and current minecraft versions as of 1.8.3; the packets used as attack vectors are the 0x08: Block Placement Packet and 0x10: Creative Inventory Action,” Ammar Askar wrote. The exploit takes advantage of the way a Minecraft server decompresses and parses data, and causes it to generate “several million Java objects including ArrayLists,” running out of memory and pegging CPU load in the process.

    “The fix for this vulnerability isn’t exactly that hard, [as] the client should never really send a data structure as complex as NBT of arbitrary size and if it must, some form of recursion and size limits should be implemented. These were the fixes that I recommended to Mojang 2 years ago.” Askar posted a proof of concept of the exploit to GitHub that he says has been tested with Python 2.7. Askar has since updated his blog post twice after finally making contact with Mojang. What he says essentially confirms that the company either didn’t test a claimed fix against his proof of concept, or lied about having one in the first place.
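    The kind of limits Askar recommends can be sketched in a few lines. This is a purely illustrative Python guard — the names, thresholds, and structure are hypothetical, not Mojang’s code or Askar’s proof of concept:

```python
# Illustrative sketch of the recommended fix: cap both nesting depth and
# total element count while walking a client-supplied nested structure, so
# a malicious packet fails fast instead of spawning millions of objects.
# MAX_DEPTH, MAX_ELEMENTS, and all names here are hypothetical.

MAX_DEPTH = 16
MAX_ELEMENTS = 1_000_000

class MalformedPacket(Exception):
    """Raised when a decoded payload exceeds sane structural limits."""

def count_elements(node, depth=0, seen=0):
    if depth > MAX_DEPTH:
        raise MalformedPacket("nesting too deep")
    seen += 1
    if seen > MAX_ELEMENTS:
        raise MalformedPacket("too many elements")
    if isinstance(node, dict):
        for child in node.values():
            seen = count_elements(child, depth + 1, seen)
    elif isinstance(node, list):
        for child in node:
            seen = count_elements(child, depth + 1, seen)
    return seen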
    Today, it*looks like Mojang has responded (at least indirectly) to the post with a patch. The company announced today that it is releasing version 1.8.4: “This release fixes a few reported security issues, in addition to some other minor bug fixes & performance tweaks.”

    The release notes make no direct mention of the exploit Askar wrote about, and comments are closed on the post. But notably, two of the fixes listed are Bug MC-79079, “Malicious clients can force a server to freeze,” and Bug MC-79612, “Malicious clients can force a server to go out memory [sic]:”



    At the time of this writing, Askar has yet to update his blog post a third time to acknowledge the patch or comment on whether it fixes the exploit.

    Back in September, Microsoft announced it was buying Mojang for $2.5 billion, with company founder Notch moving on to something new. The game is available on all major platforms, including PC, Mac, PS3, PS4, Xbox 360, Xbox One, iOS, Android, Windows Phone, and Amazon Kindle Fire.


    More...

  3. #113
    Senior Member matfx's Avatar
    Join Date
    Sep 2013
    Location
    Malaysia
    Posts
    1,178
    Blog Entries
    114
    Follow matfx On Twitter

    EA shows off Star Wars Battlefront footage at fan convention



    Nearly ten years after the series went dormant with Star Wars: Battlefront 2, a new shooter set in the Star Wars universe is set to hit the PC, Xbox One, and PS4 on November 17.

    At the Star Wars Celebration in Anaheim, members of the DICE development team showed off a pre-alpha trailer for the Frostbite engine-powered game, which is simply being called Star Wars: Battlefront. While the footage didn't show off much direct gameplay, the whole thing was reportedly rendered in real-time on a PS4.

    The in-game footage displayed the team's use of photogrammetry to render models directly from pictures of the actual models used in the movies, rather than 3D figures created from whole cloth by digital artists. The development team said they took trips to shooting locations for the original Star Wars films and referenced materials from the libraries at Skywalker Ranch to add further authenticity to the source material. Skywalker Sound will be providing the audio for the game.

    As far as gameplay goes, players will be able to take part in story missions, either solo or with a co-op partner, and online multiplayer shootouts with up to 40 people, all from either a first- or third-person viewpoint. On-field power-ups will let players transform into well-known characters like Luke Skywalker, Darth Vader, and Boba Fett. Vehicles like speeder bikes, snowspeeders, X-wings, TIE Fighters, AT-ATs, and even the Millennium Falcon will be pilotable, but those vehicles unfortunately cannot be taken out into space—this is a strictly planet-based affair.

    The new Battlefront takes place during the time of the original film trilogy, across familiar locations like Hoth, Endor, and Tatooine, as well as the newly explorable planet Sullust (which is based on the Icelandic countryside). As a tie-in with the upcoming Force Awakens movie, players will also have access to a free bit of DLC detailing the "Battle of Jakku," which takes place decades before the events shown in the film's new trailer. Players that pre-order the game will get that DLC mission on December 1, while everyone else will get it December 8.

    The new Battlefront was originally announced at E3 2013 and was shown in slightly more detail at last year's show, but EA and developer DICE have been relatively silent regarding the details of the game until now. Back in December, DICE said it was "not moving onto future projects," including Battlefront, until it fixed nagging server issues with Battlefield 4.

    Battlefront is the first game in a previously announced ten-year deal giving Electronic Arts the exclusive rights to all Star Wars-based games.

    Source
    Follow my official trading theregulartrader blog

  4. #114
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    801

    The 20 best free PC games



    Free PC games used to be the realm of quirky flash games or weird indie projects, but the free-to-play phenomenon has really taken off in the last couple of years. Now, the $60 AAA games that once ruled the roost are getting some real competition from games that offer hundreds of hours of gameplay for free.

    There are innumerable free-to-play games available for the PC, and with that comes both good and bad. The large selection means that there is something to fit just about any taste, but the signal-to-noise ratio is truly atrocious. Instead of trudging through dozens of clones and half-hearted cash grabs, let me separate the wheat from the chaff for you. Today, I’m highlighting twenty of the very best free games on the PC to help you find something you’ll really love. There’s a lot to cover, so follow along, and something here is bound to strike your fancy.



    Dota 2


    Based on the popular Warcraft III mod called Defense of the Ancients (DotA for short), Valve’s Dota 2*is a model free-to-play game. Without spending one red cent, you get access to the entire gameplay experience. Of course, Valve makes a tidy profit from selling cosmetic and ancillary items. The Bellevue company is well-versed in the realm of free-to-play games, so don’t be surprised if you find yourself buying new loot for this “free” game once you’re hooked.



    League of Legends


    Just like Dota 2, League of Legends is a MOBA (multiplayer online battle arena) derived from the same Warcraft III mod. However, the folks at Riot Games have a very different pricing model from Valve’s. You can play a select number of characters for free, but access to additional characters is going to cost you. Regardless of the value proposition compared to other MOBAs, this game remains insanely popular across the globe.


    Heroes of the Storm


    As if there wasn’t enough competition in the MOBA space, Blizzard is getting in on the action as well. Heroes of the Storm takes elements from all of Blizzard’s various franchises, and melds them all together in a single DotA-like. This particular iteration of the MOBA concept has been lauded as more approachable than others in the genre, but it’s still in beta testing. If you want to play it, you’ll need to apply for access. Of course, Blizzard is more than happy to take your money for heroes and skins regardless. Most purchases range between $4 and $20, but there are a few outliers here and there. Anything could change, though, so buy carefully. A nerf is always around the corner.


    Hearthstone: Heroes of Warcraft


    Based on the artwork and setting of Blizzard’s incredibly popular Warcraft franchise, Hearthstone is a phenomenon in and of itself. This turn-based collectable card game is hugely successful on PC and mobile, and the low barrier to entry is the reason why. All you need is a free Battle.net account, and you can join in on the fun. As it stands, there are two single player campaigns that cost $25 each, and you can spend anywhere from $3 to $70 at a shot on booster packs. But if you just buckle down and play the game on a regular basis, you’ll soon earn enough good cards that you won’t really need to buy boosters to stay competitive.



    AdVenture Capitalist


    If you’ve ever played a game like Cookie Clicker or Candy Box, you’ll be very familiar with the way this game works. When broken down into its component parts, you do little more than click buttons and watch numbers grow higher, but there’s something so viscerally satisfying about this style of game. AdVenture Capitalist, fundamentally, is a dopamine machine. You can download it for free on Steam, and you can spend anywhere from $2 to $100 at a shot for currency that will essentially speed up your progress. The problem is… the progress is all this game has to offer. What is even the point of clicking these damned buttons if you don’t enjoy the slow progress of it all? It’s hard to explain the appeal, but since it’s free, you can simply go see for yourself.

    More...

  5. #115
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    801

    AMD leaks Microsoft’s plans for upcoming Windows 10 launch



    One last tidbit from AMD’s latest conference call doesn’t have much to do with AMD itself, but it sheds light on Microsoft’s plans for Windows 10. During the call, CEO Lisa Su was asked whether she could clarify how Q2 results were expected to play out with regards to the semicustom and embedded business (Xbox One and PS4) versus the more mainstream APU and GPU business. Su responded by saying that she expected the semicustom business to be up “modestly,” and that AMD would move some inventory in Q2 and begin ramping Carrizo in greater volume. She then went on to say:

    “What we also are factoring in is, you know, with the Windows 10 launch at the end of July, we are watching sort of the impact of that on the back-to-school season, and expect that it might have a bit of a delay to the normal back-to-school season inventory build-up.” AMD could be wrong, of course, but it’s unlikely — the company almost certainly knows when Microsoft intends to launch the new operating system, and this isn’t the first time we’ve heard rumors of a summer release date for the OS.

    It’s possible that AMD intends to capitalize on the launch with a new push around the GPU architecture it’s currently expected to launch in approximately the same time frame. The company wouldn’t need to launch alongside the operating system, but hitting the streets a few weeks early would build buzz around DX12 and other performance enhancements baked into Windows 10 before the actual OS debuts.

    The big question on everyone’s mind in the PC industry is going to be whether Windows 10 lives up to the hype. So far, it seems as if it will. I’m not claiming people will be lining up around the block to grab it, but I think there’s a decent chance that Windows 7 users, who gave Windows 8 a pass, will investigate Windows 10 — especially given the long-term advantages of DirectX 12.

    It’s not clear yet how quickly games will transition to the new API, or how long they’ll support both DX12 and DX11. From past experience, we’d expect to see high-end AAA games popping out in DX12 in short order (the Star Wars Battlefront title that debuted last week will be DX12). Plenty of other games will transition more gradually, however, and it wouldn’t surprise us to see a few DX9 titles still knocking about — indie games and small developers have very different budgets compared with the big game studios.


    More...

  6. #116
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    801

    Rounding up GTA V on the PC: How do AMD, Intel, and Nvidia perform?

    18 months after it debuted on the Xbox 360 and PS3, Grand Theft Auto V has made it over to the PC side of the fence. Videos and previews before the launch teased a constant 60 FPS frame rate and enhanced visuals and capabilities that would leave last-gen consoles in the dust. A number of sites have published comprehensive overviews of GTA’s performance, including a focus on CPUs, GPUs, and the performance impact of various settings. We’ve broken down the big-picture findings, with additional links to specific coverage.

    CPU Scaling


    Let’s start with CPU scaling, since there are going to be questions there. Whether at 1080p or 1440p, GTA V is playable on quad cores and above from both AMD and Intel, but the Intel chips continue to have a definite advantage overall. At normal detail levels with FXAA (using a GTX Titan to avoid GPU bottlenecks), Techspot reports that the FX-9590, AMD’s highest-end, 220W chip, is the only CPU from Team Red to beat the Core i5-2500K — a midrange Intel CPU that is more than four years old.

    That doesn’t mean AMD CPUs don’t offer a playable frame rate. But most of AMD’s chips land between 55 and 72 FPS, while Intel locks down the 70+ FPS range. GamersNexus offers a more detailed look at some CPUs, again using a Titan X to eliminate any chance of a GPU bottleneck. What’s most notable about AMD’s CPU performance, even at 1080p, is the gap in minimum frame rate.




    AMD’s chips hit a minimum frame rate of 30 FPS, compared with 37 FPS for the Core i7-4790K. That gap is significant — Intel’s high-end cores are hitting a 23% higher frame rate. That said, these gaps can be reduced by lowering visual quality from Max to something a bit less strenuous — many of the Advanced graphics features and post-processing effects in GTA V incur a heavy performance hit.

    The bottom line is this: While CPU brand matters in GTA V, it’s not the major factor. Every chip, save for the Intel Pentium G3258, can run the game. Low-end AMD owners may have to put up with a significant performance hit, while most users with AMD Athlon 760K-class processors likely aren’t trying to run GTA V in the first place.

    GPU scaling


    GPU scaling is a more interesting animal. First, the game scales exceptionally well to a wide variety of video cards from both AMD and Nvidia. Nvidia cards from the 900 series generally have the upper hand at the various detail levels, but at 1080p “High,” for example, even the R9 285 returns a 0.01% frame rate (meaning that 99.99% of the frames were higher than this) of 34 FPS and an average frame rate of 69. Even the Nvidia GTX 750 Ti is capable of hitting 52 FPS in this mode.
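    For readers unfamiliar with the “0.01% frame rate” metric: it’s the frame rate that 99.99% of rendered frames exceed, and reviewers use it to expose stutter that an average hides. A minimal sketch of how such a figure is derived from per-frame samples (the function name and the synthetic data are mine, not any reviewer’s actual tooling):

```python
# Compute an "X% low" frame rate: the value that (100 - X)% of frame-rate
# samples exceed. 1% and 0.01% lows are derived this way to reveal stutter
# that a healthy average frame rate would otherwise conceal.

def percent_low(fps_samples, pct):
    ordered = sorted(fps_samples)
    # Index into the worst pct% of frames; clamp so tiny sample sets work.
    idx = max(int(len(ordered) * pct / 100) - 1, 0)
    return ordered[idx]

samples = [60] * 95 + [34, 35, 36, 37, 38]  # mostly smooth, a few dips
print(percent_low(samples, 1))  # worst 1% of 100 frames -> 34
```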




    The high-end GPUs from AMD and Nvidia can both compete at higher resolutions — at 1440p at max detail, Tweaktown reports that the R9 290X 8GB flavor hits 77 FPS, just barely behind Nvidia’s far more expensive Titan Black. The GTX 900 family continues to dominate overall, with a moderate-but-significant performance edge.

    If you want to run at 4K and max detail, however, you’re going to be disappointed. Even the mighty GTX Titan has a bad day at 4K on Ultra, with frame rates that can’t break 40 FPS. Interestingly enough, however, there’s a disparity between what we’ve seen reported at websites like GamersNexus, which has the GTX Titan X at 40 FPS, vs Tweaktown, which clocks it at 72 FPS. GamersNexus didn’t use the prerecorded benchmark, while Tweaktown did — this may have impacted the final overall results in the benchmark compared to in-title. If you prefer video, Eurogamer’s in-game results may also point to a discrepancy between how AMD and NV perform: Eurogamer reported much stronger minimum frame rates for AMD when using a high-end Core i7 compared to a midrange Core i3; other sites showed no such difference.

    Some features, like high-end textures, incur a relatively modest performance hit, while others, like those under the game’s Advanced Graphics Options, exact far steeper penalties. The game will try to keep you from setting visual options that your GPU won’t support, but you can override this from within the Graphics menu. GamersNexus has an exhaustive post on the topic of which game options hit performance the most, along with a number of comparison shots.

    Tying it all together


    The general consensus is that Grand Theft Auto V scales quite well. Virtually any modern GPU + CPU combo can run the game, though enthusiasts with low-end AMD CPUs may have to make a number of compromises. The game is optimized for certain Nvidia GameWorks features, like Percentage-Closer Soft Shadows (PCSS), but also supports AMD’s Contact Hardening Shadows (CHS).


    Nvidia’s Percentage-Closer Soft Shadows



    AMD’s Contact Hardening Shadows

    Some readers may be concerned about the GTX 970’s performance at high memory loads, but evidence for a problem is mixed. Techspot’s review shows the GTX 970 hanging even with the GTX 980 at 1080p with normal textures (88 FPS) and the R9 290X following at 80 FPS. At 4K with Very High Textures and FXAA, the R9 290X had advanced to third place, at 33 FPS, behind the GTX 980 with 36 FPS. The GTX 970, in contrast, had fallen back to 30 FPS. No one has reported unusual stuttering or other problems on the GTX 970, however, at least not yet.

    Based on the results we’ve seen to date, we’d say that Rockstar has delivered the definitive and best-looking version of the game for PCs, with a 60 FPS option available for almost any video card and CPU combination. The controls and key mappings are terrible and clearly designed for consoles. But as far as the frame rate and eye candy are concerned, GTA V delivers. 4K at max detail, however, remains well beyond the range of even the most powerful Nvidia GPU.

    More...

  7. #117
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    801

    AMD details new power efficiency improvements, update on 25×20


    Energy efficiency is (if you’ll pardon the pun) a hot topic. Foundries and semiconductor manufacturers now trumpet their power-saving initiatives with the same fervor they once reserved for clock speed and performance improvements. AMD is no exception to this trend, and the company has just published a new white paper that details the work it’s doing as part of its 25×20 initiative, a pledge, announced in 2014, to improve the energy efficiency of its mobile APUs 25-fold by 2020.

    More...

  8. #118
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    801

    General Motors, John Deere want to make tinkering, self-repair illegal


    The ability to modify a vehicle you’ve purchased is, in many ways, a fundamental part of America’s car culture — and, to some extent, embedded in our culture, period. From the Fast & Furious saga to Han Solo’s “She may not look like much, but she’s got it where it counts, kid. I’ve made a lot of special modifications,” we value the right to tinker. More practically, that right can be critically important when it comes to fixing heavy farm equipment. That’s why it’s significant that companies like John Deere and General Motors have joined forces to argue that no, you don’t actually own the equipment you purchase at all.



    Without those modifications, what are the chances that the Falcon could make .5 past light speed? I am a huge nerd.

    The tractor and farm equipment manufacturer doesn’t mince words. “In the absence of an express written license in conjunction with the purchase of the vehicle [to operate its software], the vehicle owner receives an implied license for the life of the vehicle to operate the vehicle, subject to any warranty limitations, disclaimers or other contractual limitations in the sales contract or documentation.”

    GM, meanwhile, alleges that “Proponents incorrectly conflate ownership of a vehicle with ownership of the underlying computer software in a vehicle.” The problem with these arguments is that while existing software laws confirm that individuals are licensing code rather than purchasing it when they buy a license from Adobe or Microsoft, the cases in question did not generally anticipate that the code would be used to artificially create extremely high barriers to repair. As tractors have gone high tech, John Deere has aggressively locked away both the critical information needed for adjusting aspects of the vehicle’s timing and performance and the information necessary to troubleshoot problems. Kyle Wiens wrote about the issue a few months back, noting how John Deere’s own lockouts and high-tech “solutions” have supposedly caused a spike in demand for older, simpler vehicles. Farmers, it seems, don’t like having to pay expensive technicians. As a result, the used tractor business is booming.

    Both companies go on to assert that the Copyright Office shouldn’t consider allowing tractor owners or car enthusiasts to make any sort of modifications to their vehicles, because doing so might enable piracy through the vehicle entertainment system. While this is theoretically possible, if improbable, in a modern car, it’s downright laughable in a tractor. A new tractor can cost upwards of $100,000 — is anyone seriously going to pirate One Direction CDs while ploughing a field?

    The other argument — that users will abuse these capabilities to engage in unsafe or dangerous activities — ignores the fact that Americans have enjoyed this level of tweaking and tuning for decades. True, a modern computer system might make it easier to modify certain characteristics, but it’s not as if the concept of tuning a car was invented alongside OBD-II.



    Arr! Raise the Jolly Roger! Ahead ploughing speed!

    John Deere makes a number of scare tactic allegations around the very idea of a modifiable tractor, including this gem: “Third-party software developers, pirates, and competing vehicle manufacturers will be encouraged to free-ride off the creativity and significant investment in research and development of innovative and leading vehicle manufacturers, suppliers, and authors of vehicle software. The beneficiaries of the proposed exemption will not be individual vehicle owners who allegedly want to repair, redesign or tinker with vehicle software, but rather third-party software developers or competing vehicle manufacturers who — rather than spending considerable resources to develop software from scratch — instead would be encouraged to circumvent TPMs in order to make unauthorized reproductions of, and derivative works based on, the creativity of others.”
    Just in case you’re confused, we’re still talking about tractors — not someone stealing the crown intellectual property of a literary genius.

    The long-term risk of locking-out repair


    I’ve never modded my car, and I haven’t been near a tractor in decades. But the reason this fight matters is directly rooted in the ability to repair anything. With multiple manufacturers pushing the concept of an Internet of Things at full throttle, how long before basic appliance repair becomes something only a licensed technician can perform? It’s not an altogether crazy concept. Device and appliance manufacturers have fought to build repair monopolies for decades in various industries, with generally limited success.

    Thanks to the DMCA and the rules of software licensing, those efforts could finally succeed in the 21st century. As the IoT advances, virtually everything can be locked off and limited to only those shops that can afford sophisticated diagnostic equipment — thereby limiting user choice and freedoms.

    More...

  9. #119
    member HiGame's Avatar
    Join Date
    Sep 2014
    Posts
    1,062
    Blog Entries
    801

    Analyst: Intel will adopt quantum wells, III-V semiconductors at 10nm node




    As the pace of Moore’s Law has slowed and shifted, every process node has become more difficult and complicated to achieve. The old days, when a simple die shrink automatically brought faster chips and lower power consumption, are gone. Today, companies perform a die shrink (which makes most aspects of the chip perform worse, not better), and then find supplementary technologies that can improve performance and bring yields and power consumption back into line. Analyst and RealWorldTech owner David Kanter has published a paper on where he thinks Intel is headed at the 10nm node, and predicts the tech giant will deploy a pair of new technologies there — quantum well FETs and III-V semiconductors.

    We’ve talked about III-V semiconductors before, so we’ll start there. Intel has been evaluating next-generation semiconductor materials for years; we first spoke with Mark Bohr about the company’s efforts back in 2012. The III-V semiconductors are called that because the materials are drawn from Groups III and V of the periodic table. Many of these materials have superior performance compared with silicon — either they use less power or they allow for drastically higher clock speeds. But they cost more and often require extremely sophisticated manufacturing techniques. Finding materials to replace both the p-channel and n-channel has been difficult, since some of the compounds used for one type of structure don’t work well for the other.
    Kanter predicts that Intel will use either InGaAs (indium gallium arsenide) or InSb (indium antimonide) for the n-type channel, and strained germanium or additional III-V materials for the p-type channel. The net effect of this adoption could cut operating voltages to as low as 0.5V. This is critical to further reducing idle and low-use power consumption, because power consumption rises roughly as the square or cube of voltage (depending on leakage characteristics and overall transistor type).
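    The voltage point matters more than it might look: dynamic CMOS power scales roughly as C·V²·f, so cutting voltage in half alone quarters dynamic power, before any frequency or leakage changes. A sketch with illustrative placeholder numbers (not real Intel chip figures):

```python
# Dynamic power in CMOS logic: P = C x V^2 x f. The capacitance and
# frequency below are illustrative placeholders, not real chip figures.

def dynamic_power_w(switched_cap_f, volts, freq_hz):
    return switched_cap_f * volts**2 * freq_hz

baseline = dynamic_power_w(1e-9, 1.0, 3e9)  # 1nF switched at 3GHz, 1.0V
low_volt = dynamic_power_w(1e-9, 0.5, 3e9)  # same chip dropped to 0.5V

print(low_volt / baseline)  # 0.25: half the voltage, a quarter of the power
```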

    The other major advantage that Kanter thinks Intel may deploy is the use of quantum well structures. Quantum wells trap electrons by surrounding them with an insulating structure that leaves a limited number of dimensions for the electrons to move in. This new fin structure and gate are shown in the image below, from an Intel IEDM paper in 2011.




    Combined, these new structures would allow Intel to substantially cut power consumption while simultaneously improving other characteristics of the transistor’s performance. It’s not clear if this would enable substantially faster chips. Intel has focused primarily on making its CPUs more efficient over the past five years, as opposed to making them faster. And while a Core i7-4790K is unquestionably quicker than a Core i7-2600K from 2011, the gap isn’t what it would’ve been ten years ago.

    It’s possible that these new structures would run at substantially higher clock speeds. But building chips smaller also means decreasing feature sizes, which increases the formation of hot spots on the die. The ability to run chips at 0.5V is great for the Internet of Things, but it’s not necessarily great if the chip needs 1.1V to hit its maximum clock.

    Uncertainty at 10nm and below


    Kanter addresses why he wrote this piece, noting he believes that “industry experts should make insightful, specific, verifiable predictions that have a definite time horizon.” It makes sense that Intel would go for some of these capabilities. The firm is known to be researching III-Vs, and quantum wells have apparently been worked on for a decade or more. Intel likes to talk up its manufacturing capabilities and advantages, and this type of innovation at the 10nm node could give it a huge leg up over the competition.

    Samsung and TSMC haven’t revealed much about their own plans, but it’s highly likely that both firms would stick with conventional FinFET designs at 10nm, just as Intel spent two design cycles using its own standard Tri-gate technology. If TSMC actually closes the gap with Intel at 10nm, III-Vs and QWFETs would give Intel an argument for claiming it’s not just the number of nanometers that makes the difference — it’s the process tech as well.

    If Intel adopts these technologies at 10nm, it will likely push EUV adoption farther out, to the 7nm node — and possibly require additional verification and validation that the tool sets are compatible with both the new semiconductor manufacturing equipment and the requirements of EUV itself. Such a move would push EUV introduction back into the 2018-2019 timeframe, assuming that 10nm shipments begin in 2016-2017, with 7nm ready two to three years later.

    Intel supposedly delayed installation of 10nm equipment until December of this year, but is planning to ramp production at its facilities in Israel.


    More...

  10. #120
    member HiGame's Avatar

    ARM details its upcoming Cortex-A72 microarchitecture


    Earlier this year, ARM announced its Cortex-A72 — a new microarchitecture from the CPU designer that builds on and refines the 64-bit Cortex-A57. Ordinarily it takes up to 24 months after ARM announces a new CPU design for cores to come to market. But Qualcomm has told us to expect Cortex-A72 cores by the end of the year. If true, that would make this one of the company’s fastest CPU ramps ever — so what can the new core do?
    If ARM hits its targets, quite a lot.

    New process, new product


    The Cortex-A72 is based on the Cortex-A57, but ARM has painstakingly refined its original implementation of that chip. The company is claiming that the A72 will draw 50% less power than the Cortex-A15 (a notoriously power-hungry processor) at 28nm and 75% less power at its target 16nmFF+ / 14nm process node. Compared to the Cortex-A57 at 28nm, ARM still expects the A72 to draw 20% less power.
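    ARM’s three power claims are each quoted against a different baseline, so it can help to put them on one scale. The normalization below (Cortex-A15 at 28nm = 1.00) is our own arithmetic applied to the article’s percentages, not a figure published by ARM:

    ```python
    # Normalize ARM's claimed power figures to a Cortex-A15 at 28nm = 1.00.
    # The percentages are ARM's claims as reported above; the single
    # normalized scale is our own illustrative arithmetic.
    a15_28nm = 1.00
    a72_28nm = a15_28nm * (1 - 0.50)   # A72 draws 50% less than A15 at 28nm
    a72_16ff = a15_28nm * (1 - 0.75)   # 75% less at the 16nmFF+/14nm target node
    a57_28nm = a72_28nm / (1 - 0.20)   # A72 draws 20% less than A57 at 28nm

    for name, p in [("Cortex-A15 @ 28nm", a15_28nm),
                    ("Cortex-A57 @ 28nm", a57_28nm),
                    ("Cortex-A72 @ 28nm", a72_28nm),
                    ("Cortex-A72 @ 16FF+", a72_16ff)]:
        print(f"{name}: {p:.3f}")
    ```

    On this scale the implied Cortex-A57 figure (~0.625) sits between the A15 and the A72, which is consistent with ARM’s framing of the A72 as a refinement of the A57.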



    ARM is supposedly aiming for the Cortex-A72 to be capable of sustained operation at its maximum frequency, a topic we touched on yesterday when covering the Snapdragon 810’s throttling problem. The CPU is targeting a clock-for-clock performance improvement of 1.16x to 1.5x over the Cortex-A57. Making this happen required revamping the branch predictor, cutting mispredictions by 50% and speculation-related power consumption by 25%. The chip can also bypass its branch predictor completely in circumstances where it is performing poorly, saving additional power in the process.
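    To see why halving mispredictions matters, consider a standard textbook CPI model: CPI = base CPI + (branch frequency × misprediction rate × penalty). The numbers below are illustrative assumptions, not ARM-published figures:

    ```python
    # Toy cycles-per-instruction model showing the cost of branch
    # mispredictions. All inputs are illustrative assumptions, not
    # ARM-published figures for the Cortex-A57 or A72.
    def cpi(base_cpi, branch_freq, mispredict_rate, penalty_cycles):
        """Effective CPI including branch misprediction stalls."""
        return base_cpi + branch_freq * mispredict_rate * penalty_cycles

    before = cpi(1.0, 0.20, 0.08, 15)   # hypothetical 8% mispredict rate
    after = cpi(1.0, 0.20, 0.04, 15)    # rate halved, per ARM's claim

    # Lower CPI means more instructions retired per clock.
    print(before, after, before / after)
    ```

    Even in this toy model, halving the misprediction rate yields a roughly 10% clock-for-clock speedup, one contributor to the 1.16x-1.5x target range.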
    The Cortex-A72 is still capable of decoding three instructions per clock cycle, but apparently adds some instruction fusion capability to increase efficiency. Each of these components has been power-optimized as well. AnandTech reports that ARM’s dispatch stage can break fused ops back into micro-ops for increased execution granularity, effectively turning a three-wide decoder into a five-wide machine in some cases.




    ARM is also amping up its game in SIMD execution units. Instruction latencies have been slashed, pipelines shortened, and cache bandwidths boosted. There are no huge changes in organization or capability, but the CPU core should see significant improvements thanks to these adjustments. ARM has even managed to shave off some die size — the Cortex-A72 is supposed to be about 10% smaller than the Cortex-A57, even on the same process.




    Ars Technica reports that according to ARM, the Cortex-A72 can even beat the Core M in certain circumstances. Such predictions must be taken with a grain of salt — they assume, for example, that the Core M will be thermally limited (we’ve seen that this can vary depending on OEM design). Tests like SPECint and SPECfp tend to be quite dependent on compiler optimizations, and while the multi-threaded comparison is fair as far as it goes, ARM is still assuming that the Cortex-A72 won’t be thermally limited. Given that all smartphones and tablets throttle at present, the company will need to prove the chip doesn’t throttle before such claims can be taken seriously.

    All the same, this new chip should be an impressive leap forward by the end of the year. Whether it’ll compete well against Apple’s A9 or Qualcomm’s next-generation CPU architecture is another question.


    More...
