HP rolls out Elite x3 phablet, to reach India in late 2016

In yet another fillip to the idea that the smartphone can one day become your laptop or personal computer, HP Inc on Thursday unveiled the Elite x3 – a 5.96-inch touchscreen device that runs Windows 10 Mobile and supports Continuum – a feature that allows the device to be attached seamlessly to a separate screen, mouse and keyboard.

To be available in Asia Pacific and Japan in September this year and then in India, the HP Elite x3 bridges phablet, laptop and desktop use cases in a single computing device while enabling users to run key productivity apps seamlessly across all of them.

“At HP, we are constantly pushing the envelope in design, productivity, security and entertainment to build innovative products for ‘One Life’,” Anneliese Olson, vice president of personal systems business, HP Asia Pacific and Japan, told reporters here.

By utilising Continuum in Windows 10, the Elite x3 enables frictionless multi-screen transitions between a phone and a desktop PC. The device sports an 8MP front camera and a 16MP rear camera.

For users’ security, the device has both a fingerprint reader and an iris recognition camera, offering biometric security as part of the Windows Hello framework.

“Users can dock Elite x3 with its ecosystem of accessories to render desktop and laptop productivity experiences while also retaining productivity on-the-go in a world-class premium and commercial-grade phablet,” HP added.

Although the Elite x3’s docking station is not new in the field, what takes the HP device into a new league is its mobile extender, which helps the Elite x3 double as a laptop.

The Elite x3 offers a unique computing experience by leveraging the power of the Snapdragon 820 processor, the optional HP Desk Dock and the optional Lap Dock, letting people work on their own terms no matter the locale.

Users can also enjoy easier, faster charging with Qualcomm Quick Charge 3.0 technology.

The Desk Dock offers a full featured desktop experience for the Elite x3.

It includes a DisplayPort for external monitor support, two USB-A ports and a USB-C connection for business continuity, and wired Ethernet to seamlessly scale users’ productivity at their desk.

The Elite x3 docks in portrait mode at a comfortable viewing angle when sitting at a desk.

The dock also supports the Elite x3 with and without a protective case.

The HP Lap Dock creates a laptop experience using a near-zero-bezel 12.5-inch diagonal HD display, and weighs a mere one kilogram.

No data is stored on the Lap Dock for additional security and all of the apps, passwords and files are managed and stored from the Elite x3.

The Elite x3 also allows users to work with the apps they love and rely on.

HP Workspace – an app catalog designed by HP to easily enable access to virtualised apps – drives a seamless app usage experience.

HP Workspace on the Elite x3 creates a virtual PC, giving users access to company curated catalogs of x86 apps via a virtualised solution.

Users benefit from quick access to their virtualised apps with a full keyboard and mouse experience not typically available from a mobile device when using the Desk Dock and Lap Dock.

HP is also partnering with Salesforce, the world’s leading customer relationship management (CRM) platform, to include Salesforce on every Elite x3, empowering users to run their businesses from their mobile devices with powerful tools that work in the cloud, keeping them up-to-date with whatever real-time data they need.

The price of Elite x3 device is yet to be announced.

Nest to permanently brick Revolv smart home devices

Back in 2014, Google/Alphabet’s Nest bought a rival home automation company, Revolv. Prior to the acquisition, Revolv had focused on building a smart home automation system that could control lights, open doors, and even brew coffee on demand. Post-acquisition, Revolv stopped selling its own products, though it pledged to continue supporting its existing customer base. Now, Nest is pulling the plug on that promise, despite the fact that Revolv hardware and software were sold with a “lifetime subscription.”

Let’s be clear on this point: Nest isn’t saying “We won’t support the existing software or infrastructure with future updates.” Nest is pulling the plug on Revolv, period. To quote from the Revolv website: “As of May 15, 2016, your Revolv hub and app will no longer work.”

The Internet of sh*tty Things

Revolv only had a small customer base, though its products reviewed reasonably well. But there’s a stark disconnect between how Internet of Things and other “smart” devices are marketed to people, and the reality of how products and services actually function in this day and age. The “lifetime” guarantee that Revolv offered turned out to be meaningless, even though Google/Alphabet could afford to support these devices throughout their entire useful lives. Cloud backup companies that sell data protection plans to consumers always include verbiage that denies said users the right to recognize any value from said data in the event the service loses or destroys their backups. The security issues raised by poorly designed IoT devices are numerous and growing, and that’s before we get to the privacy implications.

There is no reason why shutting down Revolv should automatically disable both the app interface and the hub itself — except, of course, that Revolv was never designed to operate independently of a centrally located cloud service. Companies love to talk about the benefits of these products in terms of ease-of-use and simplicity, while never acknowledging the downside. The Revolv, which ran $300 just 17 months ago, was expensive enough to include the local processing power required to do its job.

Writing on Medium, Arlo Gilbert, CEO of Televero, notes:

On May 15th, my house will stop working. My landscape lighting will stop turning on and off, my security lights will stop reacting to motion, and my home made vacation burglar deterrent will stop working. This is a conscious intentional decision by Google/Nest.

To be clear, they are not simply ceasing to support the product, rather they are advising customers that on May 15th a container of hummus will actually be infinitely more useful than the Revolv hub.

The concept of planned obsolescence is nothing new; the term was first used in 1932. In the past, planned obsolescence was conceived of as either a marketing effort (convince people they need something new before they actually really do), or as a deliberate design methodology that ensured products would break and need to be replaced on a fairly regular basis.

Revolv, it could be argued, is a third type of planned obsolescence. Instead of designing the hubs to break or aggressively marketing Nest products as a replacement for Revolv hardware, Google/Alphabet can simply shut down the entire product family at will.

The Internet of Things is often marketed as enabling products and solutions that couldn’t exist otherwise, but all too often these products are used to limit consumer freedom, not expand it. Vanity Fair has a recent profile of Juicero, a Silicon Valley startup that’s raised more than $120 million in recent funding rounds. The self-described “Keurig for fresh juice” company sells a $700 juicer and juice packs that cost between $4 and $10 each. The article describes the product as follows:

The juice packs are stamped with QR codes, which the machine scans and uses to determine if the fruits and vegetables are fresh enough for it to press into an eight-ounce cup of juice for you. If it’s not, the pack is discarded. It’s a Wi-Fi connected device, which means if the Internet’s out, you can’t have your cup of beet-and-apple juice that morning.

Keurig, of course, is now infamous for its attempt to create DRM coffee, and Juicero appears to think it can follow a similar path by forcing customers to adopt DRM from the beginning rather than at a later date. These types of products add nothing useful to the larger ecosystem. After seeing how Nest is treating Revolv, I can’t say I’d ever be interested in purchasing one of the company’s primary products. If Nest thinks it’s okay to completely deactivate Revolv, it’ll have no problem turning off its own hardware some day.

AMD ‘pre-announces’ 7th-generation Bristol Ridge, plans for Computex launch

It’s only April, but AMD is raring to talk about its next-generation family of APUs and their various improvements. The upcoming Bristol Ridge family of APUs will debut on mobile first — AMD has partnered with HP to launch a new iteration of the Envy x360 based on its upcoming seventh-generation hardware.

Pre-announcing Bristol Ridge

AMD is calling this a “pre-announcement,” a term which reminds me of George Carlin’s rant about the abuse of the prefix “pre.” Let’s just call this an early reminder that AMD has new hardware coming down the pipe and a look at what the new chips are capable of. AMD is forecasting a 23% improvement in 3DMark scores, and a 5% improvement in PCMark 8v2.

Those performance gains are easy to explain once you consult the notes at the back of the presentation. AMD’s Bristol Ridge was tested using DDR4-1866, while the older Carrizo systems are tapping DDR3-1600. That gives Bristol Ridge a roughly 17% memory bandwidth advantage over Carrizo, and we already know AMD’s APUs are almost entirely bandwidth-bound. Toss in a small core clock increase and a bit more top-end CPU frequency, and that’s the difference between the two chips.
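
As a rough sanity check on that 17% figure, the back-of-the-envelope math is below. It assumes both platforms use the same channel width, so peak bandwidth scales directly with transfer rate; the transfer rates are the ones quoted in AMD’s footnotes.

```python
# Effective transfer rates from the slide footnotes (MT/s).
bristol_ridge_ddr4 = 1866
carrizo_ddr3 = 1600

# With identical channel widths, peak bandwidth scales with transfer rate,
# so the advantage is just the ratio of the two rates.
advantage = bristol_ridge_ddr4 / carrizo_ddr3 - 1
print(f"Bristol Ridge memory bandwidth advantage: {advantage:.1%}")  # ~16.6%, i.e. roughly 17%
```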

AMD is also talking up its Cinebench scores, as shown below, but the company’s reported baseline benchmarks are a bit off compared to what we’ve seen in retail. We elected to create our own graph for this data since the slide isn’t completely clear.

Cinebench R15 comparison chart

AMD’s slide claims this data is based on Cinebench 11.5, while the actual footnote says Cinebench R15. R15 is a better fit for the data, so we’re going to go with that. Even so, there’s an odd discrepancy in the claimed performance of the Kaveri platform. AMD reports a 66.48 in Cinebench R15 — considerably below the measured performance median of 75 in that benchmark as recorded by Notebookcheck.net. The Carrizo scores, in contrast, are exactly in line with what we’d expect from the FX-8800P. Either way, AMD is claiming to have picked up a respectable 14% from Bristol Ridge over and above Carrizo. Higher turbo clocks could easily yield that kind of improvement, though this would also imply that speculation about a 14nm Bristol Ridge was incorrect — if AMD had actually taken Carrizo down to 14nm, we’d see larger improvements.
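
To make the discrepancy concrete, here is the arithmetic on the figures quoted above; nothing beyond the numbers already cited:

```python
# Cinebench R15 figures quoted above.
amd_reported_kaveri = 66.48    # AMD's reported Kaveri baseline
notebookcheck_median = 75.0    # measured median per Notebookcheck.net

understatement = notebookcheck_median / amd_reported_kaveri - 1
print(f"Kaveri baseline understated by roughly {understatement:.0%}")  # ~13%

# For comparison, AMD's claimed Bristol Ridge uplift over Carrizo is 14%.
```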

New model numbers, hints of a product position

I suspect this last information isn’t data AMD intended to include, but its presentation also contains reference to at least one specific Bristol Ridge processor — an FX-9830P equipped with 8GB of DDR4-2400 RAM. I’m not sure how much weight to put on this, because the same annotation also claims AMD’s FX-7600P was tested using 8GB of DDR4, and Kaveri doesn’t support that RAM standard.

What’s somewhat more interesting is AMD appears to be showcasing different system configurations to OEMs, at least. The annotated slides reference a “Work Faster” configuration (outfitted with a 35W CPU and 8GB of DDR4-2400) as well as a “Play Longer” system (15W CPU with 8GB of DDR4-1866). Again, this may be nothing but an internal method of referencing two different types of system configurations AMD would like to see OEMs build. But it would be great to see the company provide more guidance and firmer requirements for its Bristol Ridge platform. As we discussed several months ago, many of the Carrizo systems you can buy today are fundamentally configured in ways that don’t make sense for the price points the hardware was intended to target.

It’s not clear yet which kind of system the HP Envy x360 will turn out to be. Many Carrizo systems were saddled with poor performance thanks to single-channel memory configurations and screens that weren’t well-calibrated or had poor brightness. The fact that HP is offering a 4K option seems to suggest we can expect better things this time around, but we’re going to wait and see before passing judgment.

AMD’s CPU and APU divisions are in a difficult position. They can iterate on Carrizo and deliver some significant improvements year-on-year, the same way that Carrizo made major improvements over Kaveri, especially in video playback power consumption. None of this, however, will fundamentally alter the company’s fortunes. Zen, not Bristol Ridge, is AMD’s one chance to regain at least some of the ground it’s lost to Intel over the past five years. With no current date on when Zen APUs will actually come to market, Bristol Ridge will have to do what it can to anchor things for the next 18-24 months.

Nvidia goes all-in on self-driving cars, including a robotic car racing league

Jen-Hsun Huang raving about the Nvidia-powered Robocar for Roborace

Nvidia doesn’t always announce new consumer graphics cards at its annual technology conference, but it was widely expected to this year. Instead, GTC 2016 is all about AI, VR, and especially self-driving cars. Following up on its announcement of the Drive PX 2 car computer, Nvidia updated its plans to ship a complete set of developer tools — fueled by its own autonomous vehicle research — for car makers, and to sponsor and help equip a robot car racing league.

DriveWorks is the power behind the Drive PX 2

A supercomputer in your trunk, like Nvidia’s Drive PX 2, isn’t much good without the software to run it. That’s where Nvidia’s DriveWorks platform comes in. First announced at CES, it is getting closer to reality with a “Spring 2016” ship date. Nvidia CEO Jen-Hsun Huang also used his keynote to go into more detail about what it will include. The developer platform starts with sensor fusion and computer vision software that can work with up to 12 cameras and other sensors to provide a comprehensive model of the vehicle’s environment. From there, advanced machine learning capability will assist with navigation, vehicle control, and path planning.

Path planning is the tricky process of deciding where to navigate the car in traffic or through an intersection.

High-quality maps, like those from HERE, are also going to be supported. One interesting feature is support for map creation using the DriveWorks in-car platform coupled with cloud-based processing for the actual map creation. It was a little unclear from Huang’s description exactly how all this would work — except that he hopes and expects that the cloud will be populated with Nvidia’s new $130K DGX-1 supercomputer — but what is clear is that he sees this technology greatly reducing the cost of mapping areas, and of training autonomous vehicles. In particular, it should make it possible to do a better job of keeping maps up to date. Instead of needing routes to be re-driven with expensive, specialized vehicles to pick up changes in the road layout or obstacles, data from “regular” Drive PX 2-equipped cars could be used.

Self-driving with DAVENET, or “I can do that, Dave”

Rounding out Nvidia’s DriveWorks offering will be a deep neural network (DNN) that has been trained to know how to drive. Traditionally, autonomous vehicles, such as the ones used in the DARPA challenge, have relied on manually-coded algorithms to follow a desired route, and provide vehicle control. Nvidia (along with many other current vehicle research teams) has been experimenting with using deep learning neural networks instead. According to Huang (and illustrated with a demo video), after only 3,000 miles of supervised driving, its car — powered by its DAVENET (formerly named DRIVENET) neural network — was able to navigate on freeways, country roads, gravel driveways, and in the rain.

The computing power required to recognize and accurately track dozens of objects across many cameras and sensors is definitely supercomputer-worthy.

Of course, what he showed was only a demo video. But all in all, it was quite a remarkable achievement when contrasted with the hundreds of man-years of coding that went into the much-less-sophisticated driving of the DARPA challenge cars only 10 years ago. Obviously, Nvidia isn’t suddenly planning to become a car company, but it will be providing its technology as part of the set of tools for the auto industry to use to take advantage of its Drive PX 2. Huang showed, for example, how the PX 2’s ability to process 12 cameras at once not only assists driving safely through traffic and obstacles, but builds a sufficient model of the world around it to allow for adjusting to road conditions and routing.

Roborace: Full-size robotic car racing

For decades, car and auto accessory manufacturers have used racing as both an advertising tool and a way to advance their own research and development. Whether it is F1, IndyCar, or NASCAR, factory teams are ever present and always using what they learn to help them with their next generation of street vehicles. Now that autonomous operation is an increasingly realistic future path for road cars, bringing computing front and center in auto development, it makes sense racing should become a platform for AI-based vehicle R&D.

Nvidia's Drive PX 2 replaced a supercomputer in Baidu's autonomous vehicle project

That’s exactly what Nvidia and others are planning for the newly announced Roborace league. Piggybacking off the fast-growing Formula E (all-electric) schedule and car design, the league will feature 20 identical Roborace cars allocated to 10 teams. They will race on the same courses as Formula E, except without drivers. The cars won’t be remote-controlled, either. They’ll be fully autonomous, using an Nvidia Drive PX 2 portable supercomputer to run their software. So the teams’ innovation and differentiation will be in the software they develop for the race. The Roborace is scheduled to start alongside the 2016-2017 Formula E season, later this year. Roborace founder Denis Sverdlov told GTC attendees he expected it to make heroes out of software developers: “It’s not possible to get competitive advantage based on how much money you put in hardware. Our heroes are not the drivers. Our heroes are engineers.”

Jealous? You too can build a (small) self-driving car!

You can DIY your own robo racecar by following along with JetsonHacks.

Along with each new autonomous vehicle announcement, there is always a statement of the massive investment needed to make it happen. But for those of us who want to do more than be passive spectators, there is an exciting new opportunity to learn how to build your own — scaled-down — robotic race car. Startup JetsonHacks has taken MIT’s RACECAR autonomous car learning platform and made it accessible to the DIY community with detailed assembly instructions, and cost-saving hardware options to make it more affordable than the university’s original version. The RACECAR is a massive kit bash of an off-the-shelf RC vehicle — a Traxxas Rally — so that all the DIY fun is concentrated on the control and programming. The brain is (naturally) a Jetson TK1, running the Robot Operating System (ROS).

In an exclusive interview, JetsonHacks Founder Bill Jenson excitedly explained that this year will feature an upgraded model based on this Spring’s MIT Controls Course — which will be available online — and a new design featuring a more-powerful Jetson TX1. If you’d rather flex your maker muscle with a drone, he also offers a lot of great DIY drone advice based on the DJI Matrice 100 development platform.

Why you’re better off waiting a few more months to upgrade your GPU

Yesterday, Nvidia took the wraps off its high-end GP100 GPU and gave us a look at what its top-end HPC configuration would look like come Q1 2017. While this new card is explicitly aimed at the scientific computing market and Nvidia has said nothing about future consumer products, the information the company revealed confirms some of what we’ve privately heard about next-generation GPUs from both AMD and Nvidia.

If you’re thinking about using some of your tax rebate on a new GPU or just eyeing the market in general, we’d recommend waiting at least a few more months before pulling the trigger. It may even be worth waiting until the end of the year based on what we now know is coming down the pipe.

What to expect when you’re expecting (a new GPU)

First, a bit of review: We already know AMD is launching a new set of GPUs this summer, codenamed Polaris 10 and Polaris 11. These cores are expected to target the sweet spot of the add-in-board (AIB) market, which typically means the $199 – $299 price segment. High-end cards like the GTX 980 Ti and Fury X may command headlines, but both AMD and Nvidia ship far more GTX 960s and Radeon R7 370s than they do top-end cards.

Polaris 10 and 11 are expected to use GDDR5 rather than HBM (I’ve seen the rumors that claim some Polaris SKUs might use HBM1 — it’s technically possible, but I think it exceedingly unlikely), and AMD has said these new GPUs will improve performance-per-watt by 2.5x compared with their predecessors. The company’s next-generation Vega GPU family, which arrives late this year, is rumored to pack 4,096 shader cores and HBM2 memory, and to be the first ground-up new architecture since GCN debuted in 2012.

We don’t know yet what Nvidia’s plans are for any consumer-oriented Pascal cards, but the speeds and core counts on GP100 tell us rather a lot about the benefits of 16nm FinFET and how it will impact Nvidia’s product lines this generation.

With GP100, Nvidia increased its core count by 17% while simultaneously ramping up the base clock by 40%. Baseline TDP for this GPU, meanwhile, increased by 20%, to 300W. The relationship between clock speed, voltage, and power consumption is not linear, but the GTX Titan X shipped with a base clock of 1GHz, only slightly higher than the Tesla M40’s 948MHz. The GP100 has up to 60 SM units (only 56 are enabled), which puts the total number of cores on-die at 3,840. That’s 25% more cores than the old M40, but the die is just 3% larger.
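
Those core counts are easy to verify. The quick check below assumes the published figures of 64 FP32 cores per GP100 SM and 3,072 CUDA cores for the GM200-based Tesla M40:

```python
# Published configuration details used for this sanity check.
cores_per_pascal_sm = 64        # FP32 cores per GP100 SM
full_sms, enabled_sms = 60, 56  # full die vs. shipping Tesla P100
m40_cores = 3072                # GM200-based Tesla M40

full_die_cores = full_sms * cores_per_pascal_sm      # 3,840
enabled_cores = enabled_sms * cores_per_pascal_sm    # 3,584

print(f"Full GP100 die: {full_die_cores} cores, "
      f"{full_die_cores / m40_cores - 1:.0%} more than the M40")   # ~25%
print(f"Enabled on Tesla P100: {enabled_cores} cores, "
      f"{enabled_cores / m40_cores - 1:.0%} more than the M40")    # ~17%
```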

We may not know details, but the implications are straightforward: Nvidia should be able to deliver a high-end consumer card with 30-40% higher clocks and significantly higher core counts within the same price envelopes that Maxwell occupies today. We don’t know when Team Green will start refreshing its hardware, but it’ll almost certainly be within the next nine months.

Here’s the bottom line: AMD is going to start refreshing its midrange cards this summer, and it’d be unusual if Nvidia didn’t have fresh GPUs of its own to meet them. Both companies will likely follow with high-end refreshes towards the end of the year or very early next year, again, probably within short order of each other.

When waiting makes sense

There’s a cliche in the tech industry that claims it’s foolish to try and time your upgrades because technology is always advancing. 10-12 years ago, when AMD and Nvidia were nearly doubling their top-end performance every single year, this kind of argument made sense. Today, it’s much less valid. Technology advances year-on-year, but the rate and pace of those advances can vary significantly.

The 14/16nm node is a major stepping stone for GPU performance because it’s the first full-node shrink that’s been available to the GPU industry in more than four years. If you care about low power consumption and small form factors, upcoming chips should be dramatically more power efficient. If you care about high-end performance, you may have to wait another nine months, but the amount of GPU you’ll be able to buy for the same amount of money should be 30-50% higher than what you’ll get today.

There’s also the question of VR technology. We don’t know yet how VR will evolve or how seriously it will impact the future of gaming; estimates I’ve seen range from total transformation to a niche market for a handful of well-heeled enthusiasts. Regardless, if you plan on jumping on the VR bandwagon, it behooves you to wait and see what kind of performance next-generation video cards can offer.

Remember this: VR technology demands both high frame rates and extremely smooth frame delivery, and this has knock-on effects on which GPUs can reliably deliver that experience. A GPU that drives 50 frames per second where 30 is a minimum requirement is pushing 1.67x the frame rate the user demands as a minimum standard. A GPU that delivers 110 frames per second where 90 is a minimum requirement is running at only 1.22x the target frame rate. It doesn’t take much in the way of additional eye candy before our second GPU is back down at its 90 FPS floor.
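
Here is that headroom arithmetic spelled out, using the two hypothetical GPUs described above:

```python
def headroom(delivered_fps, minimum_fps):
    """Ratio of the delivered frame rate to the minimum acceptable frame rate."""
    return delivered_fps / minimum_fps

# Traditional gaming: 30 FPS floor, GPU delivers 50 FPS.
print(f"50 FPS against a 30 FPS floor: {headroom(50, 30):.2f}x the minimum")    # 1.67x

# VR: 90 FPS floor, GPU delivers 110 FPS.
print(f"110 FPS against a 90 FPS floor: {headroom(110, 90):.2f}x the minimum")  # 1.22x
```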

The final reason to consider delaying an upgrade is whether you plan to upgrade to a 4K monitor at any point in the next few years. 4K pushes roughly 4x the pixels of a 1080p monitor, and modern graphics cards are often 33-50% slower when gaming at that resolution. Waiting a few more months to buy at the beginning of the new cycle could mean 50% more performance for the same price and gives you a better chance of buying a card that can handle 4K in a wider variety of titles.
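
The pixel math behind that 4x figure, using the standard 3840x2160 (4K UHD) and 1920x1080 resolutions:

```python
uhd_4k = 3840 * 2160     # 8,294,400 pixels
full_hd = 1920 * 1080    # 2,073,600 pixels

print(f"4K renders {uhd_4k / full_hd:.0f}x as many pixels as 1080p")  # 4x
```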

If your GPU suddenly dies tomorrow or you can’t stand running an old HD 5000 or GTX 400-series card another minute, you can upgrade to a newer AMD or Nvidia model available today and still see an enormous performance uplift — but customers who can wait for the next-generation refreshes to arrive will be getting much more bang for their buck. We don’t know what the exact specs will be for any specific AMD or Nvidia next-gen GPU, but what we’re seeing and hearing about the 16/14nm node is extremely encouraging. If you can wait, you almost certainly won’t regret it — especially if you want a clearer picture on which company, AMD or Nvidia, performs better in DirectX 12.

Samsung announces new ’10nm-class’ DDR4

Samsung announced this week that it’s begun production of 8Gb DDR4-3200 chips using its new ’10nm class’ production lines. According to Samsung, these new chips aren’t just a business-as-usual node shrink — the company had to perform some significant additional design steps to bring the hardware to market.

First, a bit of clarification: This isn’t actually 10nm DRAM, though Samsung wouldn’t mind if you thought it was. Samsung’s PR helpfully explains that “10nm-class” denotes a process technology node somewhere between 10 and 19 nanometers, while “20nm-class” means a process technology node somewhere between 20 and 29 nanometers.

The company goes on to note that while its first “20nm-class” DDR3 came to market in 2011, it didn’t actually launch 20nm DDR3 until 2014. We expect something similar to be happening here. This kind of sleight-of-hand has become a bit of a Samsung trait; the company also likes to claim its EVO family of drives use “3-bit MLC” NAND as opposed to TLC, probably because the TLC moniker took a bit of a beating after the 840 EVO had so many long-term problems. But that’s a different topic.

10nm or not, Samsung claims that it had to adopt quadruple patterning lithography for its new DDR4, as well as develop a new proprietary cell design and new methods of ultra-thin dielectric layer deposition. The new DDR4 is expected to clock up to 3.2GHz — we’ll undoubtedly see third-party manufacturers ramping higher than that.
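
For context, the textbook peak-bandwidth figure for a single 64-bit channel of DDR4-3200 works out as follows; this is the standard calculation, not a Samsung-supplied number:

```python
transfer_rate = 3200e6          # transfers per second for DDR4-3200
channel_width_bytes = 64 // 8   # a standard 64-bit DDR4 channel

peak_bandwidth = transfer_rate * channel_width_bytes   # bytes per second
print(f"Peak per-channel bandwidth: {peak_bandwidth / 1e9:.1f} GB/s")  # 25.6 GB/s
```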

DDR4-4266 is technically already available on NewEgg, provided you’re willing to pay $300 for 8GB of RAM. The performance benefits of that much memory frequency are questionable, to say the least, but we typically see a steady decrease in RAM price and an increase in memory frequencies over the life of any given RAM generation. DDR4 is still relatively young; it wouldn’t be surprising to see DDR4-4266 selling for a fraction of what it costs today in a few more years.

Available in vanilla, chocolate, cherry, and SO-DIMM.

The counter-argument to this, however, is the fact that Samsung is relying on quad patterning to manufacture this DRAM. Quadruple patterning means that Samsung performs multiple additional lithography steps to manufacture its DRAM. There are multiple ways to perform multi-patterning and Samsung hasn’t specified which it uses, but the important thing to know for our purposes is that multi-patterning significantly increases manufacturing costs. DRAM produced by this method may not hit the same price points as older memory did, or it may simply take longer to decrease in price.

Samsung intends to take what it’s learned from this new ’10nm-class’ product and deploy it in mobile form factors later this year. JEDEC’s LPDDR4 roadmap has a path to 4266 MHz already, and we may see Samsung rolling out high frequencies in the near future. As screen resolutions have skyrocketed, mobile GPUs have often struggled to keep pace, and adding faster RAM is the best way to improve performance in an otherwise-bottlenecked application.

Nvidia’s vision for deep learning AI: Is there anything a computer can’t do?

Nvidia's Jen-Hsun Huang announcing the DGX-1 at GTC 2016

It is nearly impossible to overstate the enthusiasm for deep-learning-based AI among most of the computer science community and big chunks of the tech industry. Talk to nearly any CS professor and you get an overwhelming sense that just about every problem can now be solved, and every task automated. One even quipped, “The only thing we need to know is which job you want us to eliminate next.” Clearly there is a lot of hubris baked into these attitudes. But with the rapid advances in self-driving vehicles, warehouse robots, diagnostic assistants, and speech and facial recognition, there is certainly plenty of reason for computer scientists to get cocky.

And no one is better at being cocky than Nvidia CEO, Jen-Hsun Huang. On stage, he is always something of a breathless whirlwind, and as he recapped the recent, largely Nvidia-powered, advances in AI, and what they portend for the future, it reminded me of a late-night infomercial, or perhaps Steve Jobs revealing one more thing. In this case, though, Nvidia has a lot more than one thing up its sleeve. It is continuing to push forward with its AI-focused hardware, software, and solutions offerings, many of which were either announced or showcased at this year’s GTC.

Nvidia’s AI hardware lineup: Tesla P100 GPU and DGX-1 Supercomputer join the M40 and M4

For anyone who still thinks of Nvidia as a consumer graphics card company, the DGX-1 should put that idea to rest. A $129,000 supercomputer with 8 tightly-coupled state-of-the-art Pascal-architecture GPUs, it is nearly 10 times faster at supervised learning than Nvidia’s flagship unit a year ago. For those who want something a little less cutting edge, and a lot less expensive, Nvidia offers the M40 for high-end training, and the M4 for high-performance and low-power AI runtimes.

If you want access to these high-end GPUs you'll likely also need a high-end rig, like this Cipher model being shown off by Rave at Nvidia GTC 2016

Nvidia’s AI developer tools: ComputeWorks, Deep Learning SDK, and cuDNN 5

With cuDNN 5 and a Tesla GPU, Recurrent Neural Networks can run up to 6 times as fast.

Nvidia has supported AI, and especially neural net, developers for a while with its Deep Learning SDK. At GTC, Nvidia announced version 5 of its neural network library (cuDNN). In addition to supporting the new Tesla P100 GPU, the new version promises faster performance and reduced memory usage. It also adds support for Recurrent Neural Networks (RNNs), which are particularly useful for applications that work with time series data (like audio and video signals — speech recognition, for example).

CuDNN isn’t a competitor to the big neural net developer tools. Instead, it serves as a base layer for accelerated implementations of popular tools like Google TensorFlow, UC Berkeley’s Caffe, University of Montreal’s Theano, and NYU’s Torch. However, Nvidia does have its own neural net runtime offering, the Nvidia GPU Inference Engine (GIE). Nvidia claims over 20 images per second, per watt for GIE running on either a Tesla M4 or Jetson TX1. CuDNN 5, GIE, and the updated Deep Learning SDK are all being made available as part of an update to Nvidia’s ComputeWorks.

TensorFlow in particular got a big shout-out from Huang during his keynote. He applauded that it was open source (like several of the other tools are) and was helping “democratize AI.” Because the source is accessible, Nvidia was able to adapt a version for the DGX-1, which he and Google’s TensorFlow lead Rajat Monga showed running (well, showed a monitor session logged into a server someplace that was running it).

The always-fascinating poster session in the GTC lobby featured literally dozens of different research efforts based on using Nvidia GPUs and one of these deep-learning engines to crack some major scientific problem. Even the winner of the ever-popular Early Stage Companies contest was a deep-learning application: Startup Sadako is teaching a robot how to learn to identify and sort recyclable items in a waste stream using a learning network. Another crowd favorite at the event, BriSky, is a drone company, but relies on deep learning to program its drones to automatically perform complex tasks such as inspections and monitoring.

JetPack lets you build things that use all that great AI

MIT's sidewalk-friendly personal transport vehicle at Nvidia GTC 2016

Programming a problem-solving neural network is one thing, but for many applications the final product is a physical vehicle, machine, or robot. Nvidia’s JetPack SDK — the power behind the Jetson TX1 developer kit — provides not just an Ubuntu-hosted development toolchain, but libraries for integrating computer vision (Nvidia VisionWorks and OpenCV4Tegra), as well as Nvidia GameWorks, cuDNN, and CUDA. Nvidia itself was showcasing some of the cool projects that the combination of the JetPack SDK and Jetson TX1 developer kit have made possible, including an autonomous scaled-down race car and an autonomous (full-size) three-wheeled personal transport vehicle, both based on work done at MIT.

How Neural Networks and GPUs are pushing the boundaries of what computers can do

Huang also pointed to other current examples of how deep learning — made possible by advances in algorithms and increasingly powerful GPUs — is changing our perception of what computers can do. Berkeley’s Brett robot, for example, can learn tasks like putting clothes away, assembling a model, or screwing a cap on a water bottle by simple trial and error — without explicit programming. Similarly, Microsoft’s image recognition system has achieved much higher accuracy than the human benchmark that was the gold standard until as recently as last year. And of course, AlphaGo’s mastery of one of the most mathematically complex board games has generated quite a bit of publicity, even among people who don’t typically follow AI or play Go.

Has Nvidia really created a super-human? It thinks so

In line with its chin-out approach to new technologies, massive banners all over the GTC proclaimed that Nvidia’s AI software learned to be a better driver than a human in “hours.” I assume they are referring to the 3,000 miles of training that Nvidia’s DAVENET neural network received before it was used to create the demo video we were shown. The statement reeks of hyperbole, of course, since we didn’t see DAVENET do anything especially exciting, or avoid any truly dangerous situations, or display any particular gift. But it was shown navigating a variety of on- and off-road routes. If it was truly trained to do that by letting it drive 3,000 miles (over the course of 6 months according to the video), that is an amazing accomplishment. I’m sure it is only a taste of things to come, and Nvidia plans to be at the center of them.

SK Hynix highlights the huge size advantage of HBM over GDDR5 memory

For decades, computer chips became smaller and more efficient by shrinking the size of various features and finding ways to pack more transistors into a smaller area of silicon. As die shrinks have become more difficult, companies have turned to 3D die stacking and technologies like HBM (High Bandwidth Memory) to improve performance.

We’ve talked a great deal about HBM and HBM2 in the past few years, but photographic evidence of the die savings is a bit harder to come by. SK Hynix helpfully had some HBM memory on display at GTC this year, and Tweaktown caught photographic evidence of 8Gb of GDDR5 compared with a 1 GB HBM stack and a 4GB HBM2 stack.

The one quibble I have with the Hynix display is that the labeling mixes GB and Gb. The HBM2 package is significantly larger than the HBM1 chip, but still much smaller than the 8Gb of GDDR5, despite packing 4x more memory into its diminutive form factor.
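
To keep the units straight, here is the conversion using the capacities as labeled on the display; the 4x figure quoted above falls out of it directly:

```python
# Capacities as labeled on the Hynix display (note the mixed units).
gddr5_gigabits = 8      # "8Gb" of GDDR5, in gigabits
hbm2_gigabytes = 4      # "4GB" HBM2 stack, in gigabytes

gddr5_gigabytes = gddr5_gigabits / 8    # 8 bits per byte, so 8Gb = 1GB
ratio = hbm2_gigabytes / gddr5_gigabytes
print(f"The 4GB HBM2 stack packs {ratio:.0f}x the capacity of the 8Gb of GDDR5")  # 4x
```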

We don’t expect HBM2 to hit market until the tail end of this year and the beginning of next; GDDR5 is expected to have one last hurrah with the launch of AMD’s Polaris this year. These space savings, however, illustrate why both AMD and NV are moving to HBM2 at the high end. Smaller dies mean smaller GPUs with higher memory densities for consumer, professional, and scientific applications. Technologies like GDDR5X, which rely on 2D planar silicon, can’t compete with the capacity advantage of layering multiple chips on top of each other and connecting them with TSVs (through silicon vias). GDDR5 will continue to be used for budget and midrange cards this generation, but HBM2 will likely replace it over the long term as prices fall, lower-end cards require more VRAM, and manufacturer yields improve.

Over the long term, though, even HBM2 isn’t enough to feed the needs of next-generation exascale systems. An Nvidia presentation on high performance computing (HPC) lays out the energy requirements of DRAM subsystems: shifting to HBM drives a significant improvement in I/O power and an absolute improvement in total power consumption for the DRAM subsystem. HBM2 draws less power to provide 1TB/s of bandwidth than GDDR5 used to provide 200GB/s.

Unfortunately, straightforward scaling of the HBM2 interface won’t prevent future memory standards from exceeding GDDR5 power requirements. Long-term, additional improvements and process node shrinks are still necessary — even if die-stacking has replaced planar silicon die shrinks as the primary performance driver.

Meet the new HP Spectre: The world's thinnest laptop

HP continues its march toward premium PC territory with the new HP Spectre, which the company calls “the world’s thinnest laptop.” So far, the numbers stand up. According to HP, the 13-inch Spectre is 10.4mm thick, while Apple’s iconic MacBook Air is 17mm thick, as is the Lenovo LaVie (which can claim to be the world’s lightest 13-inch laptop). The 12-inch MacBook and the recent Razer Blade Stealth both clock in at 13mm thick.

When we get down to a few millimeters, one might think it wouldn’t make much of a difference, but a 17mm laptop feels very different from a 13mm laptop, and based on my short hands-on time with the HP Spectre, a 10.4mm laptop feels different from both of those.

A bold color scheme also helps the Spectre stand out, ditching the usual silver/grey for a dark, smoky gray with bold gold accents. The entire hinge is a bright, jeweled gold, which just draws more attention to its unusual design. To avoid unnecessary bulk, the hinge has moved in from the very rear edge, and is instead inset by a tiny bit. It’s a design we’ve seen on a handful of laptops over the years, although usually on much larger systems. That hinge is aluminum, as is the laptop’s lid, while the bottom panel is carbon fiber. HP says the mix of materials serves to give the Spectre the right balance between weight and stiffness, especially in the lid. At 2.45 pounds, this isn’t close to being the lightest 13-inch laptop ever, but it’s still very easy to pick up and carry around.

Inside the body, according to a deconstructed version of the system I was able to look at, a standard laptop battery is flattened down into multiple separate very thin cells, to fit across most of the bottom footprint. HP also uses smaller fans to pull air in and through the laptop, rather than exclusively pushing hot air out. It’s a version of a cooling scheme from Intel which it calls hyperbaric cooling.

Source: CNET