Intel diligence

Internet-connected devices such as mobiles, tablets and other portables are commonplace today. How strongly does Intel believe in this space, and what are its expectations in the mid and long term?

Intel and its OEM partners are making significant progress in the tablet and ultra-mobile space. There are over 30 Intel-based tablets and 2-in-1 devices running Windows* or Android in the market today or coming by Holiday 2013.

  • Intel’s next-generation 22nm quad-core Atom SoC (Bay Trail), based on the Silvermont microarchitecture, will include Gen7 graphics with DX11 support, full HD, Intel® Burst Technology 2.0, improved security features, a 2x CPU and 3x graphics performance improvement, and support for both Windows* and Android operating systems.
  • Intel is accelerating its Android efforts; with Bay Trail, for the first time Intel is offering OEMs a mobile platform solution with OS flexibility, delivering the same great Intel tablet performance and high-resolution graphics at cost savings to consumers.

With the advent of touch, the tech community was looking at gesture computing and gesture-driven interfaces. Then Google Glass came along. Intel has its own research in the field of perceptual computing and gesture recognition. What time frame are you looking at before such interfaces and devices (in the family of Glass) could become accessible (affordable) to the mainstream market?

Perceptual computing will reshape the way we interact with our devices, making interaction more natural, intuitive and immersive. Computing devices will be able to perceive our actions through new capabilities including close-range hand gestures, finger articulation, speech recognition, face tracking, augmented reality experiences, and more.

We believe that perceptual computing will bring new and exciting user experiences based on natural and intuitive human-computer interactions that can be applied to education, browsing, shopping, 3D modeling, collaboration, gaming, and other immersive experiences. Some examples that Intel has demonstrated are video conferencing with background extraction, virtual green-screen technology for video bloggers, gaming with gesture interaction, and 3D object manipulation.

Intel is pursuing plans to integrate depth-sensing technology and bring it to market in 2H’14. We are very excited about the new usages and experiences this technology will deliver to PCs. We are not disclosing details of the technology targeted for integration at this point.

Does Intel see its portfolio moving beyond ultrabooks and into Glass-like devices, thereby increasing potential for developers?

The Perceptual Computing SDK beta, first announced at IDF San Francisco 2012, went live in October and has since had thousands of downloads. It enables developers to create the next generation of immersive, engaging and innovative software applications that will change the way people interact with their devices. In March, Intel released the Gold version of the SDK.

The SDK gives developers access to:

  • Speech recognition: includes capabilities for voice command and control, short-sentence dictation and text-to-speech synthesis.
  • Facial analysis: includes face detection and recognition, six- and seven-point landmark detection, and attribute detection, including smiles, blinks and age groups.
  • Close-range depth tracking (6 inches to 3 ft): the SDK’s unique close-range finger-tracking mode lets developers define innovative uses that recognize the positions of each of the user’s hands, fingers and joints. Close-range tracking also supports recognition of static hand poses and moving hand gestures.
  • 2D/3D object tracking: includes compelling augmented reality usages that let developers combine real-time images from the RGB camera and close-range tracking from the depth sensor with 2D or 3D graphical images to create innovative, immersive experiences. The SDK supports marker-less object tracking, so pre-defined 2D or 3D objects can be inserted seamlessly into a live scene.
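To give a feel for what an application might do with per-joint finger-tracking data like the above, here is a minimal sketch of a "pinch" gesture classifier. All names and the threshold value are hypothetical illustrations, not the actual Perceptual Computing SDK API; the sketch only assumes that the tracker reports 3D fingertip coordinates per frame.

```python
import math

# Hypothetical sketch (not the real SDK API): classify a "pinch" gesture
# from tracked fingertip positions of the kind a close-range finger
# tracker could report each frame.

PINCH_THRESHOLD_MM = 20.0  # assumed trigger distance; would be tuned per camera


def distance_mm(a, b):
    """Euclidean distance between two 3D points given in millimetres."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))


def is_pinch(thumb_tip, index_tip, threshold=PINCH_THRESHOLD_MM):
    """True when the thumb and index fingertips are close enough to count as a pinch."""
    return distance_mm(thumb_tip, index_tip) < threshold


# Example frame: fingertip coordinates (x, y, z) in mm, as a tracker might report.
frame = {"thumb_tip": (100.0, 50.0, 300.0), "index_tip": (110.0, 55.0, 305.0)}
print(is_pinch(frame["thumb_tip"], frame["index_tip"]))  # distance ≈ 12.2 mm → True
```

An application would run a check like this on every frame and map the gesture to an action such as grabbing a 3D object, which is the style of interaction the close-range tracking mode is designed to enable.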

This creates new opportunities for developers to innovate, create applications for mainstream platforms and increase their revenue.
