Friday, January 27, 2017

AI in Smartphones: Separating Fact From Fiction, and Looking Ahead

As happens every year, one hot feature sets a trend in technology, and suddenly every company boasts its own unique variation of it. This year, that feature is AI. Hot on the heels of Alexa's and Google Assistant's holiday successes, Artificial Intelligence on phones has become the de facto must-have feature – whether consumers know it or not. Manufacturers seem not to realize, however, that AI doesn't mean "Anything Intuitive" – that's just how operating systems are supposed to behave. Yet OEMs seem eager to label nearly any vaguely intuitive feature as AI. As this trend will no doubt continue, it's important to take a moment to separate fact from fiction.


What is AI, really?

Before we dive in, let's outline some definitions and distinctions regarding the field of AI. AI breaks down into two main categories: General AI and Narrow AI. General AI, in theory, is meant to replicate human consciousness and resemble a sentient being – think I, Robot or Terminator – while Narrow AI is used to achieve a specific task or reasoning skill. General AI is still a long way from realization, so Narrow AI will be the focus here. The graphic below illustrates the evolution of the field and the tiers within it.

Image Credit: Nvidia Blogs

In one of the earliest milestones of AI, Arthur Samuel – the man who coined the term "Machine Learning" – programmed a computer to play checkers. Samuel used algorithms based on piece positioning, the number of pieces, and the proximity of pieces to "king" spots, among other things. This was the basis of early AI, which would soon cross over into Machine Learning. As the program continued to develop, it gained the ability to "learn" from previous situations – going on to play thousands of games against itself to improve its own skill – the same basic mechanism by which Machine Learning works today. Deep Learning, the most recent evolution of AI, goes a level further by leveraging Neural Networks, which enable computers to process data including pictures, text, and numbers, and then draw conclusions. Neural Networks use layers of Machine Learning components (often referred to as Neurons) to process and "learn" information in much the same way as the human brain, where repetition and variety are key. With the right algorithms, hardware, and the wealth of Big Data that now exists, Neural Networks have become very capable and efficient at absorbing large data sets – completing tasks and indeed learning from each of them.
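The learning loop described above can be sketched in a few lines of Python. This is a toy illustration with invented data – not Samuel's actual program – using a linear evaluation function over made-up board features (piece count, kings, proximity to the "king" row) whose weights are nudged toward observed game outcomes:

```python
import random

# Toy sketch of Samuel-style learning (illustrative only): score a board
# as a weighted sum of hand-picked features, then nudge the weights
# toward observed game outcomes over many "self-play" games.

def evaluate(features, weights):
    """Score a position as a weighted sum of its features."""
    return sum(w * f for w, f in zip(weights, features))

def learn(games, weights, rate=0.005):
    """Nudge weights so positions from won games score higher (LMS update)."""
    for features, outcome in games:          # outcome: +1 win, -1 loss
        error = outcome - evaluate(features, weights)
        weights = [w + rate * error * f for w, f in zip(weights, features)]
    return weights

random.seed(0)
weights = [0.0, 0.0, 0.0]   # piece count, kings, proximity to "king" row
# Fabricated data: winning positions tend to have more pieces and kings.
games = [([random.randint(4, 12), random.randint(0, 3), random.random()], 1)
         for _ in range(200)]
games += [([random.randint(0, 4), 0, random.random()], -1) for _ in range(200)]
random.shuffle(games)
weights = learn(games, weights)

# After training, the evaluation function prefers winning positions.
win_avg = sum(evaluate(f, weights) for f, o in games if o == 1) / 200
loss_avg = sum(evaluate(f, weights) for f, o in games if o == -1) / 200
print(win_avg > loss_avg)
```

The same update rule, run over thousands of self-played games instead of fabricated ones, is what let Samuel's program improve without being explicitly reprogrammed.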

Image Credit: Google Research Blogs


For example, instead of playing checkers, a computer may be tasked with recognizing a picture of a checkers game. Having been "trained" by processing thousands of pictures of checkers games, the layers within the Neural Network assign values to the probability or "confidence" that the present picture has the particular attributes of a checkers game. Each layer may be in charge of recognizing a certain attribute, such as the board's square shape, its checkered pattern, the colors of that pattern, the position or shape of the pieces, and much more. If these attributes have a high probability of presence, then the network may determine, with "X" degree of "certainty", that the picture is in fact a checkers game. Although Machine Learning alone has reached similar capabilities, Neural Networks have lessened the need for lengthy, explicit coding while also improving accuracy, efficiency, and overall capability.
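The "confidence" step can be illustrated with a minimal sketch. Nothing here is a real network: the per-attribute detector scores, weights, and bias below are all made up for the example, and a real network would learn them from data rather than hard-code them:

```python
import math

# Illustrative only: each "detector" reports a confidence that one
# attribute of a checkers board is present, and a final neuron combines
# them into an overall certainty between 0 and 1.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def overall_confidence(attribute_scores, weights, bias):
    """Weighted vote over per-attribute confidences, squashed to 0..1."""
    total = sum(w * s for w, s in zip(weights, attribute_scores)) + bias
    return sigmoid(total)

# Hypothetical detector outputs for one photo: square shape, checkered
# pattern, red/black colors, round pieces.
scores = [0.97, 0.92, 0.88, 0.75]
weights = [2.0, 3.0, 1.5, 2.5]   # the pattern detector matters most here
bias = -6.0                      # require several strong attributes at once

conf = overall_confidence(scores, weights, bias)
print(f"confidence: {conf:.2f}")   # ~0.87 for these made-up numbers
```

Stacking many such layers – each feeding its outputs into the next as inputs – is, loosely, what turns this into the "deep" networks discussed above.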

Though voice assistants such as Siri, Cortana, and Google Assistant are commonly known to utilize Neural Networks to improve their speech recognition, they have otherwise shown themselves to be quite limited. Generally, these voice assistants do little more than input/output for applications and web searches – learning little, if anything, about the user in the process. Even these well-funded and continually developed assistants have considerable room to improve on their automation, intelligence, and learning capacity. Given that, what level of useful integration can we expect from smaller OEMs who have only just entered the AI arena?

Huawei Honor Magic

"The Honor Magic breaks new ground by incorporating artificial intelligence designed to understand and analyze users' data in order to provide intelligent interactive features."

Go on.

"To further improve user experience, the Honor Magic houses the Honor Magic Live system, which anticipates users' needs and facilitate their daily lives by offering a whole host of predictive information. The Honor Magic Live system is, for instance, able to formulate a range of customized recommendations based on users' social conversations via instant messaging apps– conversations revolving around movies will trigger blockbuster recommendations."

"Honor Magic's Smart Display proactively retrieves and displays practical information, anticipating users' need. For example, ordering a cab with Honor Magic will trigger the driver's license plate number to be displayed on the screen."

Though these features can be useful, they are plainly not AI. While one could label them as "intuitive" or even "smart," bringing up boarding passes at the airport or pulling shipping information from emails has existed in phones for years now and requires no degree of "learning" or adapting. Huawei could conceivably implement a system to gather and "learn" from user data to improve efficacy over time, but this might be overkill for such a feature. Furthermore, this likely would not translate well to countries other than China, as Google Now already offers these features in just about every other market. It seems Huawei engineers are flirting with the voice-assistant industry, but rather than creating a full-blown voice assistant to unify these features, they have chosen to skip this step entirely and move directly to touting the phone as AI-driven.

Huawei Mate 9

"The Huawei Mate 9 automatically learns user habits to build app priority. The system anticipates the user's next moves to prepare resources in advance. This process is run on the phone, not the cloud for better performance and privacy protection."

This is a tricky one – or so it attempts to be. Learning from user habits to anticipate the next app to be opened and pre-emptively pooling resources for it does technically fall under the umbrella of Machine Learning, albeit at a very basic level. However, boasting that "This process is run on the phone, not the cloud for better performance and privacy protection," is quite misleading. In some applications of Machine Learning, extremely large data sets are stored on the cloud so that machines with much greater capabilities can process the data quickly and relay that information to your device. In the case of predicting the next app the user will open, the corresponding data set is extremely small and would never involve the cloud in any practical application of this feature. This bit seems to be pandering to consumer security concerns more than anything else.
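To show just how small that data set is, here is a minimal sketch of next-app prediction: a first-order frequency model that counts which app the user opens after the current one. The app names and launch history are invented, and Huawei's actual implementation is surely more involved, but the point stands – this fits comfortably on the phone:

```python
from collections import Counter, defaultdict

# Illustrative on-device next-app prediction: tally app-to-app
# transitions and predict the most frequent follow-up. No cloud needed.

transitions = defaultdict(Counter)

def record_launches(history):
    """Tally app-to-app transitions from a launch history."""
    for current_app, next_app in zip(history, history[1:]):
        transitions[current_app][next_app] += 1

def predict_next(current_app):
    """Most frequently observed follow-up to the current app, if any."""
    followers = transitions[current_app]
    return followers.most_common(1)[0][0] if followers else None

# Hypothetical launch history for one user.
history = ["camera", "gallery", "messenger", "camera", "gallery",
           "browser", "camera", "gallery", "messenger"]
record_launches(history)
print(predict_next("camera"))   # "gallery" follows the camera every time here
```

A system like this could pre-load the predicted app's resources the moment the current one opens, which is plausibly all the Mate 9's feature needs to do.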

"The Huawei Mate 9 automatically manages resources by prioritizing CPU, RAM and ROM for optimal performance and closes memory intensive background apps. Within the CPU, fine-grained scheduling maximizes computing efficiency. For RAM, it automatically recycles memory resources, creating more memory for priority apps and enabling stutter-free performance. And for ROM, it opens an exclusive channel of Storage I/O making the apps you use the most work even faster."

Huawei's track record with memory management is not a great one. Previously, they utilized a very basic system that informed the user of the most power-hungry background apps, then closed them. It seems this feature has become less obtrusive, though it remains minimally effective all the same. Beyond this, attempting to achieve "stutter-free performance" through such means is generally unnecessary. As we've seen, more substantive gains in performance can be made traditionally, through proper hardware/software pairings as well as optimizations to framework and design.

"The new F2FS file system improves the I/O storage performance. This speeds up the database engine allowing pictures to load more smoothly. The optimized performance of the rendering engine gives better control and a faster reaction to your touch."

This is the true catalyst for increased performance. Much like the optimizations in Android 7.1, most notably seen in the Google Pixel's buttery-smooth touch latency and responsiveness, the rendering tweaks in the Mate 9, the pairing of F2FS on UFS 2.1, and the highly capable Kirin 960 SoC are the true engines behind excellent system performance – not AI.

HTC U Ultra/Play

Details on HTC's AI endeavors are still scarce, especially on their own website. As such, the following information has been gathered by Gadgets360, based on their time with HTC representatives and the U handset at CES this year.

"With the new HTC U Ultra and HTC U Play, the company is betting big on it's new AI assistant called Sense Companion, which it claims will learn your usage behaviour over time in order to present you with priority notifications and alerts based on people you contact the most."

"According to HTC, you'll need to manually perform an initial setup of Sense Companion on the HTC U Ultra and HTC U Play, which involves adding your favourite contacts and apps in order to 'train' the AI, after which it's supposed to automatically manage this for you…the AI will be able to alert you if your phone needs a charge, depending on your schedule for the day."

"HTC AI will be able to understand your consumption patterns as well. For instance, rather than simply recommending restaurants around you, it will learn how you order food – based on restaurant ratings and proximity – and over time, when it's gathered enough data, it will offer prompts to the places you are most likely to order from. The same goes for the weather. Instead of alerting you with weather alerts every day or hour, Sense Companion will only alert you when the weather is unusual."

As this information is based on a third party's understanding, details may be incomplete or otherwise misinterpreted. We certainly hope this is the case. The only mention of actual learning within this entire write-up is in reference to HTC Sense Companion's ability to recognize your "consumption patterns." This presumably means your choice of restaurants, stores, or other places where goods can be purchased. The scope of this may be smaller, though, as it seems that third-party application support is a necessity for effective implementation. This aside, the only other instance where learning could be utilized – prioritizing notifications – has effectively removed AI from the equation by requiring the user to manually input their favorites. There's absolutely nothing wrong with this approach – save for labeling it AI. How HTC justifies labeling alerts for unusual weather as artificial intelligence is beyond comprehension, though we do hope future revelations will add clarity and justify this selling point.

LG/Samsung

LG has recently indicated that it would like to leverage AI in its next phone, but with rumors of Google Assistant integration on its upcoming phones and Alexa support already on its other electronics, it is unclear to what extent we will see AI in LG devices. Samsung, on the other hand, seems to be readying its own "AI" assistant – Bixby. Built on technology developed by Viv, an AI company founded by the creators of Siri and recently acquired by Samsung, Bixby has some serious potential. Viv has shown itself capable of answering queries as complex as "Will it be warmer than 70 degrees near the Golden Gate Bridge after 5pm the day after tomorrow?" and much more. This proficiency with sophisticated queries, coupled with its creators' commitment to third-party application integration, certainly gives Bixby the potential to join the upper echelon of smartphone AI. Nevertheless, Bixby has yet to be officially announced, though early reports indicate the ability to interact with native apps, conduct mobile payments, and of course search the web. The smartest feature offered by Bixby so far appears to be a Google Goggles-like function that allows the camera to be used as an input to search the web. More details will surely emerge, but until the Galaxy S8 launches, speculation will continue to be just that – speculation.

Honorable Mention: Facebook

Just because Facebook doesn't have mobile hardware doesn't mean Mark Zuckerberg and company are out of the game. Facebook has created Neural Networks of its own, not just for facial recognition in photos, but also in a platform called Caffe2Go. This platform can capture, analyze, and process pixels in real time on a mobile device, effectively transferring stylistic features from a given painting to every single frame in a live video. With Oculus under its wing, the innovation is unlikely to stop there. Improvements in VR experiences and the creation of a computer with "common sense" are just a couple of points mentioned in Facebook's recent manifesto. If the world's fifth richest man has something to say about AI, you will certainly hear it. Expect some significant impacts on AI from the Facebook camp in the coming years as well.

Facebook's Caffe2Go AI Algorithm. (Credits: CIO Today)

Tasker

Given the high prevalence of automation in these so-called "AI" features, it would be remiss not to mention Tasker. Tasker is essentially IFTTT for local applications and functions on your phone, but with considerably more customizability – and thus potential – especially given its extensive repertoire of plugins. Priced at $2.99 on the Play Store, Tasker does not require root access (although some actions do necessitate it) and enables you to automate a myriad of situations. From setting your phone to read texts aloud when you're in the car to creating a mobile hotspot monitor, Tasker has a seemingly endless number of automation options. A compiled list of some of our favorite Tasker functions, replete with walk-throughs and instructions, can be found here. From what we've read above, Tasker could certainly be leveraged to deliver similar results intelligently – in fact, solutions could be even more personalized and therefore more effective.
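Tasker itself is configured graphically, but its profile-and-task model boils down to condition-then-action rules. The following is a hedged sketch of that pattern in Python – it is not Tasker code, and the conditions and actions are invented for illustration:

```python
# Illustrative condition -> action rule engine, in the spirit of a
# Tasker profile. Device state is faked as a plain dictionary.

rules = []

def profile(condition):
    """Register an action to fire whenever its condition holds."""
    def register(action):
        rules.append((condition, action))
        return action
    return register

def run_rules(state):
    """Run every rule whose condition matches the current device state."""
    return [action(state) for condition, action in rules if condition(state)]

@profile(lambda s: s.get("connected_bluetooth") == "car")
def read_texts_aloud(state):
    return "reading texts aloud"

@profile(lambda s: s.get("battery", 100) < 20)
def dim_screen(state):
    return "dimming screen"

print(run_rules({"connected_bluetooth": "car", "battery": 15}))
```

Swap the fake state dictionary for real device signals and the actions for system calls, and you have the skeleton of the "intuitive" automation many OEMs are currently marketing as AI.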


The Future of AI (On Phones and Beyond)

With all the advances in Deep Learning, mobile hardware has thankfully stepped up to the task. For a few years now, SoCs have been evolving behind the scenes in conjunction with Deep Learning – increasing their capabilities while decreasing their size and power consumption. For the most part, these chips were dedicated to creating Machine Learning mobile devices in healthcare and other sciences. Only very recently has the refinement of these chips become apparent and, soon, readily available to consumers in the form of Qualcomm's Snapdragon 835 SoC. The average Galaxy S8 buyer has little interest in using Machine Learning on an SoC (or MLSoC) to detect arrhythmias or myocardial infarctions with 95% accuracy – they would rather take a picture of El Chupacabra only to find out Bixby is 99% sure it's a cat – but both are indeed possible thanks to MLSoCs (albeit on different systems, for now at least).

Qualcomm has applications everywhere, even outside of mobile. A particularly cool example they briefed us on involved implementing object and context recognition in baby monitors/cameras. Updates or alerts can then be sent to the parents regarding their baby's status. This can be very useful as Deep Learning enables the recognition of various activities or situations. Powered by chips such as the Snapdragon 835, mobile devices that aspire to be truly adaptive and intelligent will now have the proper hardware to do so.

Speaking of hardware, one also needs the proper software to utilize these capabilities. Enter TensorFlow. From the minds of the Google Brain Team, TensorFlow is an open-source machine learning library for building and training Neural Networks, and it is free to download. With it, anyone can put together a Neural Network and feed in their own data to "train" it. Some data libraries also exist within the framework, providing users with tools and pre-made data sets to work with, though they can also create their own. Some level of knowledge in Python or C++ is needed, but the official website has plenty of resources, even for beginners. Perhaps its best feature is the use of a single API enabling compatibility across desktops, mobile devices, and even servers.
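What libraries like TensorFlow automate at scale – computing gradients and updating weights over a training loop – can be shown in miniature with no dependencies at all. This sketch trains a single neuron by gradient descent to learn an invented rule (output 1 when two inputs sum past 1); every number in it is illustrative:

```python
import math
import random

# Plain-Python miniature of a training loop: logistic regression on a
# made-up rule. Frameworks like TensorFlow do this automatically, for
# millions of weights, on specialized hardware.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(42)
points = [(random.random(), random.random()) for _ in range(300)]
data = [((x1, x2), 1.0 if x1 + x2 > 1.0 else 0.0) for x1, x2 in points]

w1 = w2 = b = 0.0
rate = 0.5
for _ in range(200):                        # epochs over the data set
    for (x1, x2), target in data:
        pred = sigmoid(w1 * x1 + w2 * x2 + b)
        err = pred - target                 # gradient of log-loss w.r.t. logit
        w1 -= rate * err * x1
        w2 -= rate * err * x2
        b -= rate * err

accuracy = sum((sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == (t == 1.0)
               for (x1, x2), t in data) / len(data)
print(accuracy)                             # most samples classified correctly
```

Scale the neuron count into the millions and hand the arithmetic to a GPU or DSP, and you have the core workload that chips like the Snapdragon 835 are now built to accelerate.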

Image Credit: Qualcomm

SoCs like the Snapdragon 835 have all the proper parts to run an effective Neural Network, such as those made through TensorFlow. In fact, Qualcomm has been working with Google to ensure its newest chip uses its components to their fullest potential when doing so. By utilizing the CPU and the DSP instead of just the CPU or GPU, the 835 has shown great potential and solid performance in Machine Learning – all before ever touching the inside of a commercially available phone.

The Wave Has Just Begun

Much to the chagrin of AI purists and those who value truth in advertising, unsubstantiated claims of Artificial Intelligence in certain smartphones are likely to continue and even grow. Few of these devices can rightfully say they learn and adapt in any way, and most tout features that amount to little more than discreetly coded automation. Try as these companies might to obfuscate the true power of this technology, a real AI uprising is upon us. Breakthroughs in Machine Learning, coupled with rapidly advancing mobile technologies, have brought us to the point where legitimate Neural Networks can begin to run directly on mobile devices, without the cloud. The implications of this are large and far-ranging, impacting everything from modern medicine to how you find pictures you've taken, and everything in between. Manufacturers claiming to harness AI simply want to be aligned with this sweeping movement – and given the potential, who could blame them?


Who/what do you have your eye on in the AI wars? Let us know in the comments below!



from xda-developers http://ift.tt/2jFKqIy
via IFTTT

Rumor Says Sony Could Launch as Many as 5 New Devices at MWC 2017

We're just a few weeks away from Mobile World Congress 2017 and we already know that a number of smartphone OEMs will be showing off new devices at the event. LG has started to send out press invites to various online publications and it is speculated that they will be announcing the LG G6 on February 26th.

We're also seeing Motorola sending out press invites for the event as well. We haven't heard too much about what Motorola will announce at MWC 2017, but the Moto G5 has been showing up in a number of leaks lately.

We'll likely see other devices shown off in Barcelona this year, as we're now hearing a rumor about Sony's plans. If the rumor turns out to be true, Sony plans to expand its new Xperia X series of devices, and codenames for 5 different devices are being tossed around right now: Yoshino, BlancBright, Keyaki, Hinoki, and Mineo. The Yoshino device is said to be the successor to the Sony Xperia XZ smartphone.

The rumor claims this device, tentatively called the Xperia XZ2, will sport a 5.5″ 4K (2160p) display, the Qualcomm Snapdragon 835 SoC, 6GB of RAM, and a new Sony IMX400 camera sensor. Now, we have heard that MWC 2017 flagships will not be equipped with the Snapdragon 835 SoC. It's believed that Samsung's order for the Snapdragon 835 that will be used in the Galaxy S8 is eating up so much of Qualcomm's inventory that they won't be able to fulfill orders for anyone else this early.

This is why the LG G6 is rumored to be using the Snapdragon 821 SoC, so if Sony wants to use the 835 in a device they'll show off at MWC 2017, we don't expect it to be sold immediately after the launch event. As with all rumors, we should take this information with a grain of salt until we hear something official from Sony themselves.

Story Via: Android Community Source: Sumaho



from xda-developers http://ift.tt/2kBikyg
via IFTTT

Version 0.55 of FlashFire Brings Bug Fixes, Improvements and New Features for Pixel Phones

Fans of Chainfire's FlashFire application can look forward to a new update via the Play Store, or can simply download the APK directly from the official website. This update brings the version number of FlashFire up to 0.55 (with a quick 0.55.1 hotfix that resolves an NPE crash when the GUI is reloaded) and comes with a number of changes. It brings bug fixes and improvements that everyone can enjoy, but also adds some new features that are mostly for those who own the Pixel or the Pixel XL.

Chainfire tells us there was a pre-release version of this update back on January 3rd, and that the changes from that version are included in this changelog as well. So, for those with a Pixel phone, FlashFire has now added initial support for its new partition layout and the A/B slots that come with it. Slot management is mostly automatic with this update, but some actions within the application let you manually override the slot you're performing a certain action on.

This big update also adds initial support for the file-based encryption that is included in the new Pixel phones by Google (and Android 7.0 in general). Because of how this encryption method works though, FlashFire will only have access to the data of the primary user. Chainfire also points out that if you want a backup to be restored in an encrypted form, it has to be both created and restored with the device in an encrypted state using FlashFire.

Basic support has also been added for devices currently using Magisk (with both SuperSU and topjohnwu's mod of phh's superuser). But Chainfire has only done preliminary testing with this and says that our mileage may vary depending on how things are set up on our devices. The full changelog for this update can be found below, and we encourage you to join in on the conversation in the XDA forum thread for FlashFire.

– Improved 32/64 bit handling (fixes some blackscreens)
– Improved handling of devices that have a /vendor partition
– Add initial support for devices with multiple slots
– Add support for uncrypted OTA ZIPs
– Add support for A/B OTA ZIPs
– Add support for file-based encryption backup/restore (primary user only)
– Add additional Pixel partitions
– Add support for Magisk+SuperSU (preliminary)
– Add support for Magisk+phh (topjohnwu version only) (preliminary)
– Add circular icon (Android 7.1)
– Restrict app usage to primary user
– Make treating system/vendor/oem as original a setting (auto-detection is not completely reliable)
– File selection activity now remembers last location
– Fix drawer closing on back button press on tablets like Pixel C
– Fix overlay display visibility on S7@Nougat
– Detect and handle screen scaling on S7@Nougat
– Fix archive scanner freeze when reading password protected ZIPs inside another archive
– Fix seeking issue with custom recovery detector
– Fix archive scanner inconsistency with multiple files targeting the same partition
– Fix archive scanner scanning inside images
– Fix archive creator display inconsistency
– Fix unconditional block update ZIP detection
– Fix busybox/untar not setting SELinux file context on files that already existed
– Hide cache wiping options if no cache partition present
– Restart and re-check for root if root not found initially
– Refactor boot image analysis
– Preserve recovery: option hidden from devices without a dedicated recovery partition
– Replace update_engine service on A/B update devices
– Add intent to flash a specific ZIP file
– Workaround adb restore 'never-finish' issue by using adb push (temporary?)
– Embedded SuperSU updated to v2.79 SR3
– Adjust timebomb for non-Pro users to May 01, 2017

Source: +Chainfire



from xda-developers http://ift.tt/2kB3EPA
via IFTTT

Advanced Kernel Tweaks for the OnePlus 3

Are you looking to get the most in terms of battery and performance on your OnePlus 3? Check out XDA Senior Member Asiier's thread on advanced kernel tweaks! What's more, you can use these tweaks across any phone with the interactive governor. Head on over!



from xda-developers http://ift.tt/2jxmwkP
via IFTTT

New Report Says LG G6 Will Ditch Removable Battery In Favor Of Waterproof Design, May Feature Google Assistant

We have been getting reports regarding LG's upcoming flagship device, the LG G6, for a while now. Earlier reports have shown that the device will sport a completely new design – leaving behind the G5's modularity – with a 5.7-inch QHD display, and will settle for the relatively old Snapdragon 821 SoC.

Now, a new report coming out of CNET adds to previous reports. According to CNET's sources, the LG G6 won't be using the latest Qualcomm SoC, the Snapdragon 835; instead, it will feature the Snapdragon 821 chip. The reason behind the move is that LG wants to launch its flagship ahead of the Samsung Galaxy S8, which won't come to market until March. If LG wanted to go with the Snapdragon 835 SoC, it would have to wait until later in the year – which means launching the G6 after the Galaxy S8 launch, a source familiar with the matter told CNET.

The source also corroborates a previous report that LG is opting for a water-resistant design, saying that the device will ditch the removable battery to make the G6's body waterproof. The LG V34 (a smaller V20) launched in Japan last year was IP67 certified, so it's not surprising that the company wants to implement the same feature in its main flagship device as well.

Earlier, it was reported that the LG G6 may come with either Google Assistant or Amazon Alexa as its virtual assistant. Now, as per the CNET source, the G6 will likely feature Google's AI assistant, making it the first non-Google phone to boast this functionality when it launches. The LG V20 was the first Android device to ship with Android 7.0 (Nougat) out of the box, so partnering with Google to take advantage of new software is nothing new for LG.

With the LG G6 using last year's SoC, LG's main selling points for the G6 will surely be the better display, improved camera performance, and perhaps Google Assistant. Whether the move to launch the G6 a month ahead of the Galaxy S8 will help LG's sales remains to be seen. The company is expected to officially launch the LG G6 on February 26th at MWC.

Source: CNET



from xda-developers http://ift.tt/2kBfUnf
via IFTTT

Google Opens Up the Daydream VR Platform to All Developers

Google launched a lab division for employees to work on its Daydream VR platform. The goal was to come up with some experiments and see what worked and what did not work in a VR environment. They've come up with a number of conclusions about the social aspect of VR, ways to prevent trolling, and even some ways that people can interact within a VR setting. So far, this has only benefited Google's select partners, because they were the only ones who could submit Daydream VR applications and games.

This changed this week, though, as Google finally started allowing any developer to submit an APK meant for the Daydream platform. Before you submit your application or game, Google wants you to read over the Daydream App Quality guidelines on usability and quality standards. Just like with the certification process for Daydream-ready devices, Google wants to make sure the Play Store is offering a great user experience when it comes to the Daydream section of the Play Store.

Before you can publish the application or game to the Play Store though, you will need to head over to the Pricing and Distribution section of the Developer Console and opt-in to Daydream. Opting-in tells Google that you want your application to be found through places like Google Play VR and Daydream Home. Google will then double check to make sure your application or game meets their app quality guidelines before they decide to publish it to the Play Store.

As mentioned, until now we have only seen Daydream applications and games published in the Play Store by developers who had close ties with Google. Now that Google is opening up the floodgates, we should start to see a bunch of new content for the Daydream VR platform. Developers should be sure to check the source link below to make sure they meet all the requirements to publish their applications and games to the Play Store.

Source: Google



from xda-developers http://ift.tt/2jc7odX
via IFTTT

NVIDIA Finally Starts Rolling Out Android Nougat Update To SHIELD TV 2015

NVIDIA launched the SHIELD TV (2017) a while back with Android 7.0 Nougat running out of the box. Now, it appears the company is also bringing Android 7.0 to the SHIELD TV (2015) via a new software update. Earlier, we reported that NVIDIA was working on the Android 7.0 Nougat update for the SHIELD TV 2015, and now it looks like the update is finally rolling out.

NVIDIA has announced on its site that the older SHIELD TV is now getting a new update which bumps the SHIELD Experience software from 3.3 to 5.0. The update comes with Android 7.0 Nougat as well as tons of new features and enhancements.

As a quick rundown, the update brings in Amazon Video support, which will allow users to stream their favorite movies and TV shows in 4K HDR directly on SHIELD TV. The other big change in the update is the new NVIDIA Games app, which replaces the existing SHIELD Hub app.

GeForce Now has also been upgraded with improved graphics performance, now letting users stream on-demand PC games from the cloud to the SHIELD TV box with performance comparable to a computer running a GTX 1080. GameStream, meanwhile, sees performance improvements in the form of 4K HDR support and low-latency streaming. GameStream allows users to stream their favorite PC games to SHIELD TV and also gives access to Steam Big Picture from the Steam app. The update also adds support for Nest Cams: with the Nest app for SHIELD, users will be able to watch live video streams from their Nest Cams on SHIELD TV.

Apart from the above-mentioned changes, the update also packs in Android 7.0 features such as double-pressing the home button to access the recents menu, support for picture-in-picture mode (in supported apps), and a revamped settings menu with navigation optimizations.

The update has already begun rolling out. If you're sporting a SHIELD TV 2015, you should be receiving the update in the coming days.

Source: NVIDIA



from xda-developers http://ift.tt/2jZ3TEU
via IFTTT