Is the Future of Computing More Personalized Than We Expect? A Possibility for Our Timeline.

Dear readers,

Over a year ago, I wrote an article titled “The Next Big Thing: Has Technology Hit Its Ceiling?” in which I posited that computing would become less personal. Through cloud services such as Dropbox and data syncing, you would be able to log in on a friend’s device and get the same experience as if you were sitting in front of your own machine. “No hassles, just credentials!” If I ever open my own tech company, perhaps I could make that its motto.

In that piece I argued that such a transition would lead to commodity devices that no longer individualize themselves to one person but instead transform, at will, to anyone’s taste. That strikes me as a bleak future, one where instead of owning the things in front of you, only your cloud property would matter. A drop in device prices could also lead to millions of public terminals that rely solely on these cloud accounts. Smartphones and tablets drive this future, acting as a catalyst for cheaper prices and millions of devices in both public and private hands. This is what I call the “ceiling” of technology, since the infrastructure it is built upon has already matured to a large extent. Thanks to the increasing use of the internet, many governments are even discussing making web access “a right,” delivered much like electricity. Mobile networks will carry this dream forward: as more radio spectrum is auctioned off and converted to digital use, these networks will gain the capacity to serve both rural areas and large populations.

Recent competition among the wireless carriers in the United States (sparked in part by T-Mobile’s “uncarrier” plans) is driving down prices, or at the very least increasing the amount of data families can share on their accounts. The more members join the smartphone party, the more data people require. The additional bandwidth consumed by streaming video has not only fueled the net-neutrality debate but also forced carriers to accept that people can no longer live comfortably on 1 GB of data per month. Yet it is this appetite for data-hungry devices and cloud storage that led me to rethink my position and arrive at an idea quite opposite to what I originally envisioned. Recent trends in smartwatches and the new features introduced by flagship smartphones (the HTC One (M8) and the Galaxy S5) are creating a focus on unique personalization. This is also expanding the field of prediction and “AI,” since these newer phones and wearables can sense motion and maintain a unique awareness of what goes on in your environment.

This is slightly scary, and I firmly hold the view that the amount of sharing and collection companies are permitted needs to be limited to minimize data breaches. How often can your phone send your location to a traffic service just to predict your commute home? When should it stop collecting your blood-pressure data? Should it pop up advice on what to eat nearby based on both of those factors? Can Google use these data points to give companies insight into the best times to serve certain menu items, derived from collective information about people’s lives? These are the questions that will drive the super-personalized future of 2014 and beyond. While the cloud will still hold most components, some data, such as biometric information, will be tied to the device you are using. This is how Apple’s iPhone 5S handles fingerprints, although for now it plays only a minor part in the software. Imagine self-aware phones that not only sense motion but also hold general snapshots of your daily routine, including pictures and geodata, used to identify you as an individual based on the habits you alone perform.

Some will call such an idea far-fetched. Consider today’s HTC One (M8) announcement, where several new “smart” features were unveiled. One of these is the ability for your phone to recognize motions and gestures and bring you to the app or service you want the moment you take out your phone. Even in standby, it tracks every small motion you perform and can, over time, learn what you do to invoke certain actions. (There is also a connection here to effortless fitness and health tracking.) While there was no mention of whether this data is stored locally or in the cloud, it is one more step toward this super-personalized future.

You might argue that devices could still remain neutral, since many of these features will be tied to log-ins or other credentials. This, however, is precisely where we should draw the line. There is a level of authentication that can be accomplished only through device-specific encryption: a signature that can be completed only by one unique person using that particular device. By off-loading this data to the cloud for the convenience of signing in from anywhere, we risk losing any physical anchor that can truly authenticate us to the cloud.

When I wrote my article a year ago, the privacy debate sparked by the Snowden leaks had not yet woven its way into society’s fabric. I would argue its impact is not as profound as the media portrays, yet people are more conscious of how their cloud data is handled. Even if the general public were to ignore it entirely, businesses are demanding security against both government access and attackers who might exploit known weaknesses. Nevertheless, the shift is a quiet one. It is creating a world in which we want our devices to be personal and unique to who we are, not generic placeholders for society’s technology. There are still inroads to be made here, and I hope for a future in which both our privacy and the information needed for prediction are respected. Done right, it could weave a future where we do not depend on technology outright, but use it as an enhancement rather than a detriment.
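The idea of a signature that only one device can complete can be sketched as a challenge-response protocol with a device-held secret. The sketch below is a hypothetical illustration, not any vendor’s actual scheme; real designs (such as Apple’s) keep the secret in dedicated hardware and use public-key signatures, whereas here a shared HMAC key stands in for the device-bound secret:

```python
import hashlib
import hmac
import os

# Hypothetical sketch: the "device key" stands in for a secret stored in
# tamper-resistant hardware. It never leaves the device, so only that
# physical device can answer the service's challenge.

def provision_device() -> bytes:
    """Generate a secret key at enrollment time; stored only on-device."""
    return os.urandom(32)

def sign_challenge(device_key: bytes, challenge: bytes) -> bytes:
    """Device proves possession of its key without revealing it."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify(enrolled_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Service (which enrolled the key once) checks the device's answer."""
    expected = hmac.new(enrolled_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Enrollment: the key is created on the device and registered with the service.
device_key = provision_device()

# Later log-in: the service sends a fresh random challenge...
challenge = os.urandom(16)
# ...and only the real device can produce the matching response.
response = sign_challenge(device_key, challenge)

print(verify(device_key, challenge, response))      # the genuine device passes
print(verify(os.urandom(32), challenge, response))  # any other device fails
```

A fresh random challenge on every log-in means a recorded response is useless later; this is the property a cloud-only credential, which anyone can replay from any terminal, cannot provide.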
