We are in an exciting age of design: welcome to a new era in which our bodies, cars, bedrooms, heaters, streets and just about everything else are beginning to become interfaces.
This article presents a number of exciting technologies and the interfaces we use to interact with them, ranging from touch to VR, and takes a historical perspective on how our interactions with man-made objects have evolved to bring us to where we are today.
For simplicity’s sake, I like to group human interaction
with the environment and technology into 4 ages:
● The age of tools
● The age of the machine
● The age of software
● The age of the self
The Age of Tools
We used primitive
objects and symbols to communicate.
Humans began communicating with symbolic representations carved into whatever surfaces were at hand. Hieroglyphics were one of the earliest forms of communication, and they were highly symbolic. This symbolism would later develop into art, writing, documentation and storytelling. We can even argue that we have come full circle and are using the symbols on our keyboards to convey subtleties beyond words, even if they are silly.
The tools we used to communicate became more and more sophisticated, resulting in instruments that are still widely used today, such as the pen.
The Age of Machines
When hardware was the
interface.
The industrial revolution placed emphasis on productivity. Welcome to the age of the machine, where we built objects at scale to make our lives simpler.
One example of this is the invention of the typewriter in 1868 by Christopher Latham Sholes. We began tapping physical keys to make words, still using our hands, but with the typewriter as a replacement for the pen. It helped create a consistent and effective format that could be easily adopted, and it saved us time.
The drawback, however, was that we needed to learn how to type. We were mass-producing machines, and the power shifted to them. Despite designing the hardware as the interface, we still had to learn how to use the machines. The same was true of many machines created at the time.
The Age of Software
Skills learned from using hardware become metaphors that teach us how to use software.
When software needed an interface, UI designers looked to existing hardware and behaviour to make it easy for us to learn how to use it. For example, the on-screen keyboard was modelled on the typewriter so that we would immediately know what to do. We had already learned to type, so the natural progression was to begin interacting with screens.
We see the same transition in our smartphone keypads, which look like miniature versions of those very same keyboards and typewriters. Adorable and useful. As we began to touch, we began to define a completely new way of interacting with our environment.
The evolution of UI design is influenced by hardware and intuitiveness. Good UI design sticks to familiarity and irons out the learning curve.
Skeuomorphism is another example of making the two-dimensional screen look like the three-dimensional world to help users understand how they should interact with the interface. Designers created interfaces that were already familiar by depicting things like the controls of a radio or mixer in audio interfaces. Apple famously led this trend under the direction of Steve Jobs. It wasn’t until Jonathan Ive became more influential at Apple that skeuomorphic design slowly evolved into flat design, punctuated by the release of iOS 7 in 2013. We were ready to make the leap to less literal cues and could now appreciate the simplicity of a reduced interface. The current iOS Human Interface Guidelines actively encourage the shift: “Bezels, gradients, and drop shadows sometimes lead to heavier UI elements,” so designers should “focus on the content to let the UI play a supporting role.”
Material design also shifts towards a different representation of the third dimension by giving the entire canvas depth, as opposed to the individual UI elements, as in skeuomorphism. According to the Material Design guidelines, the “surfaces and edges of the material provide visual cues that are grounded in reality. The use of familiar tactile attributes helps users quickly understand affordances. The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other.”
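For the technically curious, this sense of depth surfaces in Android as view “elevation”, from which the framework derives shadows automatically. The short Kotlin sketch below is purely illustrative; the function name and the 8dp value are mine, not taken from the guidelines:

```kotlin
import android.view.View

// Minimal sketch: in Android's take on Material Design, depth is expressed as z-axis
// "elevation" (available since API 21), and the framework renders the shadow for us.
fun raiseCard(card: View) {
    val density = card.resources.displayMetrics.density
    card.elevation = 8f * density // lift the card roughly 8dp above the canvas
}
```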
Touch Is Human Centric
On why touch worked.
With the rise of the smartphone, we taught ourselves all kinds of funny gestures for the novelty and, of course, because it was cool to use our devices and to discover the secret stuff hidden on them. We learned the difference between a pinch, a tap and a long tap, and invented more gestures than we can keep up with.
We started expanding and contracting our fingers as a way of zooming in and out. This behaviour became so natural that I have witnessed grown men try to zoom in on physical maps.
Touch works because it is intuitive. You see babies
working tablet devices faster than their grandparents these days, simply
because we are born to explore things with our fingers. It’s innate and reminds
us of where we started at the beginning of communication.
Touch Came at a Price
And the user
experience often suffered.
We have become like children in a candy shop, wanting to
touch everything in sight, and along the way, we made up some pretty obscure
gestures that made it nearly impossible to find stuff.
Touch interfaces came at a
price. UI designers were forced to hide a lot of important stuff behind the
sleek app façade.
We hid a lot of the main user interface features. A major part of the problem was competition between Android and iOS, where iOS initially led the way and significantly slimmed down its Human Interface Guidelines. The simplicity looked beautiful, but we were just hiding the ugly or complicated stuff for later, often making interfaces more difficult to use. Android emulated many of the worst things Apple implemented, and it wasn’t until Material Design was introduced that there was any consistency in Android design at all. The myriad device sizes and display aspect ratios didn’t help, either.
We also forgot about
consistency.
A swipe on iOS can read an email, delete an email, archive an email, or playfully connect me with my next Tinder match, depending on the app and the context. As designers, we cling to extensive onboarding sequences simply to show users what to do.
Touch Only Works on Sufficiently Big Screens
Now we have new wearable devices with screens so small that touch becomes difficult. Designers of these devices are re-introducing hardware-centric features for humans to struggle with.
Even if your fingers are finer and more dextrous than
mine, I still smile at the thought of poking around 1.5-inch displays on our
wrists.
You cannot navigate something as complex as the internet with a hardware-centric feature such as Apple’s Digital Crown. It is a real-world spin-off from familiar watch-adjusting behaviour, and it is time-consuming and fiddly. However, there are smart devices that do go in the right direction. The Internet of Things (IoT) expands our everyday devices into interactive canvases and offers great examples of real-world hardware mimicry done right, putting functionality at the forefront. Unlike the Digital Crown, larger dials are functional and sensible. The Nest Learning Thermostat not only takes visual cues from traditional thermostat design, but also borrows the way we usually interact with dials, making a product that fits seamlessly into the home and is simple to use (and, of course, it benefits the environment and the wallet).
Force Touch is famously deployed on the Apple Watch and the iPhone 6s, and is rapidly expanding to all kinds of devices. Apple’s Hollywood-style keynotes tend to overshadow the fact that BlackBerry had already begun experimenting with a similar concept back in 2008, and that pressure sensing has been part of Android for years; it was introduced in Android 1.0 (API Level 1), in the form of the getPressure() API.
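For the curious, here is a minimal Kotlin sketch of how an Android app can read that pressure value today; the helper function, callback and 0.8 threshold are mine for illustration, not any vendor’s actual Force Touch implementation:

```kotlin
import android.annotation.SuppressLint
import android.view.MotionEvent
import android.view.View

// Minimal sketch: MotionEvent.getPressure() (around since API Level 1) reports an
// approximate touch pressure, typically between 0 and 1 depending on the hardware.
@SuppressLint("ClickableViewAccessibility")
fun attachPressureListener(view: View, onHardPress: () -> Unit) {
    view.setOnTouchListener { _, event ->
        if (event.actionMasked == MotionEvent.ACTION_DOWN ||
            event.actionMasked == MotionEvent.ACTION_MOVE
        ) {
            if (event.pressure > 0.8f) {
                onHardPress() // treat an unusually firm touch as a "force" press
            }
        }
        true // consume the event so we keep receiving the rest of the gesture
    }
}
```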
Despite Force Touch’s surprisingly long history, Samsung is reportedly equipping its upcoming Galaxy phones with its own variation of Force Touch, supplied by the human interface company Synaptics under the name ClearForce Technology.
The Age of the Self
The old metaphor comes
full circle — the next iteration.
Now that the time has come, how do we design experiences
and products in a world where any environment is interactive?
The next iteration illustrates our coming full circle: the Apple Pencil is a piece of both hardware and software technology that is helping us write again. We are back where we started, with a simple tool and a surface. While the Apple Pencil has now made this idea mainstream, it was Jobs who famously asked “Who wants a Stylus?!” when introducing the iPhone in 2007, before bragging about multi-touch. While Microsoft was unfortunate not to execute on its vision of the future, Jobs had a point about usability: such a narrow pen on a small tablet ended up not working for Windows. The tools were not right for the time, forcing us to feel less human, and only recently do we see the circle completed.
It just so happens that these simple devices grew larger,
becoming not-so-simple in the process. That is why the Apple
Pencil is debuting on the advanced and
oversized iPad Pro instead of the smaller 9.7- and 7.85-inch models.
Specifications aside, what is exciting here is that we are now getting to a point where technology is so advanced that we can “unlearn” how to use it.
The Apple Pencil is human centric because it uses two
things we are already familiar with: a pencil and an iPad. We don’t need to
learn anything new in order to use it (unless we need a reminder of how to
write with a pencil again).
How can we design
products that facilitate innate behaviours, rather than design products that
force us to learn new skills? How can we become more human centric in our
design philosophy?
Moving Beyond Touch
Not only did small screens prompt designers and
technologists to explore others ways of interacting with technology, new
use-cases and contexts inspired us to think of different ways we could use
technology.
Voice commands, for example, work great while driving or cooking, but may earn you a couple of stares if you ask Siri where the nearest erotic massage parlour is during the train commute home.
Voice is one way we can interact with the technology around us. It can be passive or interactive. The great benefit of voice is that we don’t need our hands. However, there are limitations, such as context, that mean voice will not always be the most intuitive option. Furthermore, until very recently, voice recognition was not good enough to be relied upon. Now, it can be eerily good.
Virtual reality (VR) was thrust into the mainstream with a lot of hype, fuelled by Facebook’s purchase of Oculus in 2014. Shortly after, Google presented Google Cardboard at I/O 2014, a low-cost VR solution that is a little lighter on the wallet than Oculus’ $2 billion price tag, and more low-cost alternatives are coming.
Force Touch, VR, motion
tracking, and more. UI designers will have to master a lot of new skills over
the next couple of years.
Virtual reality places the user in a computer-simulated three-dimensional world, allowing us to feel immersed in the experience and move way beyond our fingers, hands and voice. Despite allowing us to use our entire body, virtual reality is constrained by the elaborate headgear. Some not-safe-for-work use cases also come to mind. Influential tech figures such as Kevin Rose have boldly announced that “Virtual Reality will turn out to be a dud,” elaborating that “consumers will always take the path of least resistance.” A similar argument can be made in terms of usability.
I must agree that the novelty factor is great, but anything so interactive needs to feel intuitive. Wearing a huge mask, sometimes tethered to your desktop computer, may well not be that intuitive. We are already one step closer to removing the computer tether on some platforms, thanks to Gameface Labs, yet we are still hiding behind the VR mask. So, have you placed your Oculus pre-order?
Like Touching, but Without Touching
Project Soli is a tiny radar that can turn basically any piece of hardware into an interactive device controlled by delicate gestures. It comes from Google’s Advanced Technology and Projects (ATAP) lab and helps make the world our interface.
Now that Project Soli is open to a select group of developers, the future of interaction design is limited only by our range of gestures. Project Tango brings spatial awareness to the devices we already use, helping them navigate the physical world. It combines motion tracking, depth perception and area learning to process information spatially. Because Project Tango is completely open source, the opportunity to innovate is very real. Some unique consumer products have already been built with it, including Left Field Labs’ Soundfield and Space Sketchr.
Lenovo will be releasing a Tango-enabled device, marking the beginning of using our smartphones to map the world around us in three-dimensional space. With so much new technology and so many new use cases, our job as designers is to make the experiences feel truly human. My ask is that we leverage existing human behaviours and use technology as a facilitator.
References
This article was
originally published on Toptal