
Tuesday, 1 March 2022

Top technological trends to reshape UX

We are moving at a fast pace in today's technology-driven world. Our lifestyles and behaviours are changing, and that change brings more opportunities to create new products, services, solutions, and gadgets. With the evolution of the Internet, companies have explored a much larger business space, and that space will keep expanding as some amazing, near-future technologies mature. Much of the innovation so far has happened on the technology side; now the imagination is shifting to the design side. In this article we look at some exciting areas where User Experience designers will work in the near future.



Internet of Things

The internet of things (IoT) is the network of physical objects—devices, vehicles, buildings and other items—embedded with electronics, software, sensors, and network connectivity that enables these objects to collect and exchange data.


Connected homes, cars, workstations, offices, and cities are going to transform our current environment and experience.




Virtual Reality


Virtual reality (VR), also known as immersive multimedia or computer-simulated reality, is a computer technology that replicates an environment, real or imagined, and simulates a user's physical presence in that environment to allow for interaction. Virtual reality artificially creates sensory experiences, which can include sight, touch, hearing, and smell. The global AR and VR market is expected to grow to $209.2 billion by 2022. (Ref)
Many companies have already invested in this area and started creating VR products. Google Cardboard and Microsoft HoloLens are two well-known examples.

Virtual reality (VR) is the experience where users feel immersed in a simulated world, via hardware—e.g., headsets—and software. Designers create VR experiences—e.g., virtual museums—transporting users to 3D environments where they freely move and interact to perform predetermined tasks and attain goals—e.g., learning.

- Interaction Design Foundation 





Friday, 9 July 2021

UX and Internet of Things

UX & IoT | Will UX connect the dots of IoT?  

Consumer-centric Internet of Things products emerged as a new layer of devices that connected formerly static, unconnected objects with computers, tablets, and smartphones. For instance, Nest (thermostat and smoke alarm), wearables (smartwatches, fitness bands, etc.), and August Smart Lock (a door-lock system) are all examples of consumer-centric IoT devices.
IoT signifies a major shift in the idea of the Internet, where it will power not just computing devices, but also billions of everyday devices, from heart monitoring implants to toaster ovens. Large businesses have already started to invest in IoT to gain competitive advantage by collecting and analyzing data from millions of wearable and embeddable devices to make meaningful business decisions.

Designing the Experience


Right now, companies are trying to make their connected products work, but few are focused on making them usable. There needs to be a designed experience for users to find value in connected products. Designing for IoT comes with a set of challenges that will be new to designers accustomed to purely digital services. How tricky these challenges prove will depend on:
1. The maturity of the technology you're working with
2. The context of use and the expectations your users have of the system
3. The complexity of your service (e.g. how many devices the user has to interact with)





More UX and less GUI

With small form factors, the majority of devices will have small or no visual interfaces. There will be fewer buttons to push and scroll bars to drag. Without large screens, great UX design becomes even more important, and the primary modes of interacting with these devices will be touch gestures and voice.

Interfaceless UX


Multi-device Experiences

A single user experience will be spread across multiple devices (personal and non-personal), different platforms, and different points in time. Some devices respond to touch, voice, or gestures, whereas others respond to sensor tracking or hand-offs between devices. A UX designer will have to break out of platform-specific standards and learn how to provide a continuous user experience across multiple devices and platforms.

Multimodal interfaces in UX

By 2020, some forecasts suggested there could be 200 billion connected devices, from smart dust to smart cities.




Speech as a unifying modality

Speech is the simplest and oldest form of human communication. With speech as the medium, the user interface becomes almost invisible and the experience feels as natural as everyday conversation. The goal is to communicate with devices the way we communicate with each other as humans. Dialogue evokes emotion, meaning, identity, and trust.

Mobile app controlling IoT devices



Designing beyond logos, colors, fonts

With significantly fewer, or literally no, interfaces, branding will take forms other than pixels. Voice, sound tone, and tactile feedback will become the ways in which users identify a brand. A UX designer must understand how to communicate brand experiences through these means.

Smart rooms, smart homes


Internet-like failures are irritating

For instance, you command your lights to switch on and they respond after two minutes, or don't respond at all. What will be your reaction? You will be as irritated as you are by a "Web page not available" error or a dropped Skype call. Because the IoT is highly asynchronous, owing to the wide range of connections between devices, these kinds of experiences are hard to avoid and very irritating.
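
To make that concrete, here is a minimal sketch in Python of one way a product could soften this asynchronicity: acknowledge the user's intent immediately, then fall back to an explicit "still trying" state instead of failing silently. The send_command and show_status helpers are hypothetical placeholders for whatever smart-home SDK and feedback surface the product actually uses.

```python
import time

COMMAND_TIMEOUT_S = 5  # assumed budget before we warn the user


def send_command(device_id: str, action: str) -> bool:
    """Hypothetical stand-in for a real smart-home API call; returns True on ack."""
    return False  # placeholder: pretend the hub has not acknowledged yet


def show_status(message: str) -> None:
    """Hypothetical feedback surface: an app banner, an LED, or a spoken prompt."""
    print(message)


def switch_on_lights(device_id: str = "living-room-lights") -> None:
    show_status("Turning lights on...")          # acknowledge the intent immediately
    start = time.monotonic()
    while time.monotonic() - start < COMMAND_TIMEOUT_S:
        if send_command(device_id, "on"):
            show_status("Lights are on.")        # confirmed by the device itself
            return
        time.sleep(0.5)                          # brief retry interval
    # Don't fail silently: tell the user the system is still working on it.
    show_status("Still trying to reach your lights. Check the hub's connection.")
```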

IoT in medicine

Intelligence is the secret sauce

UX and intelligence should go hand in hand, in the right proportions, for the IoT to be successful and engaging for all users. Because IoT involves sophisticated systems and large amounts of data, considerable attention has to be paid to the intelligence factor for a product to be enjoyable.

Smart home, smart office

Will UX connect the dots of IoT?

As IoT involves little to no interface, UX will be the prime factor driving the entire product experience for customers. This involves not only UI or interface design but also the underlying connections between products, which must be knitted together carefully. Designers have a very large space, and a real opportunity, to contribute to this revolutionary technology, which will impact millions of lives around the globe.



Thursday, 1 April 2021

The Psychology of Wearables and Wearable Technology


In recent years we've seen new, disruptive innovations in the world of wearable technology: advances that will potentially transform life, business, and the global economy. Products like Google Glass, Apple Watch, and Oculus Rift promise not only to change the way we approach information, but also our long-established patterns of social interaction.
Indeed, we are witnessing the advent of an entirely new genre of interface mechanisms, one that brings with it a fundamental paradigm shift in how we view and interact with technology. Recognizing, understanding, and effectively leveraging today's growing landscape of wearables is likely to be increasingly essential to the success of a wide array of businesses.
In this article, we discuss the ways in which effective interface design will need to adapt, in some ways dramatically, to address the new psychology of wearable technology.

Enter the Neuroscientific Approach

Cognitive neuroscience is a branch of both psychology and neuroscience, overlapping with disciplines such as physiological psychology, cognitive psychology, and neuropsychology. Cognitive neuroscience relies upon theories in cognitive science coupled with evidence from neuropsychology and computational modeling.
In the context of interface design, a neuroscientific approach is one which takes into account – or more precisely, is centered upon – the way in which users process information.
Psychology of wearables

The way people interact with new, never-before-seen technologies is bound more to their cognitive processes than to your designer's ability to create a stunning UI. New, often unpredictable, patterns emerge any time a person is presented with a tool, a piece of software, or an action that they have never seen before.
Accordingly, rather than employing more traditional approaches (such as wireframing and so on), you will instead focus on the sole goal of your product: the end result you want the user to achieve. You will then work your way back from there, creating a journey for the user by evaluating how best to align the user's intuitive perception of your product with his or her interaction with the technology used. By creating mental images, you won't need to design every step the user has to take to accomplish an action, nor will you have to evaluate every possible feature you could or couldn't include in the product.



Consider, for example, Google Glass Mini Games. In these five simple games made by Google to inspire designers and developers, you can see exactly how mental images play a major role in user engagement with the product. In particular, the anticipation of a future action comes to the user with no learning curve needed. When the active elements of the game pop into view, the user already knows how to react to them and thus forms an active representation of the playing environment without needing to actually see one. Not only has the learning curve been reduced to a minimum, but the mental images put the user in charge of the action immediately, anticipating what the user will do and simply letting the user do it.
Bear in mind that it is possible to identify three different types of images that form in the brain at the time of a user interaction, all of which need to be adequately considered and addressed to achieve an effective, and sufficiently intuitive, interface. These include:
  1. Mental images that represent the present
  2. Mental images that represent the past
  3. Mental images related to a projected potential future

And don’t worry. You don’t need to run a full MRI on your users to test what is going on in their brain to arrive at these mental images. Rather, you can simply test the effectiveness and universality of the mental images you’ve built.


Users Do What Users Do

When approaching a new technology, it's vital to understand how users experience and relate to that technology. In particular, a reality check is often needed to recognize how users actually use the technology rather than how they're "supposed to" (or expected to) use it. Too many times we've seen great products fail because businesses were expecting users to interact with them in a way that in reality never occurred. You shouldn't jump on the latest, fanciest technology out there and build (or, worse, re-shape!) your product for that technology without knowing whether it will actually be helpful to, and adopted by, your users. This is an easy mistake to make, and it's quite eye-opening to see the frequency with which it occurs.

Leveraging Multiple Senses in Wearables

Wearables bring the great advantage of being far more connected to the user's physical body than any smartphone or mobile device could ever hope to be. You should understand this from the early stages of your product development and stop focusing only on hand interaction. Take the eyes, for example. Studies conducted with wearable devices in a hands-free environment have shown that the paths users follow, when their optical abilities are in charge, are different from the ones you would expect. People tend to organize and move in ways driven by their instinctive behaviour rather than their logical one. They move instinctively towards the easier, faster paths to accomplish an action, and those paths are never straight lines.
One application that effectively leverages multiple senses is the Evernote app for the Apple Watch. Actions in the Watch version of the application have the same goals as their desktop/mobile counterparts, but are presented and accomplished in totally different ways. With a single, simple button tap, you can access all of the features of the app: you don't need multiple menus and differentiation. If you start talking, the application immediately creates a new note with what you're dictating and syncs it with your calendar. As a user, you are immersed in an intuitive experience that lets you be in charge of what you're doing, while presenting you with an almost GUI-free environment.


And what about our more subtle, cognitive senses? Wearables bring the human part of the equation more fully into account with a deeper emotional connection: stress, fear and happiness are all amplified in this environment. You should understand how your product affects those sensations and how to avoid or take advantage of those effects.
Just remember: let the cognitive processes of the users lead and not the other way around.

Voice User Interface (VUI)

In the past, designing a Voice User Interface (VUI) was particularly difficult. In addition to all the historical challenges with voice recognition software, VUIs present a challenge due to their transient and invisible nature. Unlike visual interfaces, once verbal commands and actions have been communicated to the user, they are gone. One approach that has been employed with moderate success is to give a visual output in response to the vocal input. Still, designing the user experience for these types of devices presents many of the same limitations and challenges as before, so we'll give a brief overview here of what people like and don't like about VUI systems, along with some helpful design patterns.
For starters, people generally don't like to speak to machines. This might seem like a broad assumption, but it becomes clearer if we consider what speaking is all about. We interact with someone on the understanding that the person can comprehend what we're saying, or at least has the "tools" and "abilities" to do so. But even that is not generally sufficient. Rather, speaking with someone typically involves a feedback loop: you send out a message (carefully using words, sounds, and tones to help ensure that what you say is understood in the way you intended). The other person then receives the message and hopefully provides some form of feedback that confirms proper understanding.
With machines, though, you don’t have any of this. You will try to give a command or ask for something (typically in the most metallic voice you can muster!) and hope for the machine to understand what you’re saying and give you valuable information in return.
Moreover, speech as a means for presenting the user with information is typically highly inefficient. The time it takes to verbally present a menu of choices is very high. In addition, users cannot see the structure of the data and need to remember the path to their goal.
The bottom line here is that these challenges are real and there are not yet any “silver bullet” solutions that have been put forth. In most cases, what has been proven to be most effective is to incorporate support for voice interaction, but to limit its use to those places where it is most effective, otherwise augmenting it with interface mechanisms that employ the other senses. Accepting verbal input, and providing verbal feedback, are the two most effective ways to incorporate a VUI into the overall user experience.
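
As a hedged illustration of that last point, the sketch below pairs every spoken input with both spoken and visual feedback, so the transient voice channel is always backed by something the user can see. The listen, speak, and display functions are hypothetical placeholders, not any particular speech SDK.

```python
def listen() -> str:
    """Hypothetical speech-to-text call; a real product would use its platform's SDK."""
    return "set a timer for five minutes"  # canned utterance for the sketch


def speak(text: str) -> None:
    """Hypothetical text-to-speech output."""
    print(f"[voice] {text}")


def display(text: str) -> None:
    """Hypothetical visual surface: a watch face, a glass overlay, a phone screen."""
    print(f"[screen] {text}")


def handle_voice_command() -> None:
    display("Listening...")                    # visual cue that the microphone is open
    utterance = listen()
    display(f"You said: {utterance}")          # echo the transient input where it persists
    if "timer" in utterance.lower():
        speak("Starting a timer for five minutes.")
        display("Timer: 5:00")                 # a confirmation the ear can't keep, but the eye can
    else:
        speak("Sorry, I didn't catch that.")
        display("Try: 'set a timer for five minutes'")  # visible, recoverable hint


handle_voice_command()
```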

Micro-interactions

While designing for wearable tech, remember that you will find yourself in a different, unusual habitat of spaces and interactions that you’ve probably never confronted before (and neither have most of your users).
Grids and interaction paths, for example, are great for websites and any other setting that requires huge amounts of content to be handled. With wearable devices, though, you have limited space for interaction and should rely on the instinctive basis of the actions you want to implement to give the best experience to users.
Let's take the Apple Watch, for example. For one thing, you will only be able to support one or two concurrent interactions. Also, you don't want the user to need to constantly switch between tapping on the screen and scrolling/zooming with the digital crown on the side of the device. Frankly, even Apple itself made mistakes in this regard. In the Watch's menu interface, for example, safely tapping an icon usually requires zooming in and out with the crown, briefly distracting you from the task you wanted to accomplish.
A great example of effective micro-interaction design can be found in the Starwood Hotels and Resorts app for the Apple Watch. The Starwood application for Apple Watch perfectly fits the brand experience by letting users unlock their hotel room door with the simple tap of a button. You don't need to see the whole process going on to enjoy what this kind of micro-interaction can do: you're in the hotel and you want to open your door without digging in your bag or pocket for the actual key. The app also demonstrates one of the best practices for wearables: the selective process. Don't include more actions or information than you need to, otherwise you will disrupt the experience (as in the Apple menu example). Rather, just show the user the check-in date and the room number. When they tap "unlock", a physical reaction occurs, outside the device and in the real world.




The KISS Principle

The well-known KISS Principle is perhaps even more relevant in the domain of wearables than it is with more traditional user interface mechanisms. The Wishbi ShowRoom app for Google Glass is a great working example of a light UI that enriches the user experience without "getting in the way". It has already been adopted by companies like Vodafone and Fiat. Basically, it facilitates streaming online content live, even at different quality rates. And it does that with two simple actions: one that helps you start the broadcast, and a split screen that captures everything you're seeing. For an action as complicated as live broadcasting, the app manages to be extremely lightweight and unobtrusive.

Wearable Technology Conclusions

Remember, every interface should be designed to empower and educate the user to perform a desired activity more quickly and more easily. This is true for every interface platform, not just wearables.
But that said, please DO go crazy! Wearable technology is a revolutionary field, and even though you can lean on principles and patterns to stay on safe ground, you should always make some room for crazy, playful ideas that might not even make sense. You can make your own answers here.
Wearable tech interfaces represent a wide open playing field. Have at it!
This article was originally published on Toptal

References:

http://www.toptal.com/designers/ux/the-psychology-of-wearables

About Author:

Antonio Autiero is a Software Engineer at Toptal. Antonio is a digital art director and UX designer with experience in information architecture, brand development, and business design. He has been working for over 10 years all around the world with amazing clients such as Nike, Rolex, Ferrari, and more. He lives in Gubbio, Italy where he's surrounded by beautiful countryside, and he always likes to meet nice people.
Email: irene@toptal.com





Sunday, 23 August 2020

Compatibility of Internet of Things (IoT) and User Experience (UX)

Internet of Things (IoT) and User Experience (UX)


Internet of Things (IoT)
The Internet of Things (IoT) is one of the fastest-growing fields, driving interaction with appliances, tools, and devices in entirely new and unexpected ways. IoT is becoming a bigger part of our everyday lives, from our gym sessions and walks to travel planning, home security, and countless other uses. As IoT devices become more common, the user experience of using them becomes increasingly important. IoT is a new technology, yet users have a very low tolerance for the inconvenience of learning something new or doing something differently. That's why user experience is vital for IoT products.

Initially, IoT solutions focused primarily on technical capabilities. But with more than one-third of these new objects and services abandoned by their users after only six months of use, it became clear that the success of IoT depends heavily on the UX design of the connected objects. With the UX community still new to IoT projects, there is a lot to be done to develop best practices specifically for them.




UX for IoT is different and complex because there are two sequences of interaction: from the user to the virtual system, and from the virtual system to the physical system. Each physical system also has its own set of interactions, and different users may use the same application. Hence designers face new challenges while working on IoT projects.

Therefore, as UX experts, we need to know some of the challenges of IoT and find out whether we can address them to provide a good user experience for our users.

1. Connectivity issues
The Internet of Things, as its name suggests, is primarily based on a network connection, since the internet is needed to transfer data between the app and the physical devices. Most IoT devices require a good WiFi network.
We are used to occasional connectivity problems on our smartphones and computers, such as a poor connection during a video call or slow websites. But we don't expect such problems with physical devices like toasters, room lighting, or doors that open and close. The user experience expectation of everyday physical things is different from that of the web. When we turn on a room light, we expect an immediate response; otherwise we assume the light is defective. But when the internet connection is poor, it may take a couple of minutes for the lights to turn on. Users might not be ready for such delays in response from physical products, which can lead to frustration, worry, or abandonment of the IoT system. First-generation users will certainly have to go through such problems.
Connectivity issues are going to have a significant impact on the IoT experience, and there is little we can do about the root cause, since it is a technical problem.
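
What design can do is make that technical limitation visible instead of letting a command hang in silence. Below is a minimal sketch of the idea: map raw connectivity into the three states a user actually needs to see. The ping_device helper is a hypothetical stand-in for whatever health check a given platform provides.

```python
from enum import Enum
from typing import Optional


class DeviceState(Enum):
    ONLINE = "Responding normally"
    DEGRADED = "Connected, but responses may be slow"
    OFFLINE = "Unreachable. Check your Wi-Fi or hub."


def ping_device(device_id: str) -> Optional[float]:
    """Hypothetical health check: round-trip time in seconds, or None if unreachable."""
    return 3.2  # placeholder reading for the sketch


def connectivity_state(device_id: str, slow_threshold_s: float = 2.0) -> DeviceState:
    rtt = ping_device(device_id)
    if rtt is None:
        return DeviceState.OFFLINE
    if rtt > slow_threshold_s:
        # Warn the user before they conclude the bulb or lock is broken.
        return DeviceState.DEGRADED
    return DeviceState.ONLINE


print(connectivity_state("front-door-lock").value)  # "Connected, but responses may be slow"
```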


2. Multiple apps for different devices
One of the main problems with how IoT devices function is that there are plenty of connected devices. Individually these devices might be smart and useful, but as a team they might not work in sync. Users need a different app for each device, which becomes overwhelming and creates plenty of mental overload. In the current scenario, users cannot control their whole collection of IoT devices from a single app or make them sync data. So, for instance, if you have a smart car, a smart gym, and a smart toaster, you have different apps to control them. You won't be able to set a rule that adjusts the room temperature according to your workout data, starts your toaster 30 minutes after your workout, and starts your smart car as soon as you lock the door from outside. This does not give a unified user experience across IoT products. This kind of broken UX can make tasks more complicated, whereas the purpose of smart devices is to make the user's life easier. Another example: many smart-home apps don't work together; a user might control the sound system with one app and the lights with another. In some cases, lighting from different manufacturers even requires different apps to control, which leads to a bad user experience.
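
One way to read this complaint is as a missing shared abstraction. The sketch below is purely illustrative: the device classes and the workout-to-toaster rule are invented for this example, not any vendor's real API, but they show how a single automation layer could let one rule span devices that today live in separate apps.

```python
from abc import ABC, abstractmethod


class SmartDevice(ABC):
    """A vendor-neutral contract that every device adapter would implement."""

    @abstractmethod
    def send(self, action: str, **params) -> None:
        ...


class Thermostat(SmartDevice):
    def send(self, action: str, **params) -> None:
        print(f"Thermostat: {action} {params}")  # a real adapter would call the vendor's API


class Toaster(SmartDevice):
    def send(self, action: str, **params) -> None:
        print(f"Toaster: {action} {params}")


def after_workout_rule(workout_minutes: int,
                       thermostat: SmartDevice,
                       toaster: SmartDevice) -> None:
    """One rule spanning devices that would otherwise each need their own app."""
    if workout_minutes >= 30:
        thermostat.send("set_temperature", celsius=21)  # cool the room down
        toaster.send("schedule", delay_minutes=30)      # breakfast ready after the shower


after_workout_rule(45, Thermostat(), Toaster())
```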

3. Synchronising Data
Another IoT design challenge is separating useful data from irrelevant data when a lot of data is flowing from various sources and devices. Synchronising the data flow between different smart devices is key to UX design on an IoT platform, and it is also a difficult task.
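
As a small illustration of what "separating useful from irrelevant data" can mean in practice, this sketch (with made-up temperature readings) drops readings that haven't changed meaningfully before they reach the UI or the sync pipeline:

```python
def significant_changes(readings, threshold=0.5):
    """Yield only readings that differ from the last reported value by more than `threshold`.

    `readings` is an iterable of (sensor_id, value) pairs, e.g. temperatures in degrees C.
    """
    last_reported = {}
    for sensor_id, value in readings:
        previous = last_reported.get(sensor_id)
        if previous is None or abs(value - previous) > threshold:
            last_reported[sensor_id] = value
            yield sensor_id, value  # worth syncing and showing to the user


stream = [("kitchen", 21.0), ("kitchen", 21.1), ("kitchen", 21.9), ("hall", 19.0)]
print(list(significant_changes(stream)))
# [('kitchen', 21.0), ('kitchen', 21.9), ('hall', 19.0)]
```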



4. Third-party integrations
IoT products rely on supplies from many third-party vendors. Different components (sensors, processors, controllers) from different vendors can be difficult to integrate and can lead to a disjointed user experience. Application processors, sensors, controllers, and platforms may not all come from one supplier, and expecting different pieces to work together to produce a seamless UX can be unrealistic. IoT devices also require repairs and updates at regular intervals, and third-party integrations are not always seamless.

5. Impact of hardware
Hardware is a big part of the solution in IoT products and, depending on its type and quality, it has a large effect on the user experience. Hardware selection is generally based on technical specifications, compatibility with the software being run, and cost to the user. The combination of hardware therefore shapes the user experience to a large extent. If the user chooses a lower-cost system, some functionality might not be available, which compromises the experience. It is therefore important to select appropriate hardware components.

Conclusion
The key to creating great IoT user experiences lies in understanding the fluid nature of IoT and designing interaction for it. Good user research is a must in order to understand users' expectations of IoT devices. The fundamentals always remain the same; the designer just needs to spend more time understanding how IoT works.
In the IoT domain, a user task flow may span different devices, different interaction paradigms, and different contexts of use. This increases complexity by orders of magnitude for the designer. Conversely, user expectations are also increased because users expect the experience of using these disparate connected devices in concert to be more than the sum of their individual experiences.

About Author

Neha Srivastava
Manager | User Experience | HCL Technologies, Noida 
Email   |  LinkedIn 



Saturday, 19 January 2019

User Experience in Artificial Intelligence

Two years back, Toyota offered us a glimpse into their version of the future where, surprisingly, driving is still fun. Concept-i is the star of an autonomous future where people are still driving. And in Toyota's case, it's so much fun because they're cruising along with their buddy Yui, an AI personality that helps them navigate and communicate, and even contributes to their discussions.


Yui is all over the car, controlling every function and even taking the wheel when required to. It's definitely an exciting future where the machine sounds and “feels” like a human, even exhibiting empathetic behaviour.

Related: Preparing for the Future of AI

That's the kind of future I'd imagine awaits user experience (UX) in the world of AI: a time when the human-AI connection is so deep that some experts say there will be "no interface." But currently, UX does depend on an interface. It requires screens, for instance, and screens alone don't do the experience justice. Integrating AI into the process will mean a better experience all around.

From websites to homes and cars, here's how AI could help patch the holes and bring UX closer to maximum potential.



1. Complex data analysis.

Until now, to improve user engagement in their products, UX teams have turned to tools and metrics such as usability tests, A/B tests, heat maps and usage data. However, these methods are soon to be eclipsed by AI. It's not so much because AI can collect more data -- it's how much you can do with it.

Using AI, an ecommerce store can track user behaviour across various platforms to provide the owner with tips on how they can improve their purchasing experience, eventually leading to more sales. AI can be used to tailor the design to each user’s specifications, based on the analysis of the collected data.

All this is achieved through the application of deep learning that combines large data sets to make inferences. Additionally, these systems can learn from the data and adjust their behaviour accordingly, in real time. Thus, designers applying AI in their work are likely to create better UIs at a faster rate.
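
As a deliberately simplified stand-in for that idea (frequency counting rather than deep learning, and with invented click data), the sketch below reorders navigation items so each user's most-used sections surface first:

```python
from collections import Counter


def personalised_nav(click_log, default_order):
    """Reorder navigation so each user's most-visited sections come first.

    `click_log` is a list of section names the user has opened;
    `default_order` is the designer's fallback ordering for unused sections.
    """
    usage = Counter(click_log)
    return sorted(default_order, key=lambda s: (-usage[s], default_order.index(s)))


clicks = ["orders", "orders", "wishlist", "orders", "support"]
print(personalised_nav(clicks, ["home", "wishlist", "orders", "support", "account"]))
# ['orders', 'wishlist', 'support', 'home', 'account']
```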








2. Deeper human connection.



By analysing the vast amounts of data collected, AI systems can create a deeper connection with humans, enhancing their relationship. This is already happening in a couple of industries. When you think of Siri, you see a friendly-voiced (digital) personal assistant. When Amazon first introduced Alexa, it took the market by storm, but its usefulness could only be proven over time. And it was. Smart-home owners are using it to do a million things, including scouring the internet for recipes, scheduling meetings, and shopping. It's also being used in ambulances. Even Netflix's highly predictive algorithm is a prime example of AI in use.

Toyota says Concept-i isn't just a car, but a partner. From the simulation video, you can see that Yui connects with the family on a level that current UX doesn't reach.

By using the function over and over, consumers end up establishing an interdependent relationship with the system. That's exactly how AI is designed to work. You use the system; it collects data; it uses that data to learn; it becomes more useful; it gives a better user experience; you use it more, so it collects more data, learns, and becomes more useful still; and the cycle continues. You don't even see it coming -- and before you know it, you're deeply connected.



3. More control by the user. 

A common concern about the adoption of AI into everyday life is whether the machines might eventually rise and take over the world. In other words, users are concerned about losing control over the systems. It's a legitimate concern, with autonomous cars, robot guards, and smart homes expected to become commonplace.

This lack of control is mirrored in the skepticism for the future, but it can also be seen in commerce and other areas where user experience is of great importance. For instance, a user will be more likely to enter their card information into a system if they feel they have control over when money is transferred, to whom it goes and that they can retrieve it in case something goes wrong.
As AI develops, users will gain more control over the system, gradually improving trust which will lead to more usage. 

Ways in Which AI Could Enhance Your Company's UX

UX design is about a designer trying to communicate a machine's model to the user. Meaning, the designer is trying to show the user how the machine works and the kind of benefits they can get from it, from the former's point of view. 

Traditionally, this involved following certain rules, and designers understood them very well. A designer knows how to create a web page by following certain rules that they can probably manipulate. With AI, however, the design is dependent on a complex analysis of data instead of following sets of rules. To be able to design using AI, designers will have to really understand the technology behind it. 

Mixing UX and AI, as we have experienced with "AIBO"


Artificial intelligence
Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks - as, for example, discovering proofs for mathematical theorems or playing chess - with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

What Is Intelligence?

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp's instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced.


Fixing the AI in real time

Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special purpose and general purpose. A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. In contrast, a general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is means-end analysis—a step-by-step, or incremental, reduction of the difference between the current state and the final goal. The program selects actions from a list of means—in the case of a simple robot this might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT—until the goal is reached.
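
A minimal sketch of means-end analysis for a simple robot like the one described above (the grid world and the movement-only action list are illustrative): at each step it applies whichever action most reduces the difference between the current position and the goal.

```python
# Movement subset of the robot's action list, expressed as (dx, dy) offsets.
ACTIONS = {
    "MOVERIGHT": (1, 0),
    "MOVELEFT": (-1, 0),
    "MOVEFORWARD": (0, 1),
    "MOVEBACK": (0, -1),
}


def difference(state, goal):
    """The 'difference' being reduced: Manhattan distance on a grid."""
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])


def means_end_analysis(start, goal):
    """Repeatedly pick the action that most shrinks the current-to-goal difference."""
    state, plan = start, []
    while state != goal:
        name, (dx, dy) = min(
            ACTIONS.items(),
            key=lambda item: difference((state[0] + item[1][0], state[1] + item[1][1]), goal),
        )
        state = (state[0] + dx, state[1] + dy)
        plan.append(name)
    return plan


print(means_end_analysis((0, 0), (2, 1)))
# ['MOVERIGHT', 'MOVERIGHT', 'MOVEFORWARD']
```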

Many diverse problems have been solved by artificial intelligence programs. Some examples are finding the winning move (or sequence of moves) in a board game, devising mathematical proofs, and manipulating “virtual objects” in a computer-generated world.
About Author
Jagannathan Kannan
UX Lead Designer @ Verizon wireless
LinkedIn





  