
Challenges in Rethinking User Interface Design for Age of AI



Published on 28 June 2025 | Avi Singh, Principal Product Designer, Flextract

The history of user interfaces has been a steady progression of interaction patterns: from the WIMP (Windows, Icons, Menus, Pointers) paradigm born at Xerox PARC to the touch-first mobile UIs that ushered in direct manipulation. Each of these shifts has brought fresh opportunities along with fresh challenges. With the rise of Generative AI we are entering an era marked by minimalism, prompts, predictions and invisible reasoning.


Designers are now expected to build trust, help users learn to prompt, make reasoning understandable and elevate existing workflows, and they are expected to do all of this without the traditional visual interfaces. In this article I explore the most important design challenges facing the profession and how we can begin to solve them.



Breaking Established UX Patterns

Since the early days of GUI development at Xerox PARC in 1979, users have had access to carefully crafted user interface workflows. These GUIs use the WIMP (windows, icons, menus, pointers) paradigm, which gives users control over the computer. Current AI interfaces break this metaphor by presenting users with a text box and a mostly empty page. Chat-based systems fail to take advantage of the many techniques interface designers have developed to communicate with users through GUIs.

Can designers think creatively and combine traditional patterns with the new prompt-based user interfaces? Some of the patterns that can supercharge AI interfaces are:

  • Use rich, dynamically generated user interface elements like cards instead of bullet points.
  • Render charts so users can comprehend trend information faster.
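As an illustrative sketch of the first pattern, the backend could return structured data that the UI renders as cards instead of raw bullet text. Everything here (`CardData`, `toCard`) is a hypothetical shape for illustration, not any particular framework's API:

```typescript
// Hypothetical structured payload an LLM backend might return
// instead of free-form markdown text.
interface CardData {
  title: string;
  body: string;
  metric?: { label: string; value: number };
}

// Render a card as an HTML string; a real app would emit framework
// components (React, Vue, etc.) rather than raw markup.
function toCard(card: CardData): string {
  const metric = card.metric
    ? `<footer>${card.metric.label}: ${card.metric.value}</footer>`
    : "";
  return `<article class="card"><h3>${card.title}</h3><p>${card.body}</p>${metric}</article>`;
}
```

The same structured payload could feed a chart component for the second pattern, letting one response render as cards, charts, or plain text depending on context.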



Cognitive Overload with Prompts

Prompt-based user interfaces take users back to the days of terminal-based interaction, before GUI computers became commonplace. Prompts have specific syntax, and users have to understand the capabilities of different commands, which must be learned and remembered. This learning curve kept computers usable only by niche users in academia, research and the military: the kind of users who would spend the time and energy to learn these systems despite the difficulties.


Today's prompt-based AI tools risk repeating that history, primarily because the capabilities of chat-based AI systems are not well understood. Some prompt-based systems, like MidJourney, have their own arcane prompting conventions. We need patterns that support prompt-based systems: auto-complete suggestions, starter templates and interactive prompt builders can all help users get up and running faster.
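A minimal sketch of the auto-complete idea: rank a set of starter templates against what the user has typed so far. The template list and matching rule below are illustrative assumptions, not a production ranking algorithm:

```typescript
// Illustrative starter templates an empty prompt box might offer.
const starterPrompts = [
  "Summarize this document in three bullet points",
  "Draft a reply to this email",
  "Explain this chart to a new team member",
];

// Suggest prompts: show starters for an empty box, otherwise keep
// only templates containing the typed text (case-insensitive).
function suggestPrompts(input: string, templates: string[] = starterPrompts): string[] {
  const q = input.trim().toLowerCase();
  if (q === "") return templates.slice(0, 3);
  return templates.filter((t) => t.toLowerCase().includes(q));
}
```

A real implementation would rank by usage history or embedding similarity, but even this trivial version turns the intimidating empty box into a set of learnable examples.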


Explainability 

Explainability is a system's ability to communicate cause and effect in a way users can easily understand. It is closely linked to the reasoning ability of LLMs. Models like ChatGPT and Claude are capable of complex reasoning tasks. Our job as designers is to explain this reasoning to end users, or to provide enough information that they can trace the path of the reasoning themselves.


So far LLM reasoning has been difficult for end users to understand. This has led to confusion and a lack of trust in the answers provided by tools such as ChatGPT and enterprise LLM implementations. The problem is far greater when these models are deployed in enterprise-grade applications used by professionals to boost productivity. If such applications cannot communicate their reasoning, users have no way of knowing when an LLM response is wrong, and it is even more problematic when a response seems correct but is not.


Add to this the inability of LLMs to admit that they don't know an answer. There have been situations where Gemini suggested adding glue to pizza to keep the toppings from slipping off.




Hallucination

Hallucination is a problem in which Generative AI tools (like ChatGPT, Claude, etc.) don't know that their training data does not contain a correct answer to the user's question. In these circumstances the model generates an answer that may be gibberish at best and convincingly wrong at worst.


Hallucination can occur due to the following reasons:

  • A poor training data set can lead to wrong predictions by the LLM.
  • Data sets that are overfit to the problem being solved can leave the LLM with insufficient information for the cases it is actually used in.
  • Slang or phrases that are common in spoken language but absent from the training data can lead the LLM to misunderstand user requests.
  • Insufficient techniques for solving specific problems, such as math, can lead the LLM to answer with out-of-context phrases from its training data.
  • Deliberately trying to make the LLM hallucinate or say things it ordinarily would not, e.g. with trick questions. This is referred to as an adversarial attack.


How can designers solve or control hallucinations? To a great degree hallucination is a technical challenge within LLMs, but designers can provide ways for users to detect when it happens and stay in control.


  • Citations allow users to detect when the AI is hallucinating and to access the actual information, so the response remains useful even when it is flawed.
  • Designers can audit response logs to see whether the AI produced appropriate responses, which reveals whether the training data needs to be augmented.
  • Provide confidence scores or uncertainty flags so users know when the AI is not certain of its response.
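These ideas can be combined into a response envelope that carries citations and a confidence score alongside the generated text. The field names and the 0.7 threshold below are assumptions for illustration; real systems expose confidence in model-specific ways:

```typescript
interface Citation {
  title: string;
  url: string;
}

// Hypothetical envelope: text plus the evidence the UI needs to
// render citations and an uncertainty flag.
interface AiResponse {
  text: string;
  citations: Citation[];
  confidence: number; // 0..1, as reported by the backend
}

// Show an "unsure" badge when confidence is low or no sources exist.
function needsUncertaintyFlag(res: AiResponse, threshold = 0.7): boolean {
  return res.confidence < threshold || res.citations.length === 0;
}
```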


Building Trust Through Transparency

If you want to build trust with users, give them control and access to information. Designers can use the guidelines below to build trust in AI systems:

  • Give users the ability to get to the source of information, by linking responses to the material used to generate them. Users may not follow these links, but their presence reassures users that the system draws on legitimate sources. If the AI hallucinates or gives a bad answer, users can attribute the fault to the source rather than the system.
  • Provide ways for users to give feedback. This lets users retain control and vent when a response is bad, and it gives the system's designers a way to learn what users think of the responses.
  • Remembering user preferences through memory features is a great way to generate relevant content, which increases trust and gives designers the ability to personalize responses. The caveat is the privacy risk of creating memory: users need to be kept aware when information is added to memory, and trust is enhanced further when they can remove specific information from it.
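The memory caveat can be sketched as a store where every item is surfaced to the user and removable on demand. The class below is a hypothetical shape for illustration, not any product's actual memory API:

```typescript
interface MemoryItem {
  id: string;
  fact: string;
  userNotified: boolean; // the UI must tell the user this was saved
}

// User-visible memory: everything added is flagged for disclosure,
// and the user can delete any item at any time.
class UserMemory {
  private items = new Map<string, MemoryItem>();

  remember(id: string, fact: string): MemoryItem {
    const item: MemoryItem = { id, fact, userNotified: true };
    this.items.set(id, item);
    return item;
  }

  // User-initiated removal; returns false if the item did not exist.
  forget(id: string): boolean {
    return this.items.delete(id);
  }

  list(): MemoryItem[] {
    return [...this.items.values()];
  }
}
```

The design point is that `forget` is a first-class operation exposed in the UI, not a support request.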




Elevate User Workflows

Users have specific workflows that they have followed for years, and they are expected to keep following them even while using AI tools. So far most AI tools have been built as standalone products rather than integrated into existing workflows. The challenge for designers is to integrate carefully with these workflows and enhance them with AI. A few ways to achieve this:

  • Research current product users. As a designer you need to understand users' goals and frustrations, as well as what already works for them.
  • Understand the capabilities of the technology. Knowing what the available models can and cannot do will improve your ability to design with AI.
  • Think about what data you need to build enhancements to a feature; more data can be used to train those enhancements. Be aware of the privacy issues involved.


Dealing with Bias

Bias is a major problem when AI drives decisions in systems like hiring, or in other mission-critical environments such as temperature control in a pharmaceutical factory. For example, bias could cause women candidates to be rejected in favor of men simply because more men were hired for similar positions in the past. The AI learns the patterns correctly; the patterns themselves carry the bias. Even subtle choices like showing a percentage match or the ordering of results on a page can perpetuate this kind of bias.


Following are some tools designers can use to detect and handle bias:

  • Audit response logs for signs of bias. Reading user queries alongside the responses can reveal when the AI is not providing acceptable answers.
  • Understand which public datasets are used for training and what biases exist within them. Public datasets produced through academic research usually document the biases they contain.
  • Help users understand the system's limitations and where it is unsure of its responses.
  • If your product uses an external LLM such as ChatGPT or Claude, keep an eye on response logs and customer support requests.
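The first bullet can be partially automated in a crude way: scan logged responses for terms worth a human look. The regex below is a deliberately naive illustration; real bias audits need far more nuance than keyword matching:

```typescript
interface LogEntry {
  query: string;
  response: string;
}

// Naive trigger list: gendered terms in a response are worth a
// designer's manual review; matching proves nothing on its own.
const biasPattern = /\b(he|she|him|her|male|female)\b/i;

function flagForReview(logs: LogEntry[]): LogEntry[] {
  return logs.filter((entry) => biasPattern.test(entry.response));
}
```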


Latency

More restrictions on the AI mean it has to generate and then filter before presenting information to the user, which results in higher latency. If the AI has no restrictions it can generate output that is too random and hallucinate more. Designers need to understand what they want to generate and how it affects the latency of the system; with that understanding they can strike a balance between generation quality and speed.


According to Nielsen's classic response-time limits, a delay of up to 1 second keeps the interaction flowing seamlessly, while anything longer makes users notice the wait; after about 10 seconds they lose attention entirely.


Designers can turn to creative solutions like streaming results as they become available, skeleton UIs, or loading results in stages. These techniques make the system feel fast and responsive and keep users from losing their flow while they wait.
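A sketch of the streaming approach: partial text replaces a skeleton placeholder as each chunk arrives. The `fakeModelStream` generator below is a stand-in of my own for a real token stream from a model provider's API:

```typescript
// Stand-in for a network token stream; a real client would await
// chunks from the model provider's streaming API instead.
async function* fakeModelStream(tokens: string[]): AsyncGenerator<string> {
  for (const t of tokens) {
    yield t;
  }
}

// Accumulate tokens and push each partial result to the UI, so the
// skeleton placeholder fills in progressively instead of all at once.
async function renderStreaming(
  tokens: string[],
  onUpdate: (partial: string) => void
): Promise<string> {
  let text = "";
  for await (const t of fakeModelStream(tokens)) {
    text += t;
    onUpdate(text);
  }
  return text;
}
```

Even when total generation time is unchanged, showing the first tokens within a second keeps the interaction inside the flow threshold discussed above.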




Conclusion: What is next for Designers & AI?

Generative AI is a new technology that promises new solutions and new avenues for solving problems. As with every new technology, there will be new challenges to solve, and those challenges create room for innovative solutions. Designers in this moment need to fall back on what we learned solving the challenges of previous generations of user interfaces.


Designers need to familiarize themselves with the new tools available and make them part of their workflow. Some of these tools, such as response evaluations, are lagging indicators of the user experience, so designers should also de-risk the experience in traditional ways like user research and an iterative design process. The ideal process going forward will mix traditional and new techniques to deliver a great user experience.




Author Bio


Avi Singh

Principal Product Designer, Flextract

I am a UX Designer & Researcher with 15 years of experience. I am currently working as a Principal Product Designer at Flextract AI. Previously I worked at Moveworks where I designed many 0-1 AI products for medium to large enterprises. I am also a design mentor on ADPList with more than 95 mentorship sessions completed. 

LinkedIn  Website



