How to improve UX in AI, part 1: Focusing on Search

Co-founder and CEO at 10 Senses

These days, Artificial Intelligence (AI) is on everyone’s lips.

Although the birth of the concept dates to the 1950s, the true fame of AI was sparked by the introduction of Large Language Models (LLMs) in recent years.

At that time, AI was already present in our everyday lives, in robotic vacuum cleaners or speech recognition software on our computers. Nevertheless, it was LLMs like ChatGPT, Bard, Gemini, and others that made us realize how hugely AI can impact how we work and live.

They have shown significant potential not only in understanding and generating human-like text but also in transforming how we interact with technology.

Unfortunately, hardly anyone is talking about user experience (UX) in AI and LLMs, which is extremely important as it bridges the gap between complex technologies and user needs.

Let’s then check what UX in AI means and how to enhance it.

What is UX in AI?

In brief, UX in AI refers to the user experience design specifically tailored to AI systems.

It involves creating intuitive interactions between users and AI-driven products by designing interfaces that:

      • clearly communicate the capabilities and limitations of AI,

      • ensure transparency,

      • provide feedback that helps users understand AI behavior.

These qualities are especially crucial with the upcoming EU AI Act, about which you can read here.

    Moreover, good UX in AI emphasizes ethical considerations, such as privacy and bias, to create fair and user-centric experiences.

    Finally, UX in AI aims to make advanced technology accessible and beneficial for all users.

    Therefore, ensuring a seamless and intuitive UX in AI-driven products can enhance not only user satisfaction but also adoption rates and compliance with European legislation.

    The most common user issues with LLMs

As already mentioned, Large Language Models are used by millions of people around the world. Nevertheless, a model's popularity does not automatically translate into a good user experience.

Although everyone was initially impressed with the capabilities of these tools, the test of time has shown that all that glitters is not gold, and there are certain drawbacks that should be fixed.

    Issue 1: Losing context during conversation

    Currently, we ask LLMs a lot of questions, whether business-related or private.

Nevertheless, conversations with chatbots are usually long. The first prompt rarely resolves our issue or returns a satisfactory response.

    As a result, we provide more and more details, which prolongs the chat. Consequently, we may very quickly lose context in a long conversation.

    Issue 2: The user does all the thinking

AI chatbots are advanced technologies with elaborate algorithms behind the scenes. Nonetheless, it is always the users who steer the conversation and do the thinking.

Although AI chatbots provide answers, it is the users who must keep the conversation on the right track and come up with the most accurate prompts to get the solution they need.

    Issue 3: You cannot fully trust the answers

AI chatbots are great assistants in everyday work, but you need to be careful when using them. They do make mistakes and sometimes have difficulty distinguishing accurate information from fake news.

    Currently, AI chatbots generate responses to our prompts but don’t tell us what the source of information is. As a result, it is difficult to sift through the data and fully trust the correctness of the responses.


    Better UX with better information search

    As you can see, there are certain problems with Large Language Models that impact the user experience negatively.

    Luckily, there are hands-on solutions that can resolve these issues, and one of these is focusing on search.

    Why is improving search in LLMs important?

    Compliance with the AI Act

Better searchability and the quoting of sources are especially important, not only for the UX but also for the upcoming EU AI Act, a piece of legislation coming into force that requires all AI models to be fair and high-risk models to monitor for bias.

As a result, high-risk models not only can but must provide information about their sources. Medium- and low-risk models are not required to do so, but if they do, it can have a significant impact on the user experience and usability of Large Language Models.

    More effective document screening in companies

Such a capability in AI models can be extremely helpful to companies that process large volumes of documents. If employees used LLMs to ask questions about the contents of those documents, it could save many working hours that could be devoted to more value-adding activities.

    What is more, it could help in scenarios where knowledgeable employees leave the company or retire. Instead of digging through the documentation, employees taking over the responsibilities would be able to quickly find what they needed by simply writing prompts for AI models.

They would not only easily get the information they need but would also know exactly where a given piece of data is located in the document.
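The document-screening idea above can be sketched in a few lines. The in-memory corpus and the naive keyword match below are purely illustrative assumptions; a real system would use an LLM over a retrieval index, but the key point — every answer carries the title and page it came from — is the same:

```python
# A minimal sketch of document screening with source locations.
# The corpus and the keyword matching are assumptions for illustration;
# a production system would use an LLM with a retrieval index instead.

documents = {
    "HR handbook": {
        1: "Employees accrue 26 days of leave per year.",
        2: "Remote work requests are approved by the line manager.",
    },
    "Onboarding guide": {
        1: "New joiners receive laptop access on day one.",
    },
}

def find_passages(query: str) -> list[dict]:
    """Return every page whose text mentions a query word, with its exact location."""
    query_words = {w.lower() for w in query.split()}
    hits = []
    for title, pages in documents.items():
        for page, text in pages.items():
            text_words = {w.lower().strip(".") for w in text.split()}
            if query_words & text_words:
                hits.append({"title": title, "page": page, "text": text})
    return hits
```

An employee taking over a colleague's duties could then ask, for example, `find_passages("remote work")` and be pointed straight to page 2 of the HR handbook instead of leafing through the whole file.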

What basic information about sources should be included?

    As you can see, adding basic information about data sources can have a significant impact not only on the user experience but also on the flow of operations and processes within companies.

    The basic information about data sources that every AI model could include is:

    1. General information, such as:

    • document type,
    • title,
    • article,
    • chapter,
    • page.

     

    2. Extracted metadata, like:

    • client ID,
    • category,
    • process owners.

     

    3. Filters, for example, types of documents included in the search.
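The three groups of information listed above could be modeled as a small schema attached to every answer. The sketch below is one possible shape, not an actual product API; all names are illustrative:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SourceReference:
    """Where a cited passage lives: general information plus extracted metadata."""
    document_type: str                 # e.g. "contract", "manual", "policy"
    title: str
    article: Optional[str] = None
    chapter: Optional[str] = None
    page: Optional[int] = None
    client_id: Optional[str] = None    # extracted metadata
    category: Optional[str] = None
    process_owners: list[str] = field(default_factory=list)

@dataclass
class Answer:
    """An LLM answer paired with the sources it was drawn from."""
    text: str
    sources: list[SourceReference] = field(default_factory=list)

def filter_sources(answer: Answer, document_types: set[str]) -> list[SourceReference]:
    """Apply a filter: keep only references from the allowed document types."""
    return [s for s in answer.sources if s.document_type in document_types]
```

Attaching such a structure to each response is what lets a user jump from an answer straight to, say, chapter and page, and the `filter_sources` helper shows how the third item on the list, filtering by document type, could work.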

Such data can also be helpful to, for example:

      • patients looking for information on drugs or therapies,

      • doctors querying patient documentation or therapy manuals,

      • sales representatives analyzing client data,

      • data analysts taking over responsibilities,

      • new joiners to the company or the team, starting with their new tasks.

Once users can clearly see the basis behind each piece of information generated by LLMs, they can place greater trust in AI models and benefit more from using them.

Instead of manually verifying each extract themselves, they can quickly inspect the sources and check whether the information comes from reliable, high-quality material.

What is more, within companies, it could streamline processes and facilitate the screening of internal documents and files. Such a feature could save companies a lot of money and allow employees to devote more working time to strategic tasks.

Summing up, AI with Large Language Models has huge potential to revolutionize industries all around the world, bringing innovative solutions that were previously unimaginable.

    Nevertheless, in the frenzy of taking LLMs to the next level, we should not ignore basic considerations like UX in AI, which is extremely important as it applies to each and every user. Improving search in AI models is the first lever that should be taken care of, but there are more, which we will discuss in the next parts of this series. Stay tuned!

    Talk to our expert

    Are you looking for expert skills for your next data project?

    Or maybe you need seasoned data scientists to extract value from data?

    Fill out the contact form and we will respond as soon as possible.