Info Image

Navigating Conversational AI for Businesses

Navigating Conversational AI for Businesses Image Credit: Prostock-studio/BigStockPhoto.com

As companies in Southeast Asia increasingly embrace digitisation to stay competitive, the question of integrating conversational AI into business processes becomes more pertinent. Powered by Large Language Models (LLMs), conversational AI is a form of generative AI that enables people to intuitively interact with deep learning models via natural language to generate customised content. Conversational AI is sector-agnostic, and with its ability to boost business productivity, many businesses across the region might consider adopting the technology to streamline workflows. At even the most basic level, the automation of repetitive tasks in areas such as HR or customer service, information retrieval and policy querying, conversational AI would already bring time and resource savings for the organisation.

According to a recent report by IDC InfoBrief commissioned by Dataiku, the percentage of businesses across Asia Pacific using AI doubled from 2020 to 2023, from 39% to 76%. Enterprises in Asia Pacific surveyed expressed that they would invest in AI for internal integration and automation projects, aiming to increase productivity, agility, efficiency, and cost reduction.

However, the integration of conversational AI-enabled by generative AI into business operations also raises concerns regarding data protection, security threats, and ethical considerations.

The Challenge of Personal Data in Conversational AI

Most companies aspire to cultivate their own training datasets when adopting conversational AI. This ensures a higher probability of obtaining an accurate output to user queries when compared to a model without such fine-tuning. These datasets, comprising customer data, HR data, and various other information, become the foundation for interactions within conversational AI models. However, placing proprietary and sensitive data within these models carries the inherent risk of unintentional leakage of personally identifiable information (PII). It is not uncommon for organisations to inadvertently include PII when constructing datasets, potentially compromising user privacy.

The importance of scrutinising these datasets cannot be stressed enough. Imagine engaging with a customer service AI chatbot and stumbling upon someone else’s personal information. This scenario, known as the leakage of PII, highlights a significant concern for privacy in conversational AI. Stringent guardrails need to be put in place to ensure data security and the privacy of the dataset.

Navigating Conversational AI Safely: The Five Stages

For companies across Southeast Asia looking to safely embrace conversational AI as part of their digitalisation efforts, they should take note of each of these five critical stages to ensure the safe and responsible integration of generative AI into business operations:

Ensuring the dataset is unbiased and diverse, representative of the intended coverage as well as not containing personal data, company confidential information (in cases where the chatbot is meant to be customer-facing), or copyright-infringing information.

  1. Data Set: Ensuring the dataset is unbiased and diverse, representative of the intended coverage as well as not containing personal data, company confidential information (in cases where the chatbot is meant to be customer-facing), or copyright-infringing information.
  2. Pre-processing: Undertaking a stringent review and cleanup of the data set, including the diligent removal of any personal data.
  3. Model Selection: Choosing established Large Language Models (LLM) that align with the organization's needs.
  4. Fine-Tuning: Adapting the model for specific Natural Language Processing (NLP) tasks.
  5. Deployment, Training and Governance: Doing continuous testing and providing staff with AI training and establishing an AI governance team to oversee responsible AI practices.

Threats in the New Age

Traditionally, threats to data security were associated with storage breaches, but with the advent of LLMs, the landscape has shifted. Similar vulnerabilities that plagued data security, such as SQL injection, now extend to LLMs, posing new challenges. A major threat with conversational AI is the potential leakage of intellectual property, with terms like prompt injection, prompt leakage, and jailbreaking becoming synonymous with security issues in LLMs. To counter these threats, organisations must stay vigilant to potential vulnerabilities in AI systems and implement robust security measures to safeguard against potential breaches.

An Instrument of Both Attack and Defense

The rapid advancement of generative AI has spawned more powerful exploits. Malicious actors have also tapped into the technology to create cyberattacks that are reaching new levels of sophistication in social engineering and realism. Cybercriminals are employing it to craft convincing phishing scams, create fake profiles, and even simulate personnel of authority through deep fakes. All this corresponds to a surge in ransomware incidents from victims clicking on baited attachments and webpages, resulting in them having their data held hostage by hackers. At the same time, in data protection, the ability to leverage AI for fraud detection and cybersecurity can offer organisations the means to detect irregularities in data and combat cyber threats perhaps more effectively than before. As such, it is being employed not only as a potent tool for malicious activities but also a defense mechanism against cyber threats. This dual nature of AI, as both an attacker and a defender, underscores the importance of staying ahead in the race of technological advancements.

Conversational AI Challenges in 2024

Conversational AI is developing at breakneck speed. As AI systems become more autonomous, ethical concerns surrounding bias, fairness, and accountability become more pronounced. How will organisations navigate these ethical considerations to ensure responsible AI deployment?

As technology advances, challenges for businesses also increase. Take for example, how chatbots, once a cumbersome project, have evolved dramatically. Today, chatbots are built on LLMs, and with their capabilities for semantic and similarity searches, can generate human-like responses, making it increasingly difficult to distinguish between human and machine interaction.

The Turing Test, a benchmark for determining a machine’s ability to exhibit human-like intelligence, becomes increasingly relevant. The success of conversational AI emulating real conversations brings the challenge in differentiating truth from fiction, especially if and when AI is involved in decision-making processes, such as hiring.

The challenge in navigating and adopting conversational AI within the organisation remains on how the technology is responsibly integrated into the workflows. Ensuring the AI governance policies within the organisation are adhered to, and upskilling employees on AI technologies will facilitate smoother integration and empower teams to harness the full potential of these systems.

The regulatory landscape for AI is evolving and businesses must stay abreast of changing requirements. Adhering to data protection and privacy regulations is crucial to avoiding legal repercussions and maintaining customer trust. If organisations can adopt a holistic approach that balances innovation with ethics and security, they could find that the benefits in adopting conversational AI are immense.

NEW REPORT:
Next-Gen DPI for ZTNA: Advanced Traffic Detection for Real-Time Identity and Context Awareness
Author

Kevin Shepherdson is the CEO and Founder of Straits Interactive, a data privacy consultancy and training provider, based in Singapore.

PREVIOUS POST

Push to Eliminate 'Digital Poverty' to Drive Demand for Satellite-Powered Broadband Connectivity Post Pandemic