The Security-Innovation Balancing Act: Trust in Gen AI Era


As we witness unprecedented advancements in Artificial Intelligence (AI), a fresh wave of creativity is sparking major changes in operations across industries. Little wonder, then, that AI dominates spending priorities for the coming year and beyond. However, the transformative capabilities of tools like Generative AI also come with inherent risks, particularly around the handling of sensitive data. Indeed, IDC notes that although corporate executives have Generative AI top of mind for their budgets, there is noticeable skepticism about whether vendors will use their data responsibly. These concerns are far from unfounded.

Simply put, although this technology is groundbreaking, getting the most out of it hinges on managing the wide array of risks it poses. With new threats to privacy, cybersecurity, and compliance, the balancing act is acute. Falter, and businesses run the risk of ruin.

Monitoring the use of artificial intelligence

Since Generative AI was made available for mass use, there have been concerns about proprietary information and other sensitive data falling into outsiders' hands. In fact, a recent survey we conducted of 1,200 IT and security leaders found that exposure of personal information was the second biggest concern among these decision makers. Compliance violations resulting from the exposure of personal data followed closely, while divulging intellectual property also made the top five concerns; interestingly, it was the top concern among Singaporean respondents.

Clearly, the more data employees feed into Generative AI domains, the more likely it is that sensitive data will be exposed. But while the knee-jerk reaction is to prohibit the use of Generative AI altogether, enforcement will quickly hit a wall and give rise to 'shadow AI', which, ironically, makes it even harder to monitor how employees are using Generative AI. Sure enough, the survey found that 73% of decision makers said employees used AI tools frequently or at least sometimes. Yet although 32% of survey participants said their workplaces had banned Generative AI, just 5% said employees in their organizations never used AI tools. If such restrictions were effective, that figure would presumably be much higher.

What this clearly demonstrates is that employees will find ways to use tools that make them more productive. Instead of bans, organizations must strive to foster responsible usage and visibility. Tools that show which users and devices are connecting to Generative AI services, and how much data they are sending to Generative AI domains, are a game-changer: they enable organizations to determine whether sensitive data is at risk of loss or exposure, and give them a mechanism to audit employee compliance with internal Generative AI policies.
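To make the idea concrete, below is a minimal sketch of how such visibility could be derived from existing proxy logs: summing the bytes each user sends to a watchlist of Generative AI domains. The log schema, column names, domain list, filename, and threshold are all illustrative assumptions, not a description of any particular product.

```python
import csv
from collections import defaultdict

# Hypothetical watchlist of Generative AI domains.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def bytes_sent_per_user(log_path: str) -> dict[str, int]:
    """Sum the bytes each user sent to Generative AI domains.

    Assumes a CSV proxy log with 'user', 'dest_host', and
    'bytes_out' columns -- adjust to your actual log schema.
    """
    totals: dict[str, int] = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dest_host"] in GENAI_DOMAINS:
                totals[row["user"]] += int(row["bytes_out"])
    return dict(totals)

if __name__ == "__main__":
    # Flag users whose uploads exceed an example threshold (1 MB).
    for user, sent in bytes_sent_per_user("proxy_log.csv").items():
        if sent > 1_000_000:
            print(f"{user}: {sent} bytes sent to GenAI domains -- review")
```

Even a simple aggregation like this turns an invisible habit into an auditable metric, which is the first step toward enforcing any internal usage policy.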

Upholding data precision and safe use

Ensuring precision in Generative AI output is crucial, as concerns mount about potential inaccuracies and misrepresentations. The technology's ability to replicate human-like content demands vigilant scrutiny to mitigate errors and biases. While 82% of participants in the aforementioned survey said they were at least relatively confident about protecting against AI threats, it is worth remembering that a significant share of those respondents rely on outright bans, which, as discussed, are hardly effective. There is also a likely element of overconfidence here, and we should not lose sight of the fact that Generative AI-related risks are constantly evolving. Businesses must thoroughly test and continuously monitor the tools they deploy to identify potential biases and ensure the model performs reliably across scenarios.
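As one illustration of what such continuous testing can look like, here is a minimal sketch of a regression suite that runs fixed prompts through a model and checks each output against a simple expectation. The `generate` stub, prompts, and checks are all hypothetical; in practice, `generate` would wrap whatever vendor API the organization actually deploys, and the checks would encode domain-specific requirements.

```python
def generate(prompt: str) -> str:
    """Stand-in for a real model call; swap in your vendor's SDK here."""
    return "Our refund policy allows returns within 30 days."

# Each case pairs a prompt with a predicate the output must satisfy.
TEST_CASES = [
    ("Summarize our refund policy.",
     lambda out: "refund" in out.lower()),
    ("List three security risks of public AI tools.",
     lambda out: len(out.splitlines()) >= 3),
]

def run_suite() -> None:
    """Run every prompt through the model and report failures."""
    passed = 0
    for prompt, check in TEST_CASES:
        if check(generate(prompt)):
            passed += 1
        else:
            print(f"FAIL: {prompt!r}")
    print(f"{passed}/{len(TEST_CASES)} checks passed")

if __name__ == "__main__":
    run_suite()
```

Running a suite like this on a schedule, rather than once at rollout, is what catches the drift and regressions that evolving models introduce.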

Meanwhile, it is also critical for businesses to ensure high-quality data, embrace ethical AI practices, and train employees to use Generative AI responsibly. The last point in particular is a hugely important part of AI-readiness, and can be used to create an environment where employees provide feedback and organizations continuously improve their AI capabilities. Collaborating closely with experts in the field will also help immensely, especially with regard to adherence to regulatory standards such as Singapore's proposed Model AI Governance Framework for Generative AI, which emphasizes accountability, transparency, and security in AI systems. Implementing security measures that safeguard AI models and uphold reliability is essential to fostering trust in the AI ecosystem.

Maximizing security to drive innovation

Investing in AI while maintaining robust security requires strategic planning. Organizations should consider establishing a cross-functional task force, with representatives from diverse functions, to explore use cases, weigh the pros and cons, and source training for employees.

In addition, decision makers should take charge and put holistic mitigation policies in place. These could cover what data can and cannot be shared with public Generative AI tools, the circumstances under which AI tools may be used, and how their use should be disclosed to customers.
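To illustrate how one such policy might be enforced rather than merely documented, here is a minimal sketch of a pre-send check that screens outbound text against prohibited patterns before it reaches a public Generative AI tool. The pattern list and decision logic are hypothetical; a real deployment would draw on the organization's own data classification rules.

```python
import re

# Hypothetical patterns a policy might prohibit sharing externally.
BLOCKED_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal marker": re.compile(r"(?i)\bconfidential\b"),
}

def check_before_send(text: str) -> list[str]:
    """Return the names of any policy violations found in `text`."""
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(text)]

if __name__ == "__main__":
    violations = check_before_send(
        "Card 4111 1111 1111 1111, marked CONFIDENTIAL"
    )
    if violations:
        print("Blocked by policy:", ", ".join(violations))
    else:
        print("OK to send")
```

Placing a check like this in the path to external AI services turns a written policy into a control that works even when employees are rushed or unaware.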

Exploration is key here, and can help the organization cover as many bases as possible through low-risk use cases, especially since Generative AI is still in its early days. Monitoring is equally important: keeping track of where and how employees are using these tools will be paramount, so organizations should consider investing in monitoring tools that help them realize the full transformative potential of Generative AI.

Author

Chris Thomas has over 20 years of experience in the cybersecurity industry, with roles in frontline incident response, technical sales engineering, and professional services, including architecture and implementation consulting. As a senior security advisor at ExtraHop, he helps customers and partners across the region realize the benefits of visibility, detection, and response.
