
ServiceNow Partners with NVIDIA to Develop Enterprise-Grade GenAI

Image Credit: ServiceNow

ServiceNow yesterday announced an expansion of its partnership with NVIDIA to advance the use of enterprise-grade generative AI (GenAI). ServiceNow is one of the first platform providers to access NVIDIA NIM inference microservices, enabling faster, more scalable, and more cost-effective large language model (LLM) development and deployment.

Announced today by NVIDIA at GTC, NIM inference microservices are part of a new suite of enterprise-grade GenAI microservices built to optimize LLM inference. ServiceNow is using NIM to serve its Now LLMs – domain-specific LLMs that power capabilities within Now Assist, ServiceNow’s generative AI experience. The NIM-deployed Now LLMs will allow ServiceNow customers to scale generative AI across new use cases.
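For context on how an application talks to a NIM-served model: NIM microservices expose an OpenAI-compatible API, so a client can send standard chat-completion requests to the deployed LLM. The sketch below is illustrative only; the endpoint URL and model identifier are placeholder assumptions, not details of ServiceNow's actual Now LLM deployment.

# Illustrative sketch: calling an LLM behind a NIM-style, OpenAI-compatible endpoint.
# The base_url and model name are placeholders, not ServiceNow's Now LLM configuration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # hypothetical local NIM microservice endpoint
    api_key="not-needed-for-local-nim",    # local NIM deployments typically ignore the key
)

response = client.chat.completions.create(
    model="example/now-llm",               # placeholder model identifier
    messages=[
        {"role": "user", "content": "Summarize this incident ticket in two sentences."}
    ],
    max_tokens=128,
)

print(response.choices[0].message.content)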

NVIDIA and ServiceNow announced their initial partnership to develop powerful enterprise-grade generative AI capabilities in May 2023. Since then, the companies have launched programs such as AI Lighthouse to fast-track the development and adoption of GenAI; delivered new GenAI-powered industry innovations like Now Assist for Telecommunications Service Management (TSM); and collaborated with technology leaders like Hugging Face and the BigCode community on StarCoder2, a family of open-access LLMs for code generation that sets new standards for performance, transparency, and cost-effectiveness. NVIDIA also uses ServiceNow Now Assist to streamline its IT operations and improve employee experience with conversational capabilities, empowering employees to self-solve.

ServiceNow continues to expand its generative AI portfolio so enterprises can bring the power of GenAI to any department, scale it to other parts of the business quickly, and accelerate value from AI spend. Internal GenAI use cases at ServiceNow are already delivering cost savings and productivity gains. For example, within the first 120 days of using Now Assist, ServiceNow projects annual savings of millions of dollars from increased case deflection through improved self-service, and it is seeing a 54% incident deflection rate with GenAI for employee issues. ServiceNow has also seen a 48% code-generation acceptance rate internally with GenAI on the ServiceNow platform.

NVIDIA NIM inference microservices are already integrated into the Now LLMs and available to all ServiceNow customers with Now Assist installed.

ServiceNow Chairman and CEO Bill McDermott

ServiceNow and NVIDIA are building a future where businesses can break through every barrier. GenAI is unlocking a new era of growth, completely reimagining digital experiences at scale. This is a once-in-a-generation opportunity, with ServiceNow and NVIDIA fueling technology breakthroughs.

Jensen Huang, founder and CEO of NVIDIA

Generative AI is driving a transformative leap that is shaping the future of technology and business. Together, NVIDIA and ServiceNow are helping enterprises everywhere embrace generative AI within the platforms they use to serve customers, manage employees, enhance their operations, and transform their industries.

Author

Principal Analyst and Senior Editor | IP Networks

Ariana specializes in IP networking, covering both operator networks (core, transport, edge, and access) and enterprise and cloud networks. Her work involves analysis of cutting-edge technologies that drive application visibility, traffic awareness, network optimization, network security, virtualization, and cloud-native architectures.

She can be reached at ariana.lynn@thefastmode.com
