Jensen Huang and Frank Slootman – the CEOs of NVIDIA and Snowflake, respectively – took the stage together at Snowflake Summit 2023 to announce a new partnership that will add NVIDIA GPU acceleration to the Snowflake Data Cloud’s growing list of capabilities.
Access to GPUs within Snowflake will allow organizations to build and harness AI, machine learning (ML), and large language models (LLMs) right within Snowflake.
Running these applications within Snowflake will provide significant performance and security benefits for organizations that aim to build custom AI and LLM applications using proprietary enterprise data.
In this blog, we’ll highlight the new capabilities NVIDIA is bringing to Snowflake, the incredible potential of bringing AI to your data, and opportunities for enterprise teams to drive their business forward with AI apps.
What Capabilities Will NVIDIA Bring to Snowflake?
NVIDIA is a long-time pioneer and market leader in GPU-accelerated hardware. GPUs (graphics processing units) were initially developed to accelerate digital graphics in gaming and media but later found an important application in machine learning and deep learning.
Training a machine learning model on a GPU can be 2-10x faster than on a traditional CPU, and often cheaper. Today’s most cutting-edge Generative AI and LLM applications are all trained using large clusters of GPU-accelerated hardware.
The partnership between Snowflake and NVIDIA will make GPUs available within Snowflake. Customers will be able to run AI applications on Snowpark using GPU-accelerated warehouses.
The launch of Snowpark Container Services will also provide a runtime for containerized applications within Snowflake. At Snowflake Summit, NVIDIA is demonstrating its NeMo platform to showcase the power of these new capabilities within Snowflake.
What are the Benefits of Using NVIDIA GPUs in Snowflake?
As a data platform, one of Snowflake’s core differentiators is the concept of bringing the work to the data. Snowflake’s SQL engine and Snowpark already accelerate workloads by distributing code to scalable compute instances in the form of Warehouses.
Now, GPU acceleration from NVIDIA allows organizations to expedite those applications even further, particularly for AI applications that process unstructured data like photos and text.
Beyond performance, GPUs in Snowflake will allow organizations to build and run AI applications within their Snowflake security perimeter. Since AI learns from data, an enterprise’s proprietary data becomes a vital asset that organizations must protect from competitors and potential disruptors.
Organizations already trust Snowflake to secure and govern their most sensitive enterprise data, and GPUs will allow them to leverage AI without sending data across the public internet or into less secure cloud environments and third-party processors.
What Does This Mean for Enterprise Data Science and ML Teams?
GPUs in Snowflake allow Data Science and ML teams to innovate and build rich AI applications on enterprise data. Here are just a few ways organizations could get started on building their next AI application.
- Run an open-source LLM on Snowflake to process or index internal datasets. Running your own copy of an open-source pre-trained LLM can sidestep the security and intellectual property risks associated with using a third-party service like OpenAI.
- Fine-tune models on enterprise data to build purpose-built AI for specific applications. Large Language Models with 10B or more parameters are very powerful for general-purpose tasks like conversational AI but can be very costly to run at scale. Smaller models have been demonstrated to perform exceptionally well for specific targeted applications.
- Distill large foundation models to a smaller footprint on enterprise data and use cases. Distillation is a technique that uses a large language model and specific (potentially proprietary) prompts to build datasets that can train smaller models. The resultant models are often more efficient and cost-effective to run at scale and often perform nearly as well as the large foundation models.
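The distillation workflow described above can be sketched in miniature. This is a hypothetical illustration, not a Snowflake or NVIDIA API: the `teacher_label` stub stands in for an expensive large-model call, and the word-frequency `StudentClassifier` stands in for a small fine-tuned model. All names and the sample tickets are invented for illustration.

```python
# Hypothetical sketch of distillation: a large "teacher" model labels
# domain-specific prompts, and the resulting dataset trains a much
# smaller "student" model that is cheaper to run at scale.
from collections import Counter

def teacher_label(ticket: str) -> str:
    """Stand-in for a large foundation model routing a support ticket.
    In practice this would be a (potentially costly) LLM call."""
    text = ticket.lower()
    if "invoice" in text or "charge" in text:
        return "billing"
    if "password" in text or "login" in text:
        return "access"
    return "general"

def build_distillation_dataset(tickets):
    """Use the teacher to produce (input, label) training pairs."""
    return [(t, teacher_label(t)) for t in tickets]

class StudentClassifier:
    """A tiny word-frequency model trained only on the teacher's output."""
    def __init__(self):
        self.word_counts = {}  # label -> Counter of words seen under it

    def train(self, dataset):
        for text, label in dataset:
            counts = self.word_counts.setdefault(label, Counter())
            counts.update(text.lower().split())

    def predict(self, text: str) -> str:
        words = text.lower().split()
        # Pick the label whose training vocabulary best overlaps the input.
        def score(label):
            counts = self.word_counts[label]
            return sum(counts[w] for w in words)
        return max(self.word_counts, key=score)

# Proprietary prompts (illustrative) are labeled by the teacher...
tickets = [
    "I was double charged on my last invoice",
    "Please reset my password, I cannot login",
    "Where can I find product documentation?",
]
dataset = build_distillation_dataset(tickets)

# ...and the teacher-labeled dataset trains the small student model.
student = StudentClassifier()
student.train(dataset)
print(student.predict("question about an invoice charge"))  # -> billing
```

A production version would swap the stub for real LLM calls over proprietary prompts and train a compact neural model, but the shape of the pipeline, teacher labels in, small student out, is the same.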
What Does This Mean for Application Developers?
Snowflake’s Native App Framework is rapidly maturing into an ideal environment for building data applications. Many AI applications are well suited to the enterprise but need to be customized with enterprise data.
Combined with Snowpark and the Native App Framework, Snowflake’s Marketplace, Data Sharing, and Data Clean Rooms help developers build and distribute applications without the headaches of traditional third-party data processing pipelines.
What Does This Mean for You and Your Business?
Generative AI is already transforming the way businesses operate and innovate. By harnessing your enterprise data with modern technologies like LLMs, the possibilities are truly endless.
Are you looking to take the first steps toward enterprise AI? Wherever you are in your AI journey, phData is here to help. Contact us with questions, or sign up for one of our free Generative AI Workshops!