HPE Data Science Workshop, London

Fine-Tuning LLMs While Maintaining Explainability

Generative AI (such as ChatGPT) is taking the industry by storm, and with almost weekly releases of pre-trained foundation models, enterprises can now drive AI outcomes with significantly less initial investment. While the barrier to entry for general-purpose AI has been lowered, research has shown that enterprise use cases can benefit greatly from fine-tuning these models on custom, often proprietary data for specific tasks. Additionally, regulators are reacting to the lack of reproducibility of these models and our limited understanding of how they produce any given response.

In this workshop, we'll walk through a guided tutorial on how your data science team can take a Large Language Model (LLM) and fine-tune it on an industry-specific dataset to drive a specific application, all while maintaining complete reproducibility through data and experiment lineage, scalability to handle the size and complexity of this workflow, and automation to ensure a repeatable, enterprise-grade machine learning platform. There will be interactive discussions with HPE AI experts on deep learning and AI modelling, and the opportunity to share best practices and network with other industry-leading AI experts.

HPE has already helped multiple customers across industries accelerate their adoption of AI. Customers in sectors such as automotive, healthcare and energy are already using Pachyderm – find details here.


Download Joey Zwicker's bio here

Venue Location

South Place Hotel

3 South Place
London
EC2M 2AF

what3words
///bravo.tonic.crowned