Generative AI took the consumer landscape by storm in 2023, and in 2024 enterprises are expanding use cases and transitioning workloads from early experimentation into production. Common building blocks, architectures, and testing patterns are being developed and formalized (e.g., by the LF AI & Data Foundation with their Open Platform for Enterprise AI, or OPEA).
Leaving this workshop, participants will be equipped with the processes and knowledge to architect and build enterprise AI applications. They will learn about and get hands-on with building blocks for state-of-the-art generative AI systems, including LLMs, data stores, and prompt engines. They will also implement end-to-end workflows following the most impactful industry blueprints for retrieval-augmented generation, summarization, multimodal chat, and coding assistance. Finally, participants will learn what it takes to evaluate generative AI systems in terms of performance, features, trustworthiness, and enterprise-grade readiness.
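As a rough illustration of the retrieval-augmented generation pattern mentioned above, the sketch below wires together the basic building blocks (a document store, a retriever, and a prompt template) using only the Python standard library. It is a minimal sketch, not the workshop's material: the overlap-based scoring function and the `generate` stub are hypothetical stand-ins for whatever vector store and LLM endpoint a production system would use.

```python
# Minimal RAG sketch: retrieve relevant context, then build a grounded prompt.
# The overlap-based retriever and the generate() stub are illustrative stand-ins
# for a real vector store and LLM API.
from collections import Counter

DOCUMENTS = [
    "OPEA provides reference blueprints for enterprise generative AI workloads.",
    "Retrieval-augmented generation grounds LLM answers in retrieved documents.",
    "Prompt engines template user questions together with retrieved context.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase tokens."""
    q_tokens = Counter(query.lower().split())
    d_tokens = Counter(doc.lower().split())
    return sum((q_tokens & d_tokens).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    ranked = sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a grounded prompt from retrieved context and the user question."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}\nAnswer:"
    )

def generate(prompt: str) -> str:
    """Placeholder for a call to an LLM endpoint."""
    return f"[LLM response to a {len(prompt)}-character prompt]"

if __name__ == "__main__":
    question = "What is retrieval-augmented generation?"
    context = retrieve(question)
    print(generate(build_prompt(question, context)))
```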
Speaker
Daniel Whitenack
Founder & Data Scientist @Prediction Guard, Co-Host of the Practical AI podcast, Previously Built Data Teams at Two Startups and an International NGO
Daniel Whitenack (aka Data Dan) is a Ph.D.-trained data scientist and founder of Prediction Guard. He has more than twelve years of experience developing and deploying machine learning models at scale, and he has built data teams at two startups and an international NGO with 4000+ staff. Daniel co-hosts the Practical AI podcast, has spoken at conferences around the world (QCon, ODSC, Applied Machine Learning Days, O’Reilly AI, GopherCon, KubeCon, and more), and occasionally teaches data science/analytics at Purdue University.