GenAI for Productivity

At Wealthsimple, we leverage Generative AI internally to improve operational efficiency and streamline monotonous tasks. Our GenAI stack is a blend of tools we developed in-house and third-party solutions.

Roughly half of the company utilizes these tools in their day-to-day work. This talk will cover the tools we use, the lessons we have learned, and how user behavior shapes the intersection between LLMs and productivity.

Interview:

What is the focus of your work?

These days, most of my time goes into driving strategy for our ML Engineering and Data Engineering teams: how do we evolve these platforms to further democratize access to data and abstract the engineering complexities behind productionizing new AI/ML products?

What’s the motivation for your talk?

User behavior is an important aspect that is often overlooked when examining the intersection between GenAI and productivity. Over the past year, we have launched several new tools and learned many important lessons along the way. I would love to share these insights more broadly.

Who is your talk for?

Anyone who supports the rollout or strategy of GenAI tools (engineering leaders, project managers, etc.)

Is there anything specific that you'd like people to walk away with after attending your session?

The main takeaways I want attendees to walk away with are:

  • The role user behavior plays in the change management process (for GenAI specifically)
  • Some of the ways we have been effectively leveraging LLMs and multi-stage retrieval systems to drive productivity internally (a minimal sketch of such a pipeline follows this list)
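
As a rough illustration of what a multi-stage retrieval pipeline can look like (not a description of Wealthsimple's actual implementation), here is a minimal Python sketch: a cheap first stage narrows a document corpus by embedding similarity, a more expensive second stage reranks the survivors, and the winners are assembled into a grounded prompt for an LLM. All function names and scoring heuristics below are hypothetical placeholders.

# A minimal, hypothetical sketch of a multi-stage retrieval pipeline feeding an LLM.
# Function names (embed, retrieve_candidates, rerank, build_prompt) and the toy
# scoring logic are illustrative stand-ins, not Wealthsimple's internal APIs.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str
    embedding: list[float]


def embed(text: str) -> list[float]:
    # Placeholder: in practice this would call an embedding model.
    vec = [float(ord(c) % 7) for c in text[:16]]
    return vec + [0.0] * (16 - len(vec))


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def retrieve_candidates(query: str, corpus: list[Document], k: int = 20) -> list[Document]:
    # Stage 1: cheap vector similarity narrows the corpus to k candidates.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, d.embedding), reverse=True)[:k]


def rerank(query: str, candidates: list[Document], k: int = 5) -> list[Document]:
    # Stage 2: a more expensive scorer (e.g. a cross-encoder) reorders the candidates.
    # Here simple term overlap stands in for a learned reranker.
    q_terms = set(query.lower().split())

    def score(doc: Document) -> float:
        return len(q_terms & set(doc.text.lower().split())) / (len(q_terms) or 1)

    return sorted(candidates, key=score, reverse=True)[:k]


def build_prompt(query: str, corpus: list[Document]) -> str:
    # Stage 3: assemble the top documents into a grounded prompt for the LLM.
    top_docs = rerank(query, retrieve_candidates(query, corpus))
    context = "\n\n".join(d.text for d in top_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    corpus = [
        Document("faq-1", "Reset your password through the internal identity portal.", embed("reset password portal")),
        Document("faq-2", "Expense reports are submitted through the finance tool.", embed("expense report finance")),
    ]
    # The returned prompt would be passed to whichever LLM the team uses.
    print(build_prompt("How do I submit an expense report?", corpus))

The design intuition behind staging is the same at any scale: spend cheap compute broadly in early stages and expensive compute narrowly in later ones.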

What do you think is the next big disruption in software?

Edge computing will be the key to commoditizing Generative AI by unlocking smaller models that run on our mobile devices.


Speaker

Mandy Gu

Senior Software Development Manager @Wealthsimple

Mandy is a Senior Software Development Manager at Wealthsimple, where she leads Machine Learning & Data Engineering. These teams provide a simple and reliable platform that empowers the rest of the company to iterate quickly on machine learning applications and GenAI tools, and to leverage data assets to make better decisions. Previously, Mandy worked in the NLP space and as a data scientist.


Date

Tuesday Nov 19 / 11:45AM PST (50 minutes)

Location

Ballroom BC

Topics

Generative AI, Tooling, LLMs


From the same track

Session AI/ML

LLM Powered Search Recommendations and Growth Strategy

Tuesday Nov 19 / 02:45PM PST

This session explores how Large Language Models (LLMs) can enhance search recommendation systems, with a technical deep dive into developing, fine-tuning, and deploying these models.


Faye Zhang

Staff Software Engineer @Pinterest, Lead on GenAI Search Traffic Projects, Speaker, Expert in AI/ML with a Strong Background in Full-Stack Development

Session LLMOps

Navigating LLM Deployment: Tips, Tricks, and Techniques

Tuesday Nov 19 / 01:35PM PST

Self-hosted Language Models are going to power the next generation of applications in critical industries like financial services, healthcare, and defense.


Meryem Arik

Co-Founder @TitanML, Recognized as a Technology Leader in Forbes 30 Under 30, Recovering Physicist

Session AI/ML

10 Reasons Your Multi-Agent Workflows Fail and What You Can Do About It

Tuesday Nov 19 / 03:55PM PST

Multi-agent systems – a setup where multiple agents (generative AI models with access to tools) collaborate to solve complex tasks – are an emerging paradigm for building applications.


Victor Dibia

Principal Research Software Engineer @Microsoft Research, Core Contributor to AutoGen, Author of "Multi-Agent Systems with AutoGen" book. Previously @Cloudera, @IBMResearch

Session Machine Learning

A Framework for Building Micro Metrics for LLM System Evaluation

Tuesday Nov 19 / 05:05PM PST

LLM accuracy is a challenging topic to address and is far more multi-dimensional than a single accuracy score. In this talk we'll dive deeper into how to measure LLM-related metrics, going through examples, case studies, and techniques that go beyond a single accuracy score.


Denys Linkov

Head of ML @Voiceflow, LinkedIn Learning Instructor, ML Advisor and Instructor, Previously @LinkedIn

Session

Scaling Large Language Model Serving Infrastructure at Meta

Tuesday Nov 19 / 10:35AM PST

Running LLMs requires significant computational power, which scales with model size and context length. We will discuss strategies for fitting models to various hardware configurations and share techniques for optimizing inference latency and throughput at Meta.


Charlotte Qi

Senior Staff Engineer @Meta