Transformation of the Week
Laura Funderburk

November 8, 2024

Chris Lusk

It's AI Makerspace’s Transformation of the Week. Today I'm joined by Laura Funderburk. Listen to how she went from call center agent to AI engineer, and is now writing a book on Gen AI.

Transcript

Lusk: Laura, thanks for joining me today. And congratulations on being the transformation of the week winner. Tell us a little bit more about your background.

Laura: It’s great to be here. So my background is in mathematics. My first job actually was at a call center. I would make phone calls to raise funds, and that was a very tough job, but it gave me a lot of interpersonal skills, so I was able to build on top of that and later get an IT help desk position.

When I was an IT help desk assistant, I discovered that people were using programming to solve problems in bioinformatics. And that absolutely blew my mind. I went to my first ever hackathon, and I learned to program in Python. I started to get more involved in data science, starting off with automation tasks.

I transitioned from that into data science. So I have a bit of a mixed background, but I would say I was always very eager to get job experience.

Lusk: So what got you interested in Gen AI?

Laura: I heard about Gen AI pretty much at the same time everybody heard about ChatGPT. When OpenAI released ChatGPT, they also released the API entry point. For me, it was absolutely mind-blowing that, in the same way I make API calls to access data, I could make an API call to leverage a large language model. I wanted to know more about it.

And within my job, I was always very eager to set up processes that make things easier for users. So for me, being able to suddenly access a large language model through an API call and then hook it up to a database to answer questions about it was a tremendous moment. I was very, very eager to learn as much as I could about generative AI and how to use these models to extract and generate reports from data.
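A minimal sketch of the pattern Laura describes here: pull rows from a database, then make an API call to a large language model to answer a question about them. This is not her actual system; it assumes the OpenAI Python client with an OPENAI_API_KEY in the environment, and the database file, table, and model name are hypothetical placeholders.

```python
# Rough sketch: query a database, then ask an LLM about the rows.
import sqlite3

from openai import OpenAI

# Read a handful of rows from a local database (hypothetical schema).
conn = sqlite3.connect("sales.db")
rows = conn.execute("SELECT region, revenue FROM sales LIMIT 20").fetchall()
conn.close()

# Call the LLM the same way you'd call any other API from Python.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Here are (region, revenue) rows: {rows}\n"
                   "Which region has the highest revenue, and by how much?",
    }],
)
print(response.choices[0].message.content)
```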

Lusk: Since graduating from the LLM Ops course about a year ago, have you been able to put anything that you learned into practice at work or on personal projects?

Laura: I would say the first example of me using what I learned was when I secured a book deal to write about building NLP pipelines. That book opened up an opportunity for me to apply what I know as part of my job.

So right now I work for Bytewax as a Senior Developer Advocate, and the focus for Bytewax is streaming analytics and real-time applications. When the opportunity came up to work in streaming, they were very interested in bringing in Gen AI, building with LLMs, and mixing that with this idea of streaming. How to build a real-time RAG system is one of those examples; how to fine-tune embedding models in real time is another. So I had an opportunity to bring what I had learned from LLM Ops and that entire year into my job, where my focus is now real-time LLM applications.
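To make the real-time angle concrete, here is a minimal sketch under stated assumptions, not Laura's production pipeline: a Bytewax dataflow that embeds each document as it arrives, which is the ingredient that keeps a RAG index fresh. It assumes Bytewax 0.18+ (the operator-style API) and sentence-transformers; the source, model, and step names are illustrative.

```python
# Rough sketch: embed documents as they stream through a Bytewax dataflow.
import bytewax.operators as op
from bytewax.connectors.stdio import StdOutSink
from bytewax.dataflow import Dataflow
from bytewax.testing import TestingSource
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

flow = Dataflow("streaming_embeddings")
docs = op.input("docs", flow, TestingSource([
    "Bytewax processes streams of data in pure Python.",
    "Newly arrived documents can be embedded on the fly.",
]))

def embed(text: str) -> tuple:
    # In a real system, you would write (text, vector) to a vector store
    # here instead of just reporting the embedding's shape.
    vector = model.encode(text)
    return (text, vector.shape)

embedded = op.map("embed", docs, embed)
op.output("out", embedded, StdOutSink())
# Run with: python -m bytewax.run this_module:flow
```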

Lusk: We always talk about building, shipping and sharing around here and writing a book is the epitome of sharing. Can you tell me a little bit more about the book?

Laura: Absolutely. One of the packages that I absolutely fell in love with when I started to learn about building Gen AI applications was Haystack. I was a little surprised by how the landscape evolved: the dominating packages were, of course, LangChain and LlamaIndex. Back when these LLM applications started to explode, Haystack was definitely not one of those packages. And I was very curious about this, because Haystack, and the deepset team in particular, had been developing packages for this concept of NLP pipelines before we knew these applications by the names RAG or LLMs; the name for them was NLP. Working with LLMs, working with transformers, building RAG applications, this is still part of NLP. This is a subset of what you can do with an NLP team.

My goal in the book is to highlight how you can build some of these robust RAG and LLM-based applications using Haystack and leveraging the Python ecosystem. Now I get to bring in the knowledge that I took from work, and I can actually tell others how they can combine Bytewax with Haystack to build real-time RAG systems.
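A minimal sketch of the kind of Haystack RAG pipeline the book covers, assuming Haystack 2.x and an OPENAI_API_KEY in the environment; the documents, prompt template, and model name are placeholders, and a real-time variant would swap the in-memory store for a streaming source such as Bytewax.

```python
# Rough sketch: a small retrieval-augmented pipeline with Haystack 2.x.
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Index a few documents in an in-memory store (placeholder content).
store = InMemoryDocumentStore()
store.write_documents([
    Document(content="Bytewax is a Python framework for streaming data."),
    Document(content="Haystack composes NLP pipelines from components."),
])

template = """Answer the question using the context below.
Context:
{% for doc in documents %}{{ doc.content }}
{% endfor %}
Question: {{ question }}
Answer:"""

# Wire retriever -> prompt builder -> LLM into one pipeline.
pipeline = Pipeline()
pipeline.add_component("retriever", InMemoryBM25Retriever(document_store=store))
pipeline.add_component("prompt", PromptBuilder(template=template))
pipeline.add_component("llm", OpenAIGenerator(model="gpt-4o-mini"))
pipeline.connect("retriever.documents", "prompt.documents")
pipeline.connect("prompt.prompt", "llm.prompt")

question = "What is Bytewax?"
result = pipeline.run({
    "retriever": {"query": question},
    "prompt": {"question": question},
})
print(result["llm"]["replies"][0])
```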

Lusk: So what advice do you have for someone who is considering moving into an AI engineering role?

Laura: To focus more on engineering, and less on the AI piece. Because at the end of the day, what companies want when it comes to building, shipping, and sharing is shipping something that is maintainable, shipping something that is well-tested, shipping something that is reliable.

No matter how high-quality the AI models you use are, AI will never replace the knowledge of building robust, scalable, and reliable systems. Focus on what software engineers and computer scientists are doing to build robust, scalable, and reliable systems, and then look into how you can bring AI into the picture through API or inference endpoints.

Lusk: That’s great advice, Laura. And that’s why you’re this week’s Transformation of the Week. Where can people follow the progress of your book and stay up to date on the projects you’re working on?

Laura: If you’re curious to follow the progress of my book, the project is being released open source on GitHub. I’ve already published notebooks up to chapter six, and I will continue to add more Jupyter notebooks and scripts. Anything that I write for my work, you can follow on Bytewax.io/blog. I’m always active on both GitHub and LinkedIn, and I'm very happy to connect if you have any questions or if you’re seeking advice.

Thank you so much for having me. I had a blast chatting with you today.