Scott Brokaw, Director of Product Management, Data Integration at IBM, gave this presentation at the Generative AI Summit in Boston in October 2023.

Today, I’m going to talk about what IBM has been doing in the artificial intelligence space and how we've been working with some of our clients to think about how we can productize and bring AI into practice for enterprises. 

What I want to focus on specifically is how you establish a foundation of data that can be the competitive differentiator when you look at some of these AI use cases. I also want to talk about some of the principles that we think about when we think about a data platform that can enable AI to be successful.

Foundation models: Embracing the hype

There's no shortage of hype in the market created by foundation models. I think that hype is a really good thing because it's helped us refocus our attention on some of the really cool problems that can be solved with AI. 

From an IBM perspective, we've been thinking a lot about how we can help clients go the next step into implementation and how you combat some of the risks. How do you combat some of the risks around privacy, IP, bias, and explainability?

How can we help clients get more confidence in terms of being able to provide value to their clients and have both internal and external use cases?

When we look at some general market statistics, there's a survey that shows that 80% of enterprise leaders are already doing something with GenAI. 

There are different phases of that, like prototyping and ideation, and some companies are already in production with certain use cases. But it just goes to show how real this hype has become. A lot of companies are starting to act on and experiment with what they can do with generative AI.

A lot of enterprises are starting to realize that this is something they need to do because if they don't, they'll be left behind.

So, finding ways to be able to get automation value out of the process is really important. 

Boston Consulting Group projects that about a third of the AI market will be generative, and I think that's really telling about where we are as an industry.

A lot of enterprises are finding generative AI an easier way to get started with AI projects versus some of the traditional AI methods like machine learning, where you need labor-intensive labeled data, lots of training on particular datasets, etc.

Generative AI is a really good way for lots of clients to dip their toes in, start to experiment, and find some quick wins and quick value points.

But that doesn't mean it's the only type of AI. Two-thirds of the AI market is still going to be traditional ML and AI. So, we want to think about how we build a platform that allows us to take advantage of some of the things in generative AI, but also capture some of the upsides of traditional AI as well.

How generative AI is revolutionizing the future of work

These are some of the most common use cases that we've seen when working with clients, especially in the generative AI realm. A lot of this is around natural language processing (NLP), thinking about how to do summarization, content generation, and how to extract entities from particular types of documents. 

One interesting pattern is retrieval-augmented generation (RAG), which is the ability to give the model insights that you have on your private data, whether that be more up-to-date information than what the model was actually trained for, or trying to focus the model in a particular direction. 

This is a really nice pattern to start to bring together the data that you're curating with what a model can ultimately do from a natural language processing perspective. It’s a really common technique that starts to allow you to increase the applicability of these models without necessarily having to go to the next step of fine-tuning and training. 

We don't believe that fine-tuning and training are actually that far out of reach, but RAG is a good way for you to start taking incremental steps toward increasing the validity and the freshness of what the model is reacting to.
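To make the RAG pattern concrete, here's a minimal sketch of the idea: retrieve the most relevant private documents for a query, then prepend them to the prompt so the model answers from fresh, curated data rather than only what it was trained on. Everything here is illustrative — the documents are made up, and the term-frequency "embedding" is a toy stand-in for a real embedding model and vector store.

```python
from collections import Counter
import math

# Hypothetical private knowledge base -- in practice this would be your
# curated enterprise data, fresher than the model's training cut-off.
DOCUMENTS = [
    "Product X added SSO support in the 2024.1 release.",
    "To reset a password, visit the account settings page.",
    "Product X pricing changed to per-seat licensing in March.",
]

def bag_of_words(text: str) -> Counter:
    """Toy stand-in for an embedding model: term-frequency vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bag_of_words(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, bag_of_words(d)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved private context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("When did Product X get SSO support?")
```

The resulting `prompt` would then be sent to the language model; because the relevant document is included inline, the model can answer from your data without any fine-tuning.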

With a lot of these new technologies, IBM tries to be client zero for ourselves. So, before we put any sort of technology in the world, we try to use this stuff internally. 

I'll give you a couple of examples here.

IBM is a really big software company. We have a whole bunch of software products and a whole bunch of enterprise clients that work with us to report different technical issues.

Based on that ticket volume, we've been able to start analyzing which common questions we don't have documentation for, and whether generative AI can then help us proactively generate that documentation.

So, it starts to enable our documentation writers with superpowers because they can start to pump out meaningful documentation that we know is relevant in terms of open caseloads. 
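The gap analysis behind this could look something like the following sketch: rank ticket topics by open caseload, subtract the topics documentation already covers, and hand writers a prioritized list of what to draft. The ticket topics and documented set here are hypothetical placeholders, not IBM's actual pipeline.

```python
from collections import Counter

# Hypothetical ticket stream: each support ticket tagged with the topic asked about.
tickets = ["sso setup", "sso setup", "password reset",
           "api limits", "api limits", "api limits"]

# Topics our documentation already covers (hypothetical).
documented = {"password reset"}

# Rank undocumented topics by caseload so writers know what to draft first;
# each of these could then seed a generative-AI documentation draft.
gaps = Counter(t for t in tickets if t not in documented)
priorities = [topic for topic, _ in gaps.most_common()]
```

Here `priorities` puts the highest-volume undocumented topic first, so the generated documentation targets what clients are actually asking about.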

It also allows us to do cool things with the search and assistants that we put in front of clients, giving them quicker access to the repository of information we have across our documentation and known-defect repositories. Clients can interact in a chat-like style with an assistant that has expertise across that entire knowledge base.

It starts to improve our overall ability to deliver support to our clients. We've already seen really good improvements in terms of our NPS scores because of it.

There are some other really cool use cases around HR and internal processes. We've created a bot called AskHR, and what's really interesting about it is that it doesn't just answer questions by looking across the different IBM policies; it actually gives you insight into the question you're asking and then provides action and automation.

You're not going into a different tool. With AskHR, it's almost like I'm talking to my HR representative. I'm able to initiate a promotion of an employee, increase an employee's salary, and transfer an employee to another team, all within that one tool and experience.

It starts to bring together the idea of this assistant concept, where you can start to get question-and-answer responses and tie them with actual automation tasks that drive productive value in the business. That's an incredibly powerful experience because it allows us to be able to scale our HR organization in a way that we couldn't do otherwise. 

Now, the de facto response if an employee has a question is to go ask the HR bot because typically, it'll have a quicker, more reliable answer than some of our HR representatives.

