Generative AI Meets Responsible AI: Explainability in the Age of Generative AI

AI has advanced rapidly over the past decade and is only accelerating in its evolution. George Mathew, Managing Director, Insight Partners, discusses the progress we’ve made and where we’re heading.

Watch this session on Explainability in the Age of Generative AI to learn:

  • How generative AI has opened up new modalities of human and machine interaction
  • Examples of generative AI advancements and milestones
  • Challenges society will face as AI usage spreads and improves
Video transcript

Mary Reagan (00:07):

I'd like to introduce our next speaker, George Mathew. George is a Managing Director at Insight Partners, which specializes in venture-stage investments in AI, ML, analytics, and data companies. With 20 years of experience, he's held leadership roles at Kespry, Alteryx, SAP, and Salesforce, where he drove strategy and team development. George has a BS in neurobiology from Cornell and an MBA from Duke University. And with that, we'll get George on as soon as we can see his video. If you could turn your video on, George... almost.

George Mathew (01:00):

You should be able to hear me and see me now.

Mary Reagan (01:01):

I sure can, yes. Welcome, George. With that, please take it away.

George Mathew (01:06):

Oh, great. Thank you again, and thank you to Krishna and the team for the opportunity to chat with everyone about where things are continuing to evolve in a fast and furious way in all things related to generative AI. For today's talk, I thought I would set a little bit of groundwork on where the current focus around generative AI has gone, and what got us here. What are the underpinnings of generative AI in particular that have gotten it to the level of scale we're seeing today in the market? And in that context, what I want to explain through this conversation is what we have to think about as practitioners, as investors, and as builders in this space. And so Mary, can you see my screen okay right now? Just wanted to check on that as far as sharing.

Mary Reagan (02:08):

Sorry, I'm muted. Yes, I can. Looks great.

George Mathew (02:10):

Okay, perfect. So in this three-part talk, the first part of the discussion is about what is underlying the scale of the generative models that have emerged, and why it's happening in the present moment. I want to lay out a few examples for the audience to indicate some of the key innovations and the key progress we're making, particularly when it comes to the use of generative AI. First and foremost, we started to see generative AI create modalities that went beyond what we've historically known as text and numeric analysis. The example I want to highlight here is something that came out of Google Duplex a few years ago, which a few people may have seen already: interaction between humans and machines at scale. This particular example is a reservation being made at a Chinese restaurant.

Restaurant: Hi, how may I help you?

Google Duplex AI: Hi, I'd like to reserve a table for Wednesday the seventh.

Restaurant: For seven people.

Google Duplex AI: It's for four people.

Restaurant: Four people? When? Today? Tonight?

Google Duplex AI: Wednesday at 6:00 PM.

Restaurant: Oh, actually we reserve for upwards of five people. For four people, you can come.

Google Duplex AI: How long is the wait usually to be seated?

Restaurant: For when? Tomorrow or weekday or

Google Duplex AI: For next Wednesday the seventh.

Restaurant: Oh, no, it's not too busy. You, you can come for four people. Okay.

Google Duplex AI: Oh, I gotcha. Thanks.

Restaurant: Bye-Bye.

George Mathew (04:08):

Okay, and so in this interaction, some pretty interesting things happened. One was a little bit of confusion about when you can get that reservation, but two was just how naturally the conversation flowed. And this was the real genius behind Google Duplex at that time, back in the 2018 timeframe: the introduction of a generative model that could create a human voice with the right tonality and the right set of ums and ahs, making what would be a typical human-machine interaction less robotic and more human in nature. It was the opening salvo of what these interactive voice applications could look like, particularly at scale. And so now we're seeing not only Google Duplex, but other interactive forms of content emerging between humans and machines, largely interacting on an iterative dialogue basis, where it's almost impossible to tell the difference between the human and the machine in the interaction.

And this plays out in what I believe is happening in this space. There are a lot of questions about whether machines are replacing humans at scale, and I really don't believe that is the case. I think what we're seeing here is what I'll get to in a moment in the conversation: a man-machine symbiosis. Part of this man-machine symbiosis was seen early on, back in 2016, when the DeepMind team that devised AlphaGo, working on an algorithmic method to play Go at scale, introduced it to the market by testing Go players against it. It had never been conceived that Go as a game could be cracked in a way where an algorithmic set of methods could beat a human.

And why is that? Because there was so much specificity in the topology of a Go board at any given moment of the game, in how a player moved into a portion of the board, that there wasn't a brute-force computational way to address all the permutations and combinations supporting a complex Go game. Well, it turns out that the neural-network-based approach used to build the model was so well built from the start that Go players didn't take seriously the ability of a machine to beat a human at Go. Lee Sedol, the number-three player in the world, went up against AlphaGo back in 2016 in Seoul, South Korea. And he took the first game so casually that he didn't even imagine AlphaGo could beat him.

(07:26):

It turned out it did. In the second game, he came back and got more serious about how he played the machine. And in this match, on the 37th move, AlphaGo made a move so unusual that the contestant, as well as the referees, thought the algorithm had broken, that it had made such a strange move it must have been a mistake. Well, when Lee Sedol took a break, and the commentators and contestants looked at what had happened on the Go board, it turned out that this unusual move was generatively built by the algorithm in a way that had never been played in the long history of humans playing Go. And so Lee himself was shocked. He said he was speechless, because he had to admit that his ability to even understand the game had been transcended, with AlphaGo creating a maneuver in Go that had never been seen in human history.

And so I think we're at this moment where less and less of this is going to be about humans being pitted against machines, as in the AlphaGo example, and more and more about algorithms and humans working side by side to solve problems. That is the likely way we proceed into the future. Another, more recent example is a series of celebrities that look oddly familiar. It turns out that none of these celebrities have ever existed. You can take composites of, say, images off of Instagram, and train a generative adversarial network to produce the output we see here: images that are vaguely familiar and almost semi-recognizable as celebrities on the internet.

But in reality, these were all generatively created by a model trained on data, in this case about a thousand images of Instagram celebrities, and it is now generatively creating these synthetic faces. And the technology here was done about two years ago. If you look at where Midjourney is today, or where Stable Diffusion has gone in the last few months, the generative capabilities of these models have improved profoundly, even in the last three or four months. So what I view this as, ultimately, is an opportunity to unleash human creativity at scale. And on my own personal journey with these models, I started to look at what DALL-E 2, Stability, and Midjourney can actually do.

(10:42):

And so here are some recent co-created efforts between me and a generative model. Imagining "a future city on Mars after terraforming photorealistically built." "A painting in impressionist style of lemons raining on the Amalfi coast." These creative possibilities are quite extraordinary as we continue to see what's possible in this space. And the more these models work alongside humans, the more opportunity, the more democratization at scale is happening as we speak. This last one is "machines at the edge of space streaming of the future": quite profound imagery, created simply because a human like myself could be thinking about an idea and now has a copilot, a machine, to help generate these images.

And so a lot of what's happening is underpinned, as I mentioned earlier, by a set of foundational things that have built up in this space for almost a decade. Why is this all coming about now? One of the big things I want to indicate is that this is not necessarily, as some people might imagine, a march to AGI, to artificial general intelligence. I think this is much more profoundly important in terms of the possibilities for narrow AI: being able to automate traditional human tasks and activities, like someone creating a poem or someone creating an image, and just turbocharging the possibilities in a way we haven't seen before.

I would not have categorized myself as a person who is creative by nature. But now my creativity can be unleashed at scale, using a generative model to support the way I'm thinking about, say, a visual piece of imagery that I would have had a hard time creating before, as opposed to it now being available at my own fingertips. The underpinnings of all this are still based on things we know quite well, but also things we don't quite fully understand. One part of this is that we are now seeing the use of unsupervised deep learning models at scale, the use of transformers. It's still based on this idea that deep learning models can create a cascade of multiple layers of non-linear processing units for extracting features and transforming them.

(13:39):

So in the case of facial recognition, you can extract all the local patterns, you can build the facial features up to identify ears from noses from eyes, and then effectively create the supervised or unsupervised output, the pattern analysis that emerges, so that those multiple representations correspond to different levels of abstraction, forming a full hierarchy of concepts where a face can be understood as a face. Now, the set of techniques we've seen so far has been pretty useful, but it's not quite understood why they work so well. So in my example here of a specific type of dog, those in the audience who are pet lovers, and dog lovers in particular, will immediately recognize that it's a French bulldog, and it so happens to be a purple French bulldog.
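To make that layered feature-extraction idea concrete, here is a minimal sketch in PyTorch, an illustration of the concept rather than anything from the talk, of a network whose stacked non-linear layers move from local patterns to parts to whole-object concepts:

```python
# A minimal sketch of stacked non-linear processing layers, each
# extracting higher-level features than the last:
# edges -> facial parts -> whole faces.
import torch
import torch.nn as nn

feature_hierarchy = nn.Sequential(
    # Early layers respond to local patterns: edges, corners, textures.
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    # Middle layers combine those into parts: eyes, noses, ears.
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    # Late layers combine parts into whole-object concepts: a face.
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 2),  # e.g. face / not-a-face
)

x = torch.randn(1, 3, 128, 128)    # one 128x128 RGB image
print(feature_hierarchy(x).shape)  # torch.Size([1, 2])
```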

Now, I can guarantee you that no one in this audience has ever seen a purple French bulldog before. But because our brains are tuned to drive these effective information bottlenecks, you can use the precision and recall that exist in our own brains to come back and say: I've seen the color purple, and I've also seen a French bulldog in the past, and therefore when George shows me the picture of a French bulldog that's purple, I'm able to instantaneously recognize it. In a lot of ways, the information bottlenecks surrounding these deep learning models are what enable machines to simulate how humans work at scale. Part of what underlies this now is the fact that we have these transformer-based models, these effectively unsupervised deep learning models that are now emerging at scale, and they are profound game changers in terms of how machines understand the contextual element of language today.

And it turns out there's a fundamental difference between a transformer model and everything else that has historically been available to us for text-based analysis, up to this moment in the practice of natural language processing. If you use a keyword search in Google, it doesn't really differentiate between "shark eating a person" and "a person eating a shark." But the ordering, the sequencing of the language here matters. The attention paid to how the semantics of those words are ordered matters quite a bit. It turns out this is exactly how transformers work: transformers use attention-based sequencing to understand that the context of "a shark eating a person" versus "a person eating a shark" is actually quite different.
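A toy sketch can show this word-order point. Self-attention on its own is permutation-equivariant, so a pooled sentence representation ignores word order; adding positional encodings, as transformers do, breaks the tie. Everything below (embeddings, weights, sentences) is invented for illustration:

```python
# Toy demo: without position information, a sentence and its reversal
# look identical to pooled self-attention; with positions they differ.
import numpy as np

rng = np.random.default_rng(0)
d = 8
vocab = {"a": 0, "shark": 1, "eating": 2, "person": 3}
emb = rng.normal(size=(4, d))   # toy word embeddings
pos = rng.normal(size=(8, d))   # toy positional encodings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attend(words, use_positions):
    x = emb[[vocab[w] for w in words]]
    if use_positions:
        x = x + pos[: len(words)]          # transformers add position info
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    out = softmax(q @ k.T / np.sqrt(d)) @ v  # single self-attention head
    return out.mean(axis=0)                  # pool to one sentence vector

a = ["a", "shark", "eating", "a", "person"]
b = ["a", "person", "eating", "a", "shark"]
print(np.allclose(attend(a, False), attend(b, False)))  # True: order ignored
print(np.allclose(attend(a, True), attend(b, True)))    # False: order matters
```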

(16:42):

And now what we're seeing is the scale of what's possible with these transformer-based models, not only on text, but multimodally: going into vision, going into acoustic waveforms, going into other modalities where language and other modalities are analyzed and parameterized at scale. As of this past week, we've now seen GPT-4-style large language models emerge, with as many as trillions of parameters now in market. And many of those are competing against each other to build this next generation of, call it GPT-4-style, large language models. There's a lot of investment that's gone into the space, whether it be of course OpenAI, which everyone knows about, but also Cohere and Anthropic and AI21 and Character.AI, and a number of key players that are now building these LLMs at scale.

Now we're reaching a moment where these LLMs are going to be pervasively applied to all kinds of domains and all kinds of use cases. And this is the moment where I get into what I think our responsibilities are, what our moral obligation is, particularly as these models get to scale. Leaning back on some data I collected in the past around the use of models, even simpler models, over the last two decades or so: this is data showing loan acceptance rates for refinancing across multiple ethnicities and races in the US, across various cities and suburban municipalities. In this particular example, what's very clear is that one of the lenders historically known for providing large amounts of refinancing and loans in general, because of the way those models were built, and because the model drift and the necessary improvement of those models over time were never understood, continued for decades on end approving and denying applications from US citizens and naturalized immigrants applying for home mortgages and refinance events.

Well, it turns out that these same models, because there wasn't that explainability, because there wasn't that understanding of bias, historically denied Black applicants at astronomical rates in comparison to other populations, across 20 or 30 major metropolitan areas, over the last three to four decades. And the most important avenue of wealth creation from roughly the 1980s to the early 2010s was your home, the creation of value in your home. But if you weren't able to either purchase or refinance your home because of your ethnicity or your race, it turns out you were simply generationally denied that wealth creation. And that happened because we didn't keep the models up to date. We didn't really understand what was inside of them. We didn't understand how much bias was being applied over time.
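The kind of audit that would have surfaced this is straightforward to express in code. Here is a minimal sketch with made-up data, not the lending study referenced above: compare approval rates by group and flag disparate impact under the commonly used four-fifths rule:

```python
# Hypothetical loan decisions; "group" is a protected attribute.
import pandas as pd

loans = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group.
rates = loans.groupby("group")["approved"].mean()
print(rates)

# Disparate impact: each group's rate vs. the highest rate.
impact = rates / rates.max()
print(impact[impact < 0.8])  # groups falling below the four-fifths rule
```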

(20:11):

And so my view now is that, as practitioners in this space, part of our jobs, part of our day-to-day work, has to be really understanding how to solve for these major dislocations, because ultimately the dislocations are only getting larger. Look at the amount of income separation between the richest and poorest in this country even today. It used to be a fairly small spread between the income growth of the middle class and that of the most influential, most affluent. Well, it turns out that in today's day and age, in the 2010s and beyond, income growth has declined quite dramatically for the middle class.

And at the end of the day, the most affluent are seeing the most income growth. And why is that? Ultimately, it's very clear this is happening because, even despite our corporations and companies being the most productive they've ever been, we are still not seeing that inflection in how much wages and compensation benefit most people working in those companies. It turns out the reason is that technology, and particularly automation and AI, is keeping that participation, particularly the increase in wages, relatively tapered in comparison to productivity. So we have to think of ourselves not just as practitioners and builders, but as practitioners and builders who are really thoughtful about redefining and reformatting what meaningful work is over time.

So I don't even think about this as the great resignation that occurred during the pandemic; I think about this as the greatest upskilling opportunity that exists for almost anyone working on building this next generation of AI, one that can support the original thesis I started with: humans and machines working collaboratively, side by side. Because if we don't, this disruption is only going to accelerate across all forms of labor in society. It's not just blue-collar labor and skills being replaced over time, like a cashier being displaced by an automated checkout experience at a retail store. If you look at this particular graph, it turns out credit officers, loan officers, benefits consultants, and managers are all possibly being disrupted faster today than ever before.

(23:10):

So all forms of labor, both blue-collar and white-collar, can be disrupted now that these generative technologies are emerging, after the AI movement has played itself out over the last decade and a half, and we are seeing the full acceleration of the generative movement that's occurring. My view is that, as builders and investors and people thinking about this future, we have a great calling upon ourselves to think outside of our own little prosperity bubbles and, with humility, understand how all of society can participate in the great future that's happening as we speak. In 1960, J.C.R. Licklider, a man who was far ahead of his time, thought about this in the seminal paper he wrote called "Man-Computer Symbiosis."

And in it, he talks about the future of work not being about machines replacing us, but about machines augmenting human potential. And so, if we focus the attention we have in this moment toward where this could go, we could all largely participate in the benefit to society over time. My view is that machines can assist humans, that we can be supported by artificial intelligence and machine learning that works responsibly on behalf of society, and that ultimately a more moral and more just use of data and models at scale can build better business outcomes, better communities, and a better society. And so, with that in mind, I want to turn it back to the rest of this great event that Krishna and the team at Fiddler are putting together, with a hopeful message that we should all participate in this great future, and do it in a responsible, moral way.

Mary Reagan (25:08):

Thank you so much, George. Really interesting; I was fascinated by a lot of the points you brought up. We have a couple of audience questions, so I'm going to turn to those. This first one, I'm not quite sure at what point they put it in, but the question is: how should we think about GPT-4 in this context?

George Mathew (25:25):

Yeah, I'm glad you asked the question about GPT-4. It's a pretty tremendous moment for humanity when you have this emergent large language model, and it's not the only one; there are a few GPT-4-style models coming from the likes of Cohere and Anthropic and a few others. But GPT-4 represents a step-function change in terms of the cognitive ability and the generative ability of the models that have been evolving. As we know, GPT stands for generative pre-trained transformer. What I think is happening at this moment is that anything you had historically built around GPT-3 almost has to be chucked out, and you have to rebuild it on GPT-4.

And so I think the best way to think about this right now is that there's still going to be a need for very finely tuned models. There's going to be a need for things built on a prune-and-tune basis off of an LLM like GPT-4. But what you typically built in a previous generation of a model largely will not have anywhere near the fidelity of a generalized model in this next generation, and so you have to rebuild again. My view is, you've got to really prepare for that ongoing rebuilding. And then the next question was: George, what application did you use for the generative AI imagery in this presentation? I used a few, but it was predominantly built off of DALL-E 2 as well as Stable Diffusion, not Midjourney, excuse me. I haven't been able to get enough time to hack away at Midjourney 5 yet; I hear it's actually incredible, so it's on my list of things to work with. But those three are the most exciting ones right now.
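As a rough sketch of that prune-and-tune pattern, fine-tuning a small piece while the large base model stays frozen, here is a LoRA-style low-rank adapter written directly in PyTorch. This is illustrative only, not how any particular vendor does it; real work would typically use a purpose-built fine-tuning library:

```python
# Train a small low-rank update on top of a frozen pretrained layer.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a pretrained linear layer; train only a low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        # Frozen base output plus a small trainable low-rank correction.
        return self.base(x) + x @ self.A.T @ self.B.T

layer = LoRALinear(nn.Linear(512, 512))
total = sum(p.numel() for p in layer.parameters())
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"{trainable} trainable of {total} total parameters")  # ~1.5% here
```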

Mary Reagan (27:35):

I've used up all my credits at DALL-E, and I'm waiting for them to... yeah.

George Mathew (27:38):

Yeah <laugh>. Yeah, I definitely re-up my DALL-E credits for sure.

Mary Reagan (27:45):

I see one other question I want to get in before the end, which is: what do you see as some historical precedents that we could look to as a reference? This is from Eric. And alternatively, what science fiction about an imagined future can we reference?

George Mathew (28:02):

Yeah, great questions. The historical precedent I use right now is the space race itself. The closest thing I can imagine to what's happening today is the fundamental amount of research that went into getting a person onto the moon, a space race that happened through most of the sixties and well into the seventies. What was fascinating was seeing that the fundamental research of the space race also generated a tremendous amount of opportunity in terms of what an industry looked like. The examples of those days were Northrop Grumman and McDonnell Douglas; the aerospace industry and the defense industry all emerged from the space race itself. So I historically use that as a bit of my reference point for what this shift we're seeing will look like with the introduction of large language models at scale.

And when you think about the benefits of that, the point around responsibility matters, because some of that technology ended up being used for very nefarious purposes for society, and some of it was of great benefit. I think we're at a similar precipice here. I gave the example of a non-generative model that was used to historically deny Black applicants loans for a good 20 or 30 odd years. So having a much more active view of how bias, model drift, and model performance in particular play out over time is going to be quite important, because you need to monitor the models that are in the wild and in production for these things.
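One common way to monitor production models for drift, sketched here with synthetic numbers rather than any real lender's data (not Fiddler's implementation), is the population stability index, which compares a feature's training-time distribution against live traffic:

```python
# Population stability index (PSI) between a reference sample and live data.
import numpy as np

def psi(expected, actual, bins=10):
    # Bin edges come from the reference (training-time) quantiles.
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]
    e = np.bincount(np.digitize(expected, cuts), minlength=bins) / len(expected)
    a = np.bincount(np.digitize(actual, cuts), minlength=bins) / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(42)
train_scores = rng.normal(600, 50, 10_000)  # e.g. credit scores at training time
live_scores = rng.normal(630, 60, 10_000)   # live distribution has shifted
print(round(psi(train_scores, live_scores), 3))  # > 0.2 is a common alarm level
```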

To answer the question about science fiction of an imagined future we can reference: my favorite science fiction is actually near science fiction. I don't know if anyone's read The Expanse, but The Expanse talks about what human progress looks like in 50-to-100-year segments, particularly as we expand into the rest of the solar system. But in the near term, my view is that this is not a world where the Hollywood-style Terminator scenario, where Skynet rains missiles from the heavens, destroys the atmosphere, and we become slaves to machines, is a future I really see.

Mary Reagan (30:50):

I think we're all glad to hear that.

George Mathew (30:52):

Yeah, hopefully, right. I think if we do this right, this could be a massive benefit for humans and machines, particularly humans working alongside machines for a more positive society. But we have to do that the right way. We have to be thoughtful about it. We have to be deliberate about it. And that's the reason I built the talk the way I did.

Mary Reagan (31:18):

Well, I'm going to take this one last question, and then I'm going to close us off. Again, just a reminder that we can continue this conversation in the Fiddler community, so sign up there; we'll drop instructions into the chat if you don't know how. But George: how can we audit or vet for bias in the training data we get from vendors, especially given that training data set details are often seen as trade secrets?

George Mathew (31:46):

Isn't that interesting? I love this question, because on one level, most of the advantage over time in how models are used at scale is not going to be around the public data we're seeing in the GPT-4-style models, and potentially even the GPT-5-style models. Over time, we're going to run out of human corpus, and we're going to lean on private data even more. Up to this point, that private data was used for training, but by the way, there's going to be a fair amount of private data in the inference alone. With the reinforcement learning around inference, particularly as you get fidelity and usefulness out of these models, you're going to see large amounts of private data accumulate in the inference alone.

I think part of the training question in general, and maybe even the inference question, is that you have to balance how you built it and what you built it with, at least giving some good footnotes, some good references in terms of what was used, without necessarily giving away the secret sauce of the methodology or the specifics. And there's a balancing act in that. Just like in any good paper, you can illustrate how and what you did at the end of the day, but still keep the process of how you got there somewhat of a trade secret. I think there are still ways we can do that. I don't believe you have to be completely dark about the methods and processes you use, because if you have models, and the combination of models and data, being produced in a complete black box, that is the moment you introduce these massive pools of bias: you can't introspect, you can't explain, you can't really understand what's going on under the covers.

So I used to say it best like this: don't build black boxes. Try to build clear boxes as much as possible, so that at least you can understand what's there. And you can still have, call them proprietary, clear boxes where you own the IP over time. You don't necessarily give away trade secrets, but you still need a well-organized way to vet the bias, the model drift, and the model performance issues that become challenges over time.
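One sketch of what that clear-box practice can look like, using scikit-learn's permutation importance on synthetic data: it reports how much each input drives the model's decisions without exposing the model's internals, so a vendor could publish this kind of summary while keeping the model itself proprietary:

```python
# Permutation importance only needs predictions, not model internals.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

# Rank features by how much shuffling each one degrades accuracy.
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```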

Mary Reagan (34:21):

Great. Excellent. George, thank you so much for a very interesting talk.