Generative AI Meets Responsible AI: Thinking of AI as a Public Service
Despite the fear and uncertainty around AI, it can be used as a tool for widespread good and societal benefit. Saad Ansari, Director of AI at Jasper AI, dives into this potential future.
Watch this session on Thinking of AI as a Public Service to learn:
- The importance of intentionality when designing new technology
- Applications for generative AI across different domains
- How the future of AI can be directed toward societal benefit
Mary Reagan (00:07):
Our next talk is with Saad Ansari from Jasper AI. He's the Director of AI, where he leads a team building AI systems, models, and feedback loops. He previously worked at other AI companies and served in the Obama administration, where he helped the government navigate AI responsibly. So with that, hi Saad, nice to see you. I'm gonna turn it over to you.
Saad Ansari (00:30):
Thanks so much for the introduction, Mary, and thanks to Fiddler for inviting me. I'm super excited to chat with you all about thinking of AI as a public service. I think this is kind of an interesting way to frame it, and we'll get into why I think it's an interesting topic. Quick screen check: are you all able to see my screen?
Mary Reagan (00:50):
Yes.
Saad Ansari (00:50):
Okay, cool.
Mary Reagan (00:51):
Do you wanna go ahead? You can put it in slideshow mode, actually, if you'd like.
Saad Ansari (00:55):
Ok.
Mary Reagan (00:57):
That works, great.
Saad Ansari (01:00):
So just a quick intro: I had graduated from grad school and was slated to join the Obama administration. Security clearance takes a long time to get, so I started my first company right after grad school. It was based out of the MIT Media Lab, specifically the Center for Bits and Atoms, in collaboration with a bunch of MIT PhD students and professors, and it taught me a lot about what it means to build a technology that does something good. It was a hardware-software company focused on optimizing energy in a disaggregated energy grid. We actually got patents for it, and we made it to the MIT 100K finals and MIT INFINITY LABS.
(01:42):
But the startup totally failed. It didn't monetize, and it was really hard to sell any of that technology. I had thought about it from the technology perspective, not the business perspective. If you're building something for refugee camps and folks in disenfranchised areas, you can't really sell to that group, because part of the problem was a lack of money to begin with. So that taught me a lot about good technology. After that I did join government, and I worked a lot with national laboratories and a lot of hardcore research and development. Those two experiences combined taught me that if you're trying to build a technology that does something good (and I know good AI and responsible AI aren't exactly the same thing; we'll get to that in a minute), something that benefits people, it's not inevitable that the technology happens.
(02:32):
There's probably a lot of great technology that could exist but doesn't, simply because market forces don't support it. If you want to develop a good technology, it requires intentionality. For example, here we have a picture of something I think was also invented at MIT: eBraille. It's a pad that lets blind readers do what sighted folks do with an ebook; it makes Braille dots pop up so you can read them. You wouldn't have developed this if you were only solving for the market; you needed intentionality to develop eBraille. Without intentionality, a technology that the market does produce might still incline toward something generally beneficial.
(03:11):
But some technologies incline toward harm as well. I'm not talking about anything AI-specific here; a weapon of violence, for example, inclines toward harm by definition. And when you're trying to get one of these good things to market, sometimes you have to do it with partners, and those partners can include the government, among others. By the way, for this eBraille tablet, just guess the price; we'll get back to that later.
(03:41):
Now, artificial intelligence: we're having a moment in generative AI, and I think there will be things beyond generative AI as well, but we're having a moment, and it's an incredibly powerful technology. It's literally powerful: if technology is defined as something that does work, this does a lot of work, just as measured by its computation. And it's powerful in its application. This is a technology applied to another technology: human language is a sort of technology, maybe the most intimate, important technology to our species. Now we have a powerful technology that applies to that other very powerful technology. And by extension I don't just mean text; I also mean images and media, and I think this might even enable new media.
(04:28):
Maybe games become as easy to produce as images once were, and interactive media becomes more of a thing. Another reason AI is powerful is that it might be highly accessible. It might not be something you need a PhD to access; we might be able to make it as easy as, I was going to say as easy as using Adobe Photoshop, but to be honest I'm not too good at Photoshop. It might be as easy as using an iPad: you drag and drop a couple of things, the software interoperates, and you're able to create value using AI. I don't think this is inevitable, though. Going back to the earlier principle, things aren't inevitable.
(05:14):
If Steve Jobs hadn't come along and met the right people, maybe Apple wouldn't have existed. There are many things that might not exist. My team and I went to NeurIPS recently (NeurIPS is the big AI R&D conference of the year), and I noticed that a lot of the R&D was still focused on modeling, not on AI systems. Modeling is a huge part of AI systems, but monomodelism, thinking about one model at a time, really limits what you can do with artificial intelligence. I haven't seen much of a movement toward thinking in design terms about building not just AI applications, but the modularity of AI itself.
(06:02):
Going back to eBraille: GPT-4 came out recently, and I was really happy and excited to see an organization called Be My Eyes put together an application that allows blind folks, or folks with visual impairment, to point a phone at something and get a voice interpretation of what they're looking at. If you think about the backend of this and how you would use GPT-4 to build it, it's a very quick turn. And going back to my earlier question, guess the price: the eBraille tablet probably took millions and millions of dollars in R&D and still ended up costing $2,600, which is great.
(06:40):
It's very much worth it for folks who need it; there are other tablets that cost around $25,000. But things like this could be free, and the development time was very quick once the platform underneath existed. That's the question I'm getting at: what's the modularity of AI required to build not just things like this, but things that are much more mature and much more complex as well? So I tried to come up with a framework for how companies might position themselves on a responsibility spectrum, and how to think about use cases on it. I wouldn't say I succeeded.
(07:20):
Coming up with good frameworks for these things is a community effort. But when we're thinking about applications of artificial intelligence, I think we can ask: who does it affect? There's whoever owns the AI. There are B2B customers, who typically have a very close relationship with the company. There are people whose data was used to train the AI or any given model; let's call them authors. And there are consumers: if you're using AI to come up with marketing content, like Jasper does, who reads that content, and what relationship do you build with them? I always think it's important to have a bucket for outliers, too: people you never really think of as a key customer but who are still affected by you, or a key customer or consumer who's affected in a way that doesn't fit your main persona paradigm.
(08:08):
So one axis is typical distance from engineering, and down here at the bottom you have responsibility level. By the way, I really appreciated the previous conversation with Miriam, Toni, and Ricardo. A lot of government frameworks, like the trust and safety frameworks I'd been a part of in a previous life, are very much focused on the do-no-harm level, which is incredibly important. But there's a duty-to-benefit level too; do no harm doesn't get you there. Duty to benefit means thinking about how to use this power to benefit others. Oh, sorry, I will not take my own poll right now, but duty to benefit encourages you to think about great applications for this technology.
(08:54):
And by the way, yes, the poll did go out, and I think the first question is about where companies should fit on this spectrum. The numbers in the poll correspond to the 1 through 5 here. We can imagine different scenarios. For example, there might be a company that only cares about its B2B customers and doesn't really care what happens to the people affected by its AI. I'll just say it: I think SenseTime is like that, the company that uses facial recognition to profile Uyghurs and put them in camps. That's a very bad thing; it enables a lot of harm. Then you have companies that sit somewhere on the do-no-harm level, with various degrees of relationship with or concern for end consumers and outliers.
(09:35):
And then you'll have some that are very much focused on duty to benefit. I don't know the Be My Eyes company too well, I've just read about them, but I feel like they're really concerned with things like: hey, let's build something for the blind; or hey, here's a group of people whose rare language is dying out, let's make sure our AI addresses their language as well. So I think we can map companies and their orientations this way. I know it isn't perfect; you can imagine scenarios where somebody cares about outliers and end consumers but not B2B customers. I'll make a better visual next time. You can also think about it at a use-case-by-use-case level, which gets a little messier.
(10:11):
So take the scenario of ad tech. We all love and hate ads: we don't like getting them, but we also read them all the time. A traditional ad tech approach would of course focus on benefiting the B2B customer; we want the people making ads to make more money. Then you think about what it would mean to democratize it: can anybody create anything using the same sort of platform, and can they monetize it too? That means selling not just to big businesses but to mom-and-pop shops, or even your high school student club, so somebody can campaign for senior class treasurer or something like that.
(10:57):
But then you have to ask yourself: what about the authors, the artists? Not only should we be responsible to the people whose data was used for training, but what if there were an option where you could contribute your own original human work, your tone and style, or your game style or art, and people could select a module of it, license it from you, and create work grounded in what you originally authored? That's another level of responsibility, where we're enabling artists, and I think it would be really cool, once again, for rare languages or underrepresented art.
(11:37):
But then, what about the consumers? Are we creating an ocean of spam? You could say that content without curation is typically spam. So how do we enable consumers not only to curate content, which is now going to be produced at maybe 1000x the frequency or more, but to opt in, or even ask to be nudged toward the right content? Like: don't just show me ads; show me content that helps me exercise more, or encourages me to go hiking in Utah if I'm in Utah. How do we empower end users to curate the wave of stuff that might come their way?
(12:21):
There are other levels of that too. By the way, if one kind of feature is a spam filter, this kind of feature is, I don't know, a glam filter: not just "don't give me the bad stuff," but "only give me the awesome stuff." So here you have safe and preferred content filters, as sketched below. When you get to outliers, you think about: what if somebody has an addiction, and an ad that's safe for most people is dangerous for that particular person? Considering outliers is not only possible with artificial intelligence and generative AI, given the level of granularity we can have; maybe it's a duty as well.
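To make the spam-filter-versus-glam-filter idea concrete, here is a minimal sketch of a two-pass feed filter, assuming some upstream model tags each item with topics. The names here (`UserProfile`, `classify_topics`) are hypothetical, not any product's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class UserProfile:
    preferred_topics: Set[str] = field(default_factory=set)  # opt-in "nudge me" topics
    blocked_topics: Set[str] = field(default_factory=set)    # per-user safety exclusions

def filter_feed(items: List[str], profile: UserProfile,
                classify_topics: Callable[[str], Set[str]]) -> List[str]:
    """Two passes: a safety filter (hard per-user exclusions, e.g. for someone
    with an addiction) and a 'glam' filter (keep only opted-in content)."""
    kept = []
    for item in items:
        topics = classify_topics(item)         # stand-in for a classifier/LLM call
        if topics & profile.blocked_topics:    # safety pass: drop outright
            continue
        if topics & profile.preferred_topics:  # glam pass: only the awesome stuff
            kept.append(item)
    return kept
```

The per-user `blocked_topics` set is what makes the outlier case expressible at all: safety becomes an individual setting rather than a single global rule.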
(13:00):
Then, of course, you have the dystopian evil version of everything. We're doing this hackathon at Jasper, and just yesterday, while we were going over a couple of applications, somebody said: whoa, what if somebody uses this like in the movie Blade Runner, where the little ads come and talk to you? Except it wouldn't just do that; the holograms would be a picture of you. It could be very predatory, like a bank trying to push a high-interest mortgage: "Hey, if you don't take this mortgage, you'll be impoverished. Here's a picture of you looking poor and destitute." Very manipulative. So it does enable a lot of harmful things as well.
(13:36):
And I don't say that because I'm paranoid about it. I say it because unless you think about it, you're not able to build against it. There are a lot of scenarios that could unfold from here, and I think it's always helpful to think about the best case and the worst case; reality happens somewhere in the middle, and reality is the only thing we can control. So let's start with the bad case, so we can end up with the good case. Bad-case scenario: really sticky, evil, on-fire spam. AI is used to create emotionally manipulative, personalized ads that control people's behavior and push unhealthy choices. That could really happen; it's technologically possible. Few people benefit from generative AI revenue, and minority languages and small authors, sorry to say, go extinct.
(14:19):
What I meant to say here is that it's very possible for the main value of this technology to be a bit hegemonic, focused on a couple of interests and a couple of languages without enabling others. I know some of these models can speak in nearly every language, but that doesn't mean all the applications will, so this could be an outcome, and outliers' problems worsen. Even with the internet, it took companies like Google decades to start dealing with outliers and being responsible toward them, and that could happen again. So let's call this scenario "sticky spam." In the good-case scenario, call it the RenAIssance (it looks better than it sounds), anyone can access top-tier, interactive, human, social, multimodal education.
(15:07):
I really do think education is a wonderful use case for generative AI. And also certification, because education without certification is like a tree that falls with nobody around to hear it; you need certification to get jobs. I think both of these things are solvable. I don't mean MOOCs; I don't even mean something that's just you and the AI. I think AI can support social interactions and social connections in ways that facilitate education, which research has found to be a social endeavor; people learn through human interaction, human incentives, and relationships. So new businesses make money, but their consumers intrinsically benefit as well, kind of like that glam filter idea: give me the media I want, don't just give me ads.
(15:49):
New modes of art and new types of artists emerge. I don't think it's just "hey, let's make more images." It could be "hey, everybody can make a game now." We can make games and more interactive media; AI is not just about content, it's about logic flows that get produced and turned into apps. So I think we can make entirely new art forms, even, or new types of games. Language and other barriers fall, and people get nudged into new knowledge. There are so many things we don't know simply because they were in some other language or some other group; generative AI combined with recommender systems and routing could really help with that. And of course, outliers and minority languages flourish. So those are two extremes, two scenarios, and we can bend this technology to go either way.
(16:32):
So the future is undetermined, and we should understand all the future scenarios that could happen. We should understand that nothing's really inevitable without human agency. I agreed with Ricardo's point that we shouldn't anthropomorphize artificial intelligence; agency is certainly something you think about very differently for AI versus humans. But we have agency, and these scenarios aren't inevitable; they require intentionality in our agency. Unless we provide that sort of guardrail and guidance, technologies tend to take on a life of their own in society and in how people view them, and we shouldn't assume they'll lean good or that their harm will mitigate itself.
(17:16):
I actually think this is a unique technology. I know everybody has always said every technology is the most democratizing technology: Bitcoin, blockchains, the internet. But I actually think this one could do it. I don't think it's inevitable, but this could be the most accessible and powerful technology that benefits people. In the previous AI world, we would do these AI-for-good hackathons, and it was very much like picking a boulder up and moving it over there to get it to do something good. This is more like a fire: it spreads by itself. I think it's different, and I think we can make it even more different in a good way.
(17:53):
Technologically, what does this mean for all the builders and developers here? First of all, kudos to the open source community. Oh, and by the way, I totally forgot: thank you also to Fiddler, not only for facilitating conversations like this, but for building a platform that helps people think intentionally about what they're building, what effects it has, and how to monitor it. Tech companies like that are super helpful and valuable. So, kudos to the open source community: there are a lot of groups out there, like LangChain and Dust and a couple of others, thinking about the flow and modularity between pieces. I'm a big fan of saying that modeling helps AI, but AI is not just modeling.
(18:35):
You have to think about how things fit together: retrieval, foundation models, adapters, the databases they feed into, and how prompt modules and pieces like that work together, as in the sketch below. Imagine people had access to a really well designed, Apple-design-level studio where you had chains and some sort of interoperability protocol between models. Right now we make models and put them in open source, and well-trained developers can make them interoperate, but the average person won't know what that means or how to do it, or how you would weight training or fine-tuning differently so that once a model sits in a system with adapters, retrieval, foundation models, and other pieces, it produces a predictable effect.
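As a rough illustration of the modularity being described, here is what composing retrieval, a prompt module, and a foundation model into one chain might look like. The interfaces are hypothetical placeholders; frameworks like LangChain provide real versions of these building blocks:

```python
from typing import Callable, List

# Hypothetical interfaces: each piece of the system is a small, swappable module.
Retriever = Callable[[str], List[str]]   # query -> relevant documents
FoundationModel = Callable[[str], str]   # prompt -> completion

def make_rag_chain(retrieve: Retriever, generate: FoundationModel,
                   prompt_template: str) -> Callable[[str], str]:
    """Compose retrieval and generation into a single pipeline."""
    def chain(query: str) -> str:
        docs = retrieve(query)                    # retrieval module
        prompt = prompt_template.format(          # prompt module
            context="\n".join(docs), question=query)
        return generate(prompt)                   # foundation model
    return chain

# Usage: any retriever or model matching the interfaces can be swapped in.
# qa = make_rag_chain(my_vector_search, my_llm,
#                     "Context:\n{context}\n\nQ: {question}\nA:")
# answer = qa("What does the eBraille tablet cost?")
```

Today, wiring pieces like these together takes a well-trained developer; the studio idea is to make the same composition, with predictable results, accessible to anyone.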
(19:19):
We do a lot of trial and error, but I think we could get to predictable effects in terms of what the AI then does: semantics, knowledge, logic flow, and other things like that. So a simplified studio, as if Apple, a company very well known for design, had made it so simple that anybody could create AI applications, with predictable results, would be extremely powerful. It would really speak to moving end consumers up the value chain, and to getting more companies like Be My Eyes out there doing even more non-obvious things further up the stack. And just to draw an analogy for how far I think we are from the endpoint: when the internet first came out, the low-hanging-fruit companies came first, and you get internet search. The internet is about a new relationship with knowledge, primarily centered around the verb "search."
(20:12):
Then companies came up later that were about searching for friends, social media platforms like MySpace and Facebook, where people come together; and then searching for products. Amazon uses the internet, but it's really a supply chain company; that's like 95% of it. You get these other pieces bolting on, and we're at maybe 5% of the things that could be bolted on, of the ways this can enable other things. So there's a lot more to come, and a studio where all of that becomes more modular, interoperable, and easy to do, even if you're just a kid in middle school, would be such a valuable thing.
(20:49):
And I think the open source community plays a huge role in that, as well as in getting people to commit to frameworks. So I think we'll move to discussion. Sorry for the long monologue; I was hoping to end a little earlier. I'm open to questions, feedback, and thoughts, and I'm actually really curious how we did in the poll. But yeah, I'll pause now and turn it over to my host to help me out with the polls and questions.
Mary Reagan (21:13):
Yeah. A really interesting discussion, thank you so much, Saad. We have some audience questions here, so maybe we'll start with: how do you balance the effectiveness of your AI systems against their bias?
Saad Ansari (21:26):
Yeah, there are a couple of ways to do it. One is to try to prevent the bias from happening in how you train the model, select the model, and then fine-tune it. But bias means different things. It could be "hey, give me a picture of a rich person, or tell me a story about a rich person," and what tends to come out is a white male. You can solve that in the actual model creation process, but sometimes we don't know the exact behavior for every scenario, because, like I said before, outliers make it really hard. We don't know what the outliers are, so we have to test for those. For example, something we hadn't anticipated would be perceived as racist came out in one of the model outputs recently.
(22:14):
We were able to detect that not because our system was ready to filter and understand it, and not because we had trained for it, but simply because we got user feedback. We took that feedback, went back into the process, prevented it from ever happening again, and made that part of the live feedback loop. I'm not sure if that answers the question, because bias can be interpreted in a lot of ways across a lot of applications.
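A minimal sketch of what such a live feedback loop might look like, purely illustrative rather than Jasper's actual system: user reports land in a review queue, and confirmed reports are routed both into an immediate filter and into training data for the next fine-tune.

```python
import json
from pathlib import Path
from typing import Dict, List, Tuple

FEEDBACK_LOG = Path("user_feedback.jsonl")  # hypothetical review queue

def record_feedback(output_id: str, text: str, reason: str) -> None:
    """Append a user report (e.g. 'this output reads as racist') for human review."""
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps({"id": output_id, "text": text, "reason": reason}) + "\n")

def route_reviewed_reports(reviewed: List[Dict]) -> Tuple[List[str], List[Dict]]:
    """Turn confirmed reports into (1) filter rules that block the failure
    immediately and (2) labeled examples for the next fine-tuning run."""
    filter_rules, finetune_examples = [], []
    for report in reviewed:
        if report["confirmed"]:
            filter_rules.append(report["pattern"])                 # stopgap: block now
            finetune_examples.append({"prompt": report["prompt"],  # fix: retrain later
                                      "rejected": report["text"]})
    return filter_rules, finetune_examples
```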
Mary Reagan (22:37):
Yeah, that makes sense, though, I think, very broadly. So there's this question from Mia Meyer, who asks: would you comment on the bias of language systems? In particular she's referring to "man is to programmer as woman is to homemaker," and the risk of bias amplification from these models.
Saad Ansari (23:00):
Yeah, absolutely. I'll start with the bad news and go to the good news. The bad news is that what models represent is typically what humans represent. They're just a mirror for the data they were trained on, and we made that data; that was us humans. So the bad news is that it's in there, and it's bad. The good news is that models are more controllable than human discourse. We can, to some extent, prevent the behavior in the training process, in feature engineering and data selection, and in how some of the models are designed. And once we detect it, we have ways to remove it and balance it out. I think some of the more recent models no longer have that particular bias, but were we to find it, we would know how to change it.
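The analogy in the question comes from the word-embedding literature, where a classic fix for exactly this pattern is to estimate a "gender direction" from definitional word pairs and project it out of gender-neutral words. A simplified sketch of that idea, with the embedding dictionary `emb` as a placeholder:

```python
import numpy as np

def gender_direction(emb: dict) -> np.ndarray:
    """Estimate the gender direction from definitional word pairs."""
    pairs = [("he", "she"), ("man", "woman"), ("him", "her")]
    diffs = [emb[a] - emb[b] for a, b in pairs]
    g = np.mean(diffs, axis=0)
    return g / np.linalg.norm(g)

def neutralize(vec: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Remove a word vector's component along the gender direction, so a
    gender-neutral word like 'programmer' no longer leans toward either
    side of the he/she axis."""
    return vec - np.dot(vec, g) * g
```

This is the "controllable" part: once the bias is measured, it can be operated on directly, which is much harder to do with human discourse.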
Mary Reagan (23:57):
Mm-hmm. Yeah, I mean, it's so hard, right? Our society is so biased, and then we're creating the data from ourselves.
Saad Ansari (24:06):
But the good news is that there's no theoretical reason why we can't eliminate it. It's not a theoretical problem; it's a problem of practically applying this and getting it done over time.
Mary Reagan (24:18):
And Mia has a follow-up: what evaluation metrics, beyond traditional performance metrics, would you recommend for language systems?
Saad Ansari (24:26):
Yeah, this is one of my favorite areas. You all know there are the typical technical metrics like BLEU and ROUGE, and then there are some pretty interesting benchmarks like BIG-bench and so on. From a customer's perspective, though, form follows function: the metrics that matter are the ones that happen to be most important to your customers. Not even what they told you, but what you think would be most useful for them. Converting those into something you can measure on content gives you your most important metrics. It's just form follows function. For example, even BIG-bench didn't have some of the things that were important for us at Jasper, because our customers wanted different things than what was in there. So we developed metrics that measure what success looks like for them.
(25:01):
Then content metrics are what correlate to those end success metrics. I can't get too far into it, but for example, let's say a marketing customer wants more LinkedIn likes, and let's say we're analyzing their blog posts. Then we can ask: what content attributes correlate with LinkedIn likes? Semantic complexity, sentence structure, topics, tone, length, wit (is it humorous?), and so on; there's a whole bunch of things you can break content down into. Then you do a bunch of experiments, correlate the success metric to the content metrics, and then consistently hit those content metrics. So I think the rule of thumb is form follows function, even for metrics.
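A toy version of that experiment, assuming you already have a set of posts with their like counts. The content metrics here are deliberately simplistic stand-ins; real ones would come from NLP models for tone, topic, humor, and so on:

```python
from typing import Dict, List
import numpy as np

def content_features(post: str) -> Dict[str, float]:
    """Toy content metrics computed from raw text."""
    words = post.split()
    sentences = [s for s in post.split(".") if s.strip()]
    return {
        "word_count": float(len(words)),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "questions": float(post.count("?")),
    }

def correlate_with_success(posts: List[str], likes: List[int]) -> Dict[str, float]:
    """Pearson correlation of each content metric against the success metric
    (here, LinkedIn likes). High-correlation metrics become generation targets."""
    feats = [content_features(p) for p in posts]
    ys = np.array(likes, dtype=float)
    return {name: float(np.corrcoef([f[name] for f in feats], ys)[0, 1])
            for name in feats[0]}
```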
Mary Reagan (25:50):
And let's see, where are we on time? I want to mention that we actually posted the polls in the Fiddler community, so maybe after this we can continue chatting there, because it's going to be hard for me to share my screen to show people.
Saad Ansari (26:01):
Oh, okay. I do remember seeing that everybody thought we should be at a 5, though. I think that was from the poll.
Mary Reagan (26:08):
Totally. There was one defector who thought there should be a 2 there.
Saad Ansari (26:12):
Yeah, so I'm glad you all think it should be a 5, and then I think we were more pessimistic in terms of what will actually happen. Oh, good, so we're mostly 4s and 5s here, yes. One person thought otherwise. And then for the second one, yeah, wow, you're all a bit pessimistic. So we think it's good, we want a 5, but we think we're going to get a 2. We're getting close to the end of time, but I would love to talk with you all about what the difference between those two could be.
Mary Reagan (26:50):
Maybe really quickly, I would love to hear: what would you have answered on both of those?
Saad Ansari (26:56):
For the first one, obviously, I hope we aspire to a 5. For the second one, it's kind of like: why would you vote, or why would you ever run for public office, if you think everything is really, really bad? I feel like we have no choice but to say we also think it will be a 5, and then take on the responsibility to build it and do it ourselves.
Mary Reagan (27:21):
Yeah. Do you think there need to be any other sorts of mechanisms to incentivize companies to become more responsible?
Saad Ansari (27:28):
Yes, 100%. Going back to even the first lesson: you sometimes need non-market partners, including government. Some of the best outcomes of this won't exist without incentives that are not market incentives.
Mary Reagan (27:48):
Yeah. So we're right at time. For the folks whose questions I didn't get to, I apologize. I'm just laughing at this last one, which is: is there a Ralph Nader for AI? Hilarious.
Saad Ansari (27:59):
Yeah, let's see. That would be such an interesting campaign.
Mary Reagan (28:04):
Yeah, in any case, again, anything I didn't get to, I apologize, and we can continue that conversation in our community. Saad, I really want to thank you for your very interesting talk today and for sharing your thoughts with all of us.