Generative AI Meets Responsible AI: Best Practices for Responsible AI

Responsible AI principles and practices are necessary to ensure fair, ethical, and safe usage.

Watch this panel session on Best Practices for Responsible AI to learn:

  • Limitations of large language models
  • The importance of model governance
  • Key pieces for building out a Responsible AI framework

Moderator: Krishnaram Kenthapadi, Chief AI Officer and Scientist, Fiddler AI

Panelists:

  • Ricardo Baeza-Yates, Director of Research, Institute for Experiential AI, Northeastern University
  • Miriam Vogel, President and CEO, EqualAI; Chair, National AI Advisory Committee 
  • Toni Morgan, Responsible Innovation Manager, TikTok

Video transcript

Mary Reagan (00:07):

So our next panel is titled, Best Practices for Responsible AI. I'm gonna introduce our moderator and he's gonna introduce all of our panelists. So our moderator today is Krishnaram Kenthapadi. So he's the Chief AI Officer and Chief Scientist here at Fiddler. Previously, he led responsible AI efforts at Amazon AWS and LinkedIn AI teams. Prior to that, he was part of Microsoft Research. Krishnaram received his PhD in computer science from Stanford University in 2006. And hi Krishnaram.

Krishnaram Kenthapadi (00:42):

Hi everyone. Thanks Mary for the kind introduction. Thank you all for joining today's event. And I'm really excited about this session on the Best Practices for Responsible AI. So as panelists are joining today, let me maybe start by introducing Toni and Miriam, who I see are already here.

Mary Reagan (01:11):

Just a note if Miriam and Toni could turn their videos on. Great. Wonderful. Thank you.

Krishnaram Kenthapadi (01:18):

Awesome. So joining me today is Miriam Vogel. Miriam is the President and CEO of EqualAI, which is a non-profit created to reduce unconscious bias in artificial intelligence and to promote responsible AI governance. Many of you may have seen Miriam hosting the popular podcast "In AI We Trust" along with the World Economic Forum. Miriam is also the Chair of the National AI Advisory Committee, which is mandated by Congress to advise the President of the United States and the White House on AI policy. Miriam has a long and rich career and background in law and policy, including academic positions as well as, before that, various legal roles. So it's really a pleasure to have Miriam today.

(02:24):

Also with me is Toni Morgan. Toni is the Global Responsible Innovation Manager at TikTok. In her role, she drives the strategy for platform ethics and inclusion. Her work focuses on embedding equity and fairness in AI best practices across various product efforts, including TikTok's moderation product and policy ecosystems. Before TikTok, Toni led teams in both academia and the NGO sector, including roles at the Northeastern University School of Law and as Project Director at the Center for Ethics at Harvard University.

(03:19):

We also have a third panelist, Ricardo, who is with us today. Ricardo Baeza-Yates is the Director of Research at Northeastern University's Institute for Experiential AI. He's a fellow of the ACM and is actively involved as an expert in various responsible AI initiatives and committees, including the Global AI Ethics Consortium, the Global Partnership on AI, the IADB's fAIr Latin America and the Caribbean initiative, Spain's Council of AI, and so on. Before this, he held roles as CTO of the search technology company NTENT and as VP of Research at Yahoo Labs. And you may have seen his book, "Modern Information Retrieval."

(04:19):

So thank you all for joining today. Let me start with a question for Ricardo. About a year back, you wrote a VentureBeat article highlighting some of the major limitations of large language models. For the benefit of our audience today, could you walk us through these limitations and how your thinking has evolved since then?

Ricardo Baeza-Yates (04:46):

Thank you for the invitation to be here, and nice to meet you, Toni, Miriam. I would like to show an example I prepared to illustrate the limitations, if I can share just three slides. I will show an example from ChatGPT that shows the issues behind large language models. So, one second. Can you see it?

Krishnaram Kenthapadi (05:25):

Yes.

Ricardo Baeza-Yates (05:26):

Yes. So this all started in Brazil. Someone in Brazil searched for Brazilian researchers, and I appeared in the list, because of false reasons but also with some very important false facts. So for example, this is my biography as of February, done by ChatGPT, the 3.5 version. Some facts are right, but for example, I never worked at this university. I never won that award. I wish I could win the SIGIR lifetime achievement award, and I don't think I will ever win it. My main contribution is my book, but just that. But then it continues and basically says that I died in 2021, and with the wrong age, if you compute the difference between my birth year, which is correct, and that date. So, of course, I was curious about how I died, right?

(06:35):

So I needed to ask that. And then basically the system said that it has not been officially announced, after two years, but that many of my colleagues reported it. So I said, okay, show me one of the reports. And the system apologizes. This is the first thing: it's a system that sounds very confident, but in some sense it's very naive, because it believes anything you say, even if you tell it that it lied. Because the system doesn't know how to lie; the system doesn't know what it is writing. The system is doing something that people call, I will say, hallucinating, but I think there's a better word for that, which I shouldn't use here. And then it basically couldn't provide any statement. So the one thing I asked again was: is there any way the system could have found that I was dead?

(07:35):

Well, I found that one person with almost my same name, but not my second surname, died in 2021. Maybe this is the reason the system generated that. But now the new version uses GPT-4. So I said, okay, maybe things have changed. And then I did the same query, like two or three days ago. And now my birthday is wrong, the day and the year. So now I'm much older, but it computes correctly, based on this new date, when my bachelor's degree should have been. But in 1938 I was still in high school, so that's wrong. And then it invented a thing at Waterloo (Waterloo was my PhD alma mater), but I have never been a cryptographer. So this is also a wrong fact. Then this is far too early, 2002; I mean, Yahoo Labs didn't even have a search engine at that time.

(08:42):

So this is also wrong. And finally, when I keep going, there are more wrong facts, and basically I haven't won this award. And again, it insists that I had this award from SIGIR. So this is the problem: the system doesn't know when it is not saying the truth. It's not lying, it's just inventing things. And worse, this version has more mistakes. But maybe for me that's not the most important part, because it's only about my life. There's another problem, which is more complicated. If you ask the same thing in Spanish or Portuguese, which are two languages I speak, well, there are more wrong facts, and they are different from the ones in English. We always said that the problem of the Tower of Babel was about having too many languages. And this is the first problem: we have around 7,000 languages that are still alive, and only around 300 have a Wikipedia, and maybe 100 have enough language resources to build these models.

(09:53):

So one problem is that we are increasing inequality: people cannot use these tools if they don't speak the languages these tools speak. So here there are more mistakes. Also, some things are not translated; the system didn't translate "Spain" into Spanish. It repeated facts, so here these two are repeated; basically it doesn't realize that it repeated the same text again, and it keeps making more mistakes, and in Portuguese it's the same. So now the Tower of Babel is not about different languages but, in some sense, about different knowledge, which is much worse. Imagine that someone makes an argument saying ChatGPT said this, or worse, someone else says ChatGPT said that, and they're not consistent. So this is the problem: you get different false facts in Spanish and Portuguese, and you also get different correct facts in Spanish, English, or Portuguese. Now, there's a more subtle problem beyond this inequality of languages, and that is the inequality of education. The people that can use these tools will basically thrive. The people that don't have the education to use these tools will be worse off than before. So for me, the main ethical danger here is that inequality in the world will increase even more than today. And I think I will stop here. I didn't talk about biases, but of course there are many biases: gender bias, ethnic bias, classism, sexism, and so on. So this is another problem that I'm sure my colleagues will address.

Krishnaram Kenthapadi (11:41):

Thanks, Ricardo, for that really insightful example and for highlighting some of the issues firsthand. Let me move to Miriam. Given your experience being plugged into both technology and law and policy, do you have thoughts on ways in which we as a society, or as part of the government, can bring in the right level of regulation to address some of these concerns?

Miriam Vogel (12:17):

I do. Thank you for the question. I think, you know, savvy people, Toni, others at different companies across the globe, have started to see this is really a must-have. It's not an interesting question about whether or not you need to think about responsible, trustworthy AI. It's an imperative. I think there are a few different reasons why people land on that, but whichever one lands you there, it's the right place to be. I'm just glad you landed there. So, first of all, some people are motivated by the fact that it's actually an employee retention issue. If your employees know that you are using AI in a way that is hurting people or not including people, if it's biased or dangerous in some way, your employees generally don't wanna be a part of that.

(13:13):

And, you know, employee retention, keeping the best talent, is obviously a priority for all good companies. Second of all, if that's not enough, you have brand integrity. If your AI system becomes part of a headline, or even a rumor, that it's discriminatory, that it's biased, that it's unsafe, that there are cyber risks that could be found through its implementation, well, your brand is gonna lose its integrity. And obviously regaining that is, if not impossible, pretty close. On top of that, smart companies realize the advantage when more people are able to use their AI. Ricardo was talking about, you know, challenging the AI in Portuguese and Spanish. We know too many AI programs are not trained in other languages; they are interpreted from English into other languages rather than being native speakers of those languages. If you're a smart enough company to create AI programs built in other languages, you have more populations that wanna use them.

(14:16):

So it's a consumer advantage to be thoughtful about responsible AI. And fourth, and finally, getting to the original point of your question as well, it's a liability. There's litigation risk, and there's regulatory oversight that is already underway and that is coming. So if you are savvy and you wanna make sure you're not caught up in a lot of litigation and other legal challenges, in addition to the bad headlines, you're really gonna start thinking about what I call good AI hygiene and implementing responsible AI governance within your company. There are laws currently on the books that are increasingly being used in the AI space, whether it's the EEOC, which has an AI initiative and a historic joint statement with the DOJ, where they said last spring that if you're using AI that is in violation of the ADA, they're looking out for you.

(15:18):

They're looking for those cases. Well, how many people are using facial recognition that is not discriminatory, that is not biased, or that is not leaving out some protected classes, let alone voice recognition, which is not that great at different voice intonations and different dialects? Well, what if you have a speech impediment? Then you have a real problem, not only in engaging your customer, but a potential ADA violation under US law. So we know that there are many laws on the books that are increasingly going to be applied in the AI space, in addition to those that we've heard about coming down the pike, whether it's the EU AI Act or others.

Krishnaram Kenthapadi (16:00):

Thanks, Miriam, for that context and for going into this different dimension. So let me move to Toni, who has a lot of experience as part of the responsible innovation team at TikTok, which is a prominent social media platform. Given your experience, what are some of the related challenges that you've seen at TikTok? How do platforms like TikTok go about moderating content produced by the community? What are some of the responsible AI challenges there?

Toni Morgan (16:35):

So first, thank you for the invitation to speak here. And that is a big question, because in fact we have to moderate millions of pieces of content each day. And so that means technology like machine learning plays a pretty significant role in the ways that we work to keep TikTok safe. So when we think about the way we're using technology at TikTok, we use technology for scale. I'm gonna talk about content moderation, because that's largely where our team is driving a lot of work, and where I think there's gonna be some resonance here. And we use content moderation for context. So for example, you know, how do you train a model to tell the difference between someone who is using a slur in a hateful way versus a creator who recognizes that the slur is being directed at themselves or their community and wants to use it in an educational context as reappropriated speech?

(17:37):

How do you train a model for that, right? And so that's where this balance comes in, you know, thinking about tech for scale, knowing that this is happening around the world in a number of ways, in different linguistic contexts and different regional contexts, and where we need to think about how we govern these individual pieces of content so that, you know, folks get to continue to participate on the platform and enjoy all the benefits that TikTok offers. And so that comes with its challenges, but I would say that these challenges aren't necessarily unique to TikTok; the industry is facing them as well. There are three in particular that I think we really focus on. So one, as I mentioned, is content moderation. You know, thinking about creating a safe and inclusive online environment for folks means governing our work through our community guidelines.

(18:29):

So in our rules on hate speech and misinformation and explicit material, there are very clear guidelines in place. Without those rules, we might run into situations where there are incorrect content decisions, and so we've gotta work pretty diligently on that. And AI-driven content moderation systems struggle with context, understanding things like sarcasm and reappropriation and the nuances of different languages and cultures. So that is one challenge that we run into. A second challenge that is, again, not unique to TikTok, but one that our team is really focused on, is thinking about fairness and bias, realizing that, you know, AI can inadvertently perpetuate biases that show up in our training data.

(19:23):

And so we work really closely with teams to develop systems to ensure that there's fairness in our systems and that specific communities aren't inadvertently impacted. So for example, whenever there is an issue where, you know, our systems struggle to determine what is hateful and what is reappropriated, we use our work with content moderation teams to address that. Doing that requires ongoing research, collaboration, and then, you know, thinking again about our community guidelines, making sure that we are leveling the playing field for all users by demonstrating our commitments. And I actually wanna just highlight really quickly that this week we updated and published our community guidelines and commitments. And so we are really striving to ensure that the public understands what principles are guiding and informing our work, especially when it comes to questions about bias and fairness.

(20:25):

The third thing I would say, again not unique to TikTok but definitely something we're focused on, is balancing automation with human oversight. So we know that, you know, AI-driven systems offer significant benefits in terms of efficiency, so again, using technology for scale. But relying on them solely in this content moderation context can lead to unintended consequences, right? And so figuring out how to strike the right balance between automation and human oversight is critical to making sure that the ways we deploy machine learning systems align with human values and societal norms. And those are ongoing challenges. I do wanna share really quickly as well, you know, one way that we are thinking about this and tackling it head on is to work really shoulder to shoulder with our content moderation teams by introducing frameworks and new mental models to help them understand context, no matter where it comes from in the world.

(21:33):

So what happens is we think about how our policies are applied, and let's use different body types as an example, right? So a machine model can make a determination; you know, we have community guidelines around adult nudity and sexuality and what is permitted on the platform. But in context, sometimes a human moderator might see a body that is slim, conventionally perceived as, I will say, normal, and then see a plus-size body, and both bodies are doing the same thing in the content. Maybe they're at the beach, maybe they're on vacation, and we'll make an incorrect decision. And so one of the ways that we are working on this is helping our moderators embody the fairness principle by deploying a training program that allows them to understand context a little better. And the additional benefit of that, and the way that it connects to improving our models, is that over time those human inputs begin to inform what the inputs on the machine side of the system look like.

(22:46):

So eventually we get to a place where, if we look at how our moderators are performing in their evaluation of different types of bodies, or of slurs versus reappropriated contexts, there is now information from human inputs that helps to improve the models and reduce some of those challenges.
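
(To make concrete the kind of feedback loop Toni describes, here is a minimal illustrative sketch, in Python, of how reviewed moderator decisions could be collected as labeled examples and periodically folded back into a model's training data. The data fields, threshold, and retrain_model hook are hypothetical assumptions for this example, not TikTok's actual moderation pipeline.)

```python
# Hypothetical sketch of a human-in-the-loop feedback cycle for a content model.
# Field names, the threshold, and retrain_model() are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ModeratorDecision:
    content_id: str
    model_label: str      # what the automated system predicted (e.g. "violation")
    human_label: str      # what the trained human moderator decided
    context_note: str     # e.g. "reappropriated slur, educational context"

def collect_training_examples(decisions):
    """Keep cases where human review corrected the model; these become new labeled data."""
    return [(d.content_id, d.human_label, d.context_note)
            for d in decisions if d.human_label != d.model_label]

def feedback_cycle(decisions, retrain_model, min_examples=100):
    """Fold corrected decisions back into training once enough have accumulated."""
    examples = collect_training_examples(decisions)
    if len(examples) >= min_examples:
        retrain_model(examples)   # hypothetical retraining hook
    return examples
```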

Krishnaram Kenthapadi (23:07):

Thanks, Toni, I think that's really very insightful. So a couple of questions that we see from the attendees are around fairness and safety, specifically around how you balance or make trade-offs between, say, societal harm versus engagement or growth. I believe this is not something unique to TikTok, but applies more broadly to all social media platforms, and possibly professional networking and similar platforms as well. How do you balance between harm and engagement, growth, and profit-related metrics?

Toni Morgan (23:49):

Yeah, that's a really great question. So the work that our team focuses on is thinking about, you know, how we get essentially a billion people to trust the way that we're using technology to make our decisions. And, you know, because our work is embedded within the trust and safety organization, at the heart of everything that we do is figuring out how we engender trust, and how we think about these trade-offs and balances. So I'll share what we're doing currently in our community, recognizing that this is an ongoing thing. There is no perfect solution, and there certainly is no panacea. But I think that with the establishment of our team and some of the work that we're doing, we are getting closer and closer to understanding what it means to create a trusted space for folks to create and thrive on the platform.

(24:39):

So one way is that, as I mentioned, we shared our community principles for the first time. I think that helps us bridge the gap of explainability: thinking about how our principles are used to improve folks' understanding of the ways that we're making decisions about content. Thinking about that internally also means that what we're communicating shows the inherent tensions in our systems that our moderators are grappling with every day to keep TikTok safe. So on one end, folks externally understand, you know, what are some of the rules, what are some of the ways that we're making decisions, and they realize that this is not a targeted approach against any particular group or community, but that it's grounded in a set of principles.

(25:29):

And not only are we grounded in those principles, but we're operationalizing them. Explaining how we're thinking about that, in the ways that we've published our community principles, helps folks understand what happens when you're faced with one piece of content about a plus-size body versus a slim body, or a community member who's using a term to educate others when they realize there might be some incorrect information about a community being shared, right? Another really crucial way that we're thinking about balancing these tensions is, frankly, what the previous speaker talked about: this idea of building clear boxes. You know, we have recently launched our transparency center, and every quarter we share data about our enforcement efforts, which keeps us accountable to our community as well as to others who have a stake in the safety of our platform.

(26:30):

So that is, I think, one way that we're thinking about managing that tension and building a conversation with the public and with those who might be thinking about what it means for us to serve such a large group of people using AI, recognizing that there are many ways things could go off the rails if the appropriate guardrails aren't in place. And then the final thing I'll say is we also provide users with the opportunity to appeal a decision. So in this idea of being transparent with our efforts, we're really striving to demonstrate to our users that when we notify them of an action that we've taken against their content or their account, we explain why we've taken that action and then give them the opportunity to appeal, so that there is a way to have a conversation back with us about the content on the platform.

Krishnaram Kenthapadi (27:33):

Thanks, Toni. So I think Ricardo also has some thoughts to add to this. Ricardo, would you like to share your thoughts?

Ricardo Baeza-Yates (27:43):

Yeah, because this is, I think, a key question: how do you do that? And I think we still don't know how to do this, how to basically balance the risks and the benefits. But we have taken some steps on that. I hope I shared the link in the chat. Basically, last October the ACM published a statement on responsible algorithmic systems, and there are nine principles. The first one, which is a new one that I pushed for, is called legitimacy and competence. Legitimacy means that you have done an ethical assessment, basically analyzing the risks, analyzing the opportunities, and you have decided that the benefits are much larger than the risks. For example, COVID vaccines would be a case like that, where there are some risks for some people, but in general, for most people, the benefit prevails.

(28:43):

And of course, you also need to have the competence, not only technical competence and competence in the domain of the problem, but also the management competence: basically, you have the permission of whatever institution needs to give you permission to do that. There are well-known cases in the world where a person did something they were not allowed to do, even a school director or even ministers. Among the nine principles are also contestability and auditability, like the last example from Toni, that people can contest decisions. But the main problem when you want to strike the right balance is that we are focusing on evaluating success, which is a big mistake. The example I always give is that if you go to an elevator and the elevator says it works 99% of the time, you will not take the elevator.

(29:41):

You need to see something like: the elevator doesn't work 1% of the time, and when it doesn't work, it stops. Then I take it, because I know I'm safe. The same with drugs, the same with food, the same with travel, and so on. So I don't care if your system is 99% accurate; what I really care about is what happens in the 1% where it fails. And today, basically all errors are weighted equally, in the sense that we weigh every error with the same value of one. But maybe I prefer a self-driving car that is slow and gets you everywhere late but doesn't kill anyone, over one that gets there earlier but kills a person. And this is what we need to work on today: how to include this kind of weighting scheme in models, where you trade off the harm to people against the profit for the company, and where you have some guardrails saying this is unacceptable, like in the EU AI Act, where some applications of AI are prohibited. So you need to limit harm to almost nothing, and then you can work on the other part, maximizing profit, but not by harming people. So this is something we need to do more research on; I don't think it has been done yet.
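
(To illustrate Ricardo's point about weighting errors by the harm they cause rather than counting them all equally, here is a minimal sketch of a harm-weighted error metric. The error categories, policies, and weights are invented purely for illustration; they are not from any real system or from the ACM statement.)

```python
# Illustrative sketch only: a hypothetical harm-weighted error metric, where each
# error type carries a different cost instead of counting every mistake equally.

def weighted_error(errors, harm_weights):
    """Sum the cost of errors, weighting each error type by its harm."""
    return sum(harm_weights.get(kind, 1.0) * count for kind, count in errors.items())

# Two hypothetical self-driving-car policies.
slow_but_safe = {"arrives_late": 10, "causes_injury": 0}
fast_but_risky = {"arrives_late": 1, "causes_injury": 1}

# Equal weighting (a plain error count) prefers the faster car...
equal = {"arrives_late": 1.0, "causes_injury": 1.0}
# ...while harm weighting makes the injury overwhelmingly more costly.
harm = {"arrives_late": 1.0, "causes_injury": 1_000_000.0}

for name, errs in [("slow_but_safe", slow_but_safe), ("fast_but_risky", fast_but_risky)]:
    print(name, "equal:", weighted_error(errs, equal),
          "harm-weighted:", weighted_error(errs, harm))
```

Under the equal weighting the faster policy has fewer errors, while the harm-weighted score makes the policy that never causes injury clearly preferable, which is the kind of trade-off Ricardo is describing.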

Krishnaram Kenthapadi (31:09):

Thanks, Ricardo. So let me move on to Miriam. Your podcast is very aptly titled "In AI We Trust." Could you share your thoughts on how far we are from developing AI that is trustworthy, or at least from knowing when it's going to fail, so that, as Ricardo pointed out, we can defer to human experts or have fallback mechanisms?

Miriam Vogel (31:34):

Thank you. That's a great question. And I've never had it tied to our podcast before, so I love that thought, and other people should listen to our episode with you for more great thoughts that you've offered on this topic. How long until we see trustworthy AI? The good news, from EqualAI's perspective, is that a lot of our work is with companies and senior executives who care deeply about this. They are doing this work; they're committed to this work. So I know from our extensive work with them to ensure that they are establishing and operationalizing best practices in responsible AI governance that trustworthy AI is out in the world, and it is in a lot of the AI systems we're using. But as you point out, the challenge is that we don't know where it is and where it is not. We don't know if it's continuing to be monitored as the AI iterates.

(32:24):

Are there ongoing systems in place to ensure that, and are there standards for what is trustworthy? Which is so hard because, as much as we need it in our own country, a guideline for companies within the country, you really need a global standard, because obviously AI has no borders. Companies are operating, and their products, their AI systems, are operating without borders. And so it's really unfair and not in our best interest to not be clear across the globe on what trustworthy is. So I do hope, and I am seeing, some progress and developments in that space. Again, I bucket this all as good AI hygiene, and if a company has an incentive to talk about and to do the work of building responsible AI governance, what they would be doing, from our experience working with companies, is establishing a framework: saying these are our AI principles, these are our values, and this is how our AI supports our values and doesn't impede them.

(33:32):

You would ensure there's accountability. You would make sure that someone in the C-suite is responsible at the end of the day for any big decisions or problems, so people know who to talk to and who has a hand in that decision. And then you need to standardize it across your enterprise so that everyone knows what the process is; they can build trust within your company, and then that builds trust with the general public. And so, you know, while external guardrails and regulation are so necessary, there's so much a company can do internally to communicate the trust that their AI systems can support. Another way to do that is documentation, which I think of as another really important step in the five-step process of good AI hygiene: document what you have tested for and how often you have tested, and make sure it's in a way that's translatable across the lifetime of that AI system's use. As well as auditing going forward: making sure you have a set cadence so people know when you're going to be testing, what you're testing for at those different iterations, and what you're not testing for.

(34:38):

So that if someone uses it and realizes they have a use case that you haven't thought about, you know, they can act on that on their own. They can see which populations are under- and overrepresented, and either take steps to accommodate them or make sure that people are on notice, so that AI medical systems for identifying healthcare issues don't give misleading results for populations that are not well established in the success rate of that AI program. So the short answer is we do have trustworthy AI, fortunately, in many places in our ecosystem currently. I think that the more there are clear statements, either a consensus across industry or, hopefully, government regulation and clarity, the more we will all take comfort that these are guidelines that are being followed. So in addition to my optimism with the companies that are doing this, we see steps on the global stage. For instance, the NIST AI Risk Management Framework was released in January. It is voluntary, it is law agnostic, and it is use-case agnostic as well. So if you want to follow best practices, you now have a guidance document that has taken into account stakeholders from across the globe, across industry and organizations, that can give you that level-setting document.
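
(As a loose illustration of the documentation and audit-cadence practice Miriam describes, here is a minimal sketch of what a test-and-audit record for an AI system might look like. The field names and values are hypothetical; they are not a template prescribed by EqualAI or by the NIST AI Risk Management Framework.)

```python
# Hypothetical documentation record for an AI system's testing and audit cadence.
# Field names and example values are illustrative only, not an official format.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AuditRecord:
    system_name: str
    tested_for: list              # e.g. populations, languages, failure modes covered
    not_tested_for: list          # documented gaps, so downstream users are on notice
    last_audit: date
    audit_cadence_days: int

    def next_audit_due(self):
        """Return the date by which the next scheduled audit should happen."""
        return self.last_audit + timedelta(days=self.audit_cadence_days)

record = AuditRecord(
    system_name="speech-recognition-demo",
    tested_for=["English dialect variation", "speech impediments"],
    not_tested_for=["non-English languages"],
    last_audit=date(2023, 1, 15),
    audit_cadence_days=90,
)
print(record.next_audit_due())
```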

Krishnaram Kenthapadi (36:08):

Thanks, Miriam. I see one of the attendee questions is very related; perhaps we have covered most of it. How should companies start, or continue, to build their responsible AI framework to follow various regulations and minimize risks for their customers? Perhaps the context is that many of our attendees are ML practitioners, so they're curious how they should go about ensuring that whatever they're building and deploying is compliant.

Miriam Vogel (36:37):

That's such a great question. Thank you for asking. Obviously the answer is: join the EqualAI badge program, so your senior executives can work with our AI leaders on a general consensus about what those best practices are. I say that tongue in cheek, but it really has been helpful for those trying to navigate what responsible, trustworthy AI looks like. Fortunately, we now have the NIST document, but it's still unclear across the globe. And so that's why we created this program, so that there's a group of leaders across industry working with AI experts to define what those best practices are and, most importantly, to talk about how to operationalize them. But in addition to the NIST document, we're fortunate that we're not in early days. We still have work to do; we have a long way to go.

(37:27):

But this is something that has been under discussion, and many companies, organizations, and governments have tried to establish what best practices are. So you can look directly at different organizations; as I mentioned, the EEOC and DOJ in the US, and the FTC as well, have given guidance on what they expect to be an unfair use or a false-claim violation. But you can also see that EqualAI has published articles, and the World Economic Forum has articles that synthesize what best practices look like, again getting back to that five-point good AI hygiene.

Krishnaram Kenthapadi (38:10):

Thanks, Miriam. Switching gears a little bit, Ram has a question for Toni. With the explosion of generative AI content on social media platforms, do you think the solutions you are proposing, for content moderation for instance, can keep up with it? What are the broader implications of generative AI content appearing on social media platforms like TikTok?

Toni Morgan (38:43):

So, one, thank you for your question. I think that currently, while our team doesn't work directly on issues related to generative AI, I feel it is safe to say that we're gonna face the same challenges as any company with AI systems, some of which I talked about earlier. And I think this is actually a really great opportunity to acknowledge that Fiddler AI is bringing experts together to have some of these discussions. There are many challenges that we could talk about, but I can't speak specifically to that question without also acknowledging that these challenges are not unique to us, and that, you know, we need to continue to have conversations with experts in forums like this about how to be prepared for what's coming down the pike.

Krishnaram Kenthapadi (39:50):

Yes, I think these are questions that are just emerging, right? With the abundance of AI-generated content, which until a few years back no one would have anticipated or considered a potential risk. And this has implications in all sectors, whether it is content consumption or notions of truth, notions of reality, and so forth. So maybe going back a little bit: one of the big concerns with large language models and generative AI models is the amount of resources they consume, whether in terms of environmental costs or financial costs. This is a question for Ricardo. How far are we from developing machine learning models that learn like humans? We all learn from a few examples; we don't need enormous energy. Our brain uses just a few watts of energy, and we don't need so much training data to learn from. How far are we from developing AI that learns like humans?

Ricardo Baeza-Yates (41:00):

Yeah, that's a great question. May I challenge the question? We are always asking how we can create things like us, when I believe we will create things that complement us. I'm not so worried about them being, for example, like real humans. I think that's one problem when we talk about AGI: I would prefer to say that AGI is something different, that we should define it as something different, its own intelligence, in quotes, that is not human intelligence. I think that has many advantages. One, people will be less afraid of AI. Second, we will focus not on replacing people, but on basically helping people to live better. So, for example, we will not be afraid of inequality if, in the end, AI allows all of us to do whatever we really want to work on, and we can contribute to society, instead of doing work just to survive, like billions of people in many, many countries.

(42:07):

So I will say that that's a much better goal to achieve than worrying about when AGI will be human. I would prefer that we work toward this different objective. And then I think we can exploit the advantages of these systems: they have amazing memory, they can analyze many, many alternatives much faster than us, but maybe they will never be human. And that's okay with me; I don't want to have a computer that's human. Also, I don't believe we can play God, and if we play God we are in trouble, because we know which species is contaminating the planet, and something smarter than us will figure that out right away. And I'm afraid of what that intelligence will do.

(42:58):

So I will say that we need to regulate, because of what I said about inequality of use and inequality of languages, and also the harm to the environment. By the way, one of the nine principles in the ACM statement I mentioned is to limit the environmental impact of these technologies, because I'm not so afraid of the amount spent on training these things; I'm afraid of 4 or 5 billion people using them at the same time. That's much more energy, much more than Bitcoin. So we need to do something. And then, if we really want these systems to keep learning, we need to make sure that there is not a negative feedback loop. For example, I think you asked Toni about this: if all the people on TikTok are using AI to create more TikToks, well, then the system will create TikToks based on its own output. So basically it's not learning from humans, it's learning from itself. And that loop, I don't know what the endpoint of that loop is, but basically it becomes something that's not human. Maybe it's funny.

Krishnaram Kenthapadi (44:08):

Thanks, Ricardo. I think there are a lot of things for us to think about. We're very close to the end of the panel. I would love for Miriam to share any closing thoughts or perspectives for us to take home.

Miriam Vogel (44:23):

I'm just so glad that people are jumping in here to ask these thoughtful questions and to figure out this really important work: navigating what it means to be trustworthy across the globe, where people have different expectations, but where we all want to make sure that the AI systems that have become an integral part of our lives, in such important and fun ways, are being used in the way we intend for them to be used, and that they're safe, so that we can all continue to benefit and not be excluded, discriminated against, or harmed in any way. So thank you all for tuning in and for the important work you're doing to make sure we have this better, safer, more inclusive AI.

Krishnaram Kenthapadi (45:06):

Toni, do you have any thoughts to add to that?

Toni Morgan (45:11):

Yeah, I would just say, one, it's really great that there's a forum here for us to have these conversations. I know that when we're thinking about how to operationalize a lot of these concepts, a lot of responsible innovation teams are looking at how to get ahead of some of the challenges. And so I'm really excited that we're having these conversations, and I would certainly encourage that we allow them to continue. In terms of the work that we do, we are embedded, as I said, in the trust and safety org, so safety is our number one priority, and we're gonna continue to think about that. And as the audience more broadly thinks about what it means to embed a framework of responsible AI in the deployment of their work, I would want to underscore that a lot of us are also thinking about this in the context of safety and harm, and making sure that, in the context of generative AI, rules on things like synthetic media, et cetera, are considered in the ways that we are protecting our communities.

(46:18):

And actually, one final thing I'll note: I know there was a previous question about how we do that work, and I wanna say that in our community guidelines we've recently updated some language around how synthetic media can be used. But again, we're in the early stages of a lot of these conversations, and so it's not enough for us to just think about how we're gonna update that and reflect it in our guidelines, but also how we operationalize it. So I wanna thank Krishnaram and the team for the invitation and the opportunity to talk about our work today.

Krishnaram Kenthapadi (46:49):

Thank you all for joining today, and we're looking forward to continuing the discussion in the Slack channel. There are a lot of questions that we didn't get a chance to discuss. Over to you, Mary.

Mary Reagan (47:03):

Yeah, so, gosh, what an interesting discussion. I just wanna thank Krishnaram for doing a great job moderating, and again, all of our amazing panelists with your wealth of experience. So thanks for sharing your time.