AI is moving fast. But are we really keeping humans at the center? In this special live Rapid Response, recorded on stage at South by Southwest, host Bob Safian sits down with AI scientist, founder of Affectiva, investor at Blue Tulip, and host of Pioneers of AI, Dr. Rana el Kaliouby. Rana makes the case that human-centric AI isn’t just a safety guardrail; it’s the key to thriving socially, economically, and emotionally. She also cuts through the noise on the buzziest AI myths, weighs in on AI in therapy and Meta Glasses, and draws a sharp line between AI founders who are truly visionary and those who are simply opportunistic.
About Rana
- Founded MIT spinout Affectiva; sold in 2021
- Host, Pioneers of AI podcast
- Blue Tulip Ventures co-founder & managing partner
- Harvard Business School executive fellow
- Fortune 40 Under 40; Forbes Top 50 Women in Tech
Table of Contents:
- Why emotional intelligence is the missing piece in AI
- What Rana's family reveals about AI and human connection
- How to build a human-centric future for AI
- Why the AI bubble may be hype on top of a real shift
- Will robots replace work?
- How AI changes the economics of creativity and originality
- Has AI outsmarted humanity?
- Why AI risks becoming a boys club and what that means
- Which human skills will become more valuable in an AI world
- What's the future AI-native device?
- The future of world models
- Where AI companions can help and where they should stop
- How companies can help their employees keep up with AI
- How to spot durable AI founders and demand better guardrails
- Episode Takeaways
Transcript:
Humanize AI before it dehumanizes us
BOB SAFIAN: Hi everyone. Bob here. Today’s episode is a special live recording from the stage at South by Southwest in Austin, Texas, featuring Dr. Rana el Kaliouby. Rana is a repeat guest on the show, an AI scientist, founder of Affectiva, investor at Blue Tulip, and host of the wonderful podcast Pioneers of AI. In this keynote conversation, we discuss how to keep AI human-centric, not only as a safeguard, but also to help us thrive socially, economically, and emotionally. We also play a game parsing fact from fiction on some of the buzziest AI myths, and we debate the merits and pitfalls of everything from AI therapy to Meta glasses, to which AI founders are opportunistic versus truly visionary. So let’s get to it.
[THEME MUSIC]
I’m Bob Safian, and this is Rapid Response.
Put your hands together for AI scientist, entrepreneur, investor, podcast host, and my good friend, Dr. Rana el Kaliouby. Isn’t this fun?
DR. RANA EL KALIOUBY: It’s so fun.
Why emotional intelligence is the missing piece in AI
SAFIAN: All right. So we’re going to talk today about controversies and opportunities in this moment of change. What’s real, what’s maybe not quite as real, what’s myth, and how to stay human-centric in all of that. I want to start with you, Rana, with your background, because your journey to this world of AI wasn’t exactly predestined. I think the first picture we have here is of you as a kid with your family.
EL KALIOUBY: Oh God.
SAFIAN: Rana, you grew up in Egypt and Kuwait. Yeah, you’re like, “Oh, look at me.” That’s Rana right in the middle there. Your father was quite strict and traditional. Your mother was one of the first female computer scientists in the Middle East. It sounds like it was a dynamic household. Out of that, how did you find yourself studying machine learning?
EL KALIOUBY: Yeah, I would say we grew up in a very tech-forward household. So my parents, my dad, as you said, is pretty strict, but he taught COBOL programming in the 1970s. It’s an obsolete programming language. Oh, some people recognize it. And my mom was one of the very first female programmers to sign up to take this class in Cairo, Egypt in the ’70s. So that’s how they met. Then we moved to Kuwait. And my earliest memories of my childhood with my two younger sisters was sitting around an Atari video console, video gaming console, I guess. Any Atari… Space Invaders, anybody? Ooh, okay, great. And for me, technology brought our family together. And so I think that’s been a common thread throughout my career. How can we build technology that brings people together versus isolates us or pulls us apart?
SAFIAN: Your studies took you from Egypt to London to then MIT, where you co-founded your company, Affectiva. And this is a journey that you capture in your book, Girl Decoded. I think we have a cover of the book. From the start, you were focused on the emotional context of AI, on being human-centric. Affectiva used machine learning, I hope I describe this the right way, to read people’s emotional states and sort of analyze nonverbal cues and things like that, sort of focusing on EQ as much as IQ. And I’m curious, given that background, when you look at what’s happening in the AI world today, how prevalent is that emphasis? Do the major players take EQ as seriously as they should?
EL KALIOUBY: The answer is no. But let me kind of unpack that. We’ve made a ton of progress in AI on the IQ front, on the cognitive abilities and the cognitive intelligence of machines. But to get to true artificial general intelligence, AGI, we absolutely need these technologies to have both emotional and social intelligence. And this is where I believe that the industry as a whole is really lagging, and it’s the next frontier to figure out this EQ. We need to marry the IQ and the EQ of machines. And if we look at human intelligence, of course your IQ matters, but your EQ matters arguably even more. People who have higher emotional intelligence are better leaders, they’re better managers, they’re better partners, they’re better friends. And I believe the same to be true for technology. And also, if you kind of consider how humans communicate, only 7% of how we communicate is the actual choice of words we use.
93% is nonverbal. It’s facial expressions, vocal intonations, gestures, body posture. And technology is completely oblivious to all of that. If you think about AI today, it’s mostly focused on what you’re saying, not how you’re saying it and what’s the context around it. So I believe this is going to be the next frontier of AI. AI ought to communicate with us the same way we communicate with each other, through conversation, perception, and empathy. But I also believe strongly that we only build what we measure for. And all of the benchmarks in AI today, they’re very IQ-focused. So I guess my call to action to the audience here and whoever’s tuning in and listening to this, we need benchmarks around the EQ of AI.
SAFIAN: And when you talk to your colleagues who are at some of these places, the hyperscalers and whatnot, and you raise this issue, are they like, “Yeah, yes, I agree?” Or are they like, “Yeah, yeah, yeah, but I don’t really buy it.”
EL KALIOUBY: I think there’s recognition that this is important, but I think it’s also a function of who’s designing these technologies. I mean, I’ll give one example. If you look at all the leading humanoid robotics companies, the robots are pretty impressive. They can unload your dishwasher and fold your laundry and, I don’t know, organize your living room. But I wouldn’t want any of these robots in my home. They’re big and scary, and they don’t really know how to interact with humans. So the teams building these things are really, kind of really obsessed about the functionality and they’re not really thinking about, “Okay, when this thing goes out into the real world, how’s it going to live with us?”
What Rana’s family reveals about AI and human connection
SAFIAN: Well, the next visual I have has a little bit about your life. It’s a picture, you’re a mom with two kids. Here you are with your two kids. And you were telling me that their approaches to AI are very different, that your son is kind of super enthusiastic and he’s using all the new tools and he’s doing everything. And your daughter is a little bit in the opposite direction like, “IRL, I want to unplug a little bit.” It almost sounds like your family dining table is a microcosm of the discussions we’re having in society at large.
EL KALIOUBY: It really is. This picture is from a number of years ago, so they’re a bit older now. My son is 17. He’s very AI forward. He’s actually my teacher in many ways. Even though I spend every day in the AI space, he’s always surfacing new tools. His latest project is using AI workflows to translate the diaries of Egyptian workmen from the 1930s who worked at the Giza Pyramids, and they wrote these diaries handwritten in Arabic with a lot of images and whatnot, and he’s using AI to translate them. And he’s actually running into obstacles because he’s pushing what the AI can do, which is really awesome and cool.
I love that he’s using it to advance knowledge and combine history and archival research and AI. So that’s Adam. My daughter, Jenna, is a food anthropologist. She just graduated in the spring from Harvard, and she does not use AI at all. And the project she’s working on is bringing, she calls it a cultural salon/cafe. So they bring young people, you don’t have to be young, they bring people to this space and they host book talks and poetry readings and embossing workshops and whatnot.
SAFIAN: Sounds old-fashioned.
EL KALIOUBY: But they’re packed every night and it basically tells you people are really longing for this in real life human connection. And so-
SAFIAN: Yeah, there’s a reason we’re all gathered here in this room.
EL KALIOUBY: Exactly, exactly. We’re not doing this over Zoom.
SAFIAN: Yeah.
EL KALIOUBY: And so I think both realities are true. We need to both at the same time lean into AI. I keep pushing her to at least try ChatGPT or something. And then at the same time, I think we should really nurture our human connection as well.
How to build a human-centric future for AI
SAFIAN: I mean, I was curious, you exited Affectiva in 2021. You’re an investor now, as I mentioned at Blue Tulip. But you’re also the host of this podcast, Pioneers of AI. Are these tools, between the investing and the podcast that you’re using to try to shape where AI goes from here? What is your goal in that?
EL KALIOUBY: Yeah. So Affectiva was my baby. It was literally my third child. It really was a big part of what I did and my identity. And so when I sold it in 2021, I spent a lot of time thinking about, what do I want to do next? And I kept coming back to this idea/question that we absolutely need to build a future of AI that is human-centric, that prioritizes how these technologies are going to affect our everyday lives and our relationships. And I mean, I believe that AI has massive economic opportunity. It really does. And at the same time, it has this opportunity to unlock human potential. So my point of view is that AI should not replace our abilities. It should really amplify and augment what we can do. And ideally, we can harness AI and use it to solve really meaningful problems facing society today.
So that’s kind of my thesis around that. And then I was like, “Okay, how do I shape that? How do I become a real player in that space, given my background too?” And I landed on three things. So one is investing. So kind of backing founders who are building these generational, category-defining, human-centric AI companies. Two is storytelling, amplifying the voices in AI that maybe you may not have heard from. There’s a very small set of companies that dominate the AI headlines, in my opinion, but there are a lot of innovators and thinkers and creators in the AI space. And I want to make sure that we’re a platform to tell their stories and be a door opener for them too. And the third one is a convener, which is why I like to do these things. I love bringing people together with disparate backgrounds and perspectives and just seeing what magic unfolds.
SAFIAN: You use this phrase about sort of humanizing technology before it dehumanizes us. And in the dialogue today about AI, I always wonder about for the practitioners, and you were one of the seminal ones, how much responsibility you feel like you have for what the future of this technology ends up being, and how deep is that conversation in that community as opposed to giving lip service to it, but I just got to get ahead of the company next to me?
EL KALIOUBY: I feel a very strong responsibility. And I would actually argue we all in this room have a responsibility as well because we get to vote with our feet which AI tools we’re using every day. Who’s getting the $20 a month subscription from all of us? And I think asking questions around, does this company care about the ethics of the technology? How is it being built? Are they thinking about bias, both data and algorithmic bias? Are they thinking about trust and security and privacy? Are they thinking about the use cases of this technology? Where should it be deployed and where should it really not be deployed? I think these are big questions that we all should be asking of the tools we’re using. And as an investor, there’s a set of questions. We have a rubric that we ask founders, and if the founders have not at all thought about it, if they’re not open, then we’re not investing in them.
SAFIAN: Yeah. I thought maybe we’d do something very human and we’d play a game, if that’s okay with you.
EL KALIOUBY: Okay. All right.
SAFIAN: So because there’s so much noise surrounding AI right now and so many myths, it’s sort of hard to know what to pay attention to. I think we all feel that. So this game is called fact or fiction, and I’m going to share a few video clips, some of which come from Pioneers of AI, the podcast, and each of them lead to a myth surrounding AI today. And I’ll be eager for your take about whether it’s mostly fact, mostly fiction, or somewhere in between. Are you ready? Okay.
EL KALIOUBY: Let’s do it.
SAFIAN: So let’s play the first clip.
Why the AI bubble may be hype on top of a real shift
AUDIO: Are we in an AI bubble? Of course. We’re certainly seeing lots of evidence of bubble-like behavior.
The excitement that the hyperscalers had kind of got away from them a little bit and it’s starting to face reality.
It has the world wondering if we’re about to see a big pop of the AI bubble.
Whenever this bubble pops, there’s going to be tens, if not hundreds of billions of dollars that will literally be incinerated.
SAFIAN: So Rana, of course, the first myth, we’re in an AI bubble. Is this fact or fiction?
EL KALIOUBY: I think actually it’s mostly fiction. Now, I believe there are signs of a potential bubble. For example, there’s the frothy valuation problem: a lot of companies are raising hundreds of millions of dollars at billion-dollar valuations, but they’re pre-product, they’re pre-revenue. That’s a red flag. And there are also some concerns around the circular money machine. You look at this handful of companies, they’re all investing in each other. They’re all buying chips from each other and then-
SAFIAN: NVIDIA gives money to OpenAI. OpenAI uses that money to buy chips from NVIDIA.
EL KALIOUBY: Exactly. You kind of wonder what is the net new value creation here? But the world I’m in every day, the ecosystem of founders building real products that are going to be transforming real industries and companies that are really trying to figure out how to bring AI to be more productive, this is real. And it’s very early days. So that’s where I focus my energy. And I think we’re in the very early days of massive, massive economic opportunities.
SAFIAN: And so I mean, a lot of those clips we saw were from investors. Maybe the investment marketplace, there might be some bubble in, which might be cautionary for all of us because we all have money in these companies now. But in the long run, you think the technology itself, we maybe are even undervaluing?
EL KALIOUBY: I think so, yeah. The technology itself is very early days, and the applications of the technology are very early days. We spend a lot of our time, our thesis is basically AI is transforming every industry and vertical, but we focus on three in particular. One is how AI is driving this health span revolution. So think about sensors, data, AI, and how that can advance healthcare in every aspect of it. The other is the future of work. So how can we employ and deploy AI, whether it’s physical AI or AI coworkers and agentic AI, to transform businesses and especially antiquated industries. Often they’re very boring and unsexy, but there are lots of opportunities there. And the last is sustainable living. How can we use AI to apply that to planet health, whether it’s food innovation, rethinking manufacturing, climate, energy?
SAFIAN: All right.
EL KALIOUBY: Yeah.
SAFIAN: All right. So are you ready for another myth?
EL KALIOUBY: Okay.
Will robots replace work?
SAFIAN: All right. This next video is from the Pioneers of AI show. So let’s see the next one.
VINOD KHOSLA: Somewhere in the early 2040s, we will get a billion bipedal robots. They will do more work than all of humanity does today. Now, people are terrified that these jobs will get displaced and they should be.
SAFIAN: Okay. Such a happy thought from Vinod Khosla there, legendary tech investor. So the myth here is that the robots are taking over. So how real is that?
EL KALIOUBY: I mean, Vinod’s legendary, and he’s obviously been super successful. I think he was one of the first investors in OpenAI, actually. But I kind of disagree with his point of view a little bit. I don’t think robots are taking over in the sci-fi movie kind of Terminator kind of way. I do think robots are going to take over a lot of jobs, often like repetitive, mundane, even dangerous jobs. We’re looking at a company that’s using them, they’re building humanoid robots for ship welding.
SAFIAN: Ship welding.
EL KALIOUBY: Yes.
SAFIAN: Yeah.
EL KALIOUBY: And it’s a very dangerous job as it turns out, and there’s not enough humans who even want to do it. That’s a perfect job for a robot to take on. So I think there’s going to be a lot of that. But again, if you take a human-centric angle to that, we want the robots to take over the tasks that we as humans probably don’t want to do. And yes, that will mean we’ll have to think about what we want to do and what does that look like?
SAFIAN: But it doesn’t necessarily mean that we should be threatened by these robots.
EL KALIOUBY: I don’t think so.
SAFIAN: All right. Well, that’s reassuring. All right. Let’s try the next myth. Can we play, this is another Pioneers of AI video? Let’s play that one.
How AI changes the economics of creativity and originality
MARK CUBAN: People don’t realize that IP gets more valuable in an AI world because if a foundational model is not trained on that IP, it’s behind. And trying to make decisions about whether or not you publish the work you do because you want the accolades, that’s the exact wrong way to do things now. Maybe you don’t want to patent it because the minute you publish it, every model is training on it.
SAFIAN: All right. So our good friend, Mark Cuban. So basically the myth is that AI is bad for creators. Mark is kind of arguing that it’s not. It’s good for creators. Where are you on this?
EL KALIOUBY: I don’t think AI is inherently bad for creators. I think AI is reshaping the creator economy. When I put my positive hat on, AI is also democratizing access to creation. I have zero graphic skills. I can create videos and content, and I think it lowers the barrier to content creation. But that also means, I think, that there’s going to be a premium on human originality and human perspective and lived experiences. And how do you encapsulate all of that? Because AI is not going to have that; that’s not going to be a differentiating factor.
SAFIAN: I mean, it’s almost sort of the definition of progress in some ways. The floor goes up, but that doesn’t mean the ceiling doesn’t go up also, which is where I guess the best creators will end up.
EL KALIOUBY: Yeah, I love that. I love that.
SAFIAN: All right. We have two more myths. Let’s play the next video also from Pioneers of AI.
Has AI outsmarted humanity?
ARIANNA HUFFINGTON: Humans will never be more intelligent than AI, which is an incredible opportunity to realize that we are not defined by our IQ. Let AI be more intelligent than humans and let humans be wiser than AI.
EL KALIOUBY: I love Arianna. She’s just so cool.
SAFIAN: So the myth here, and Arianna Huffington is talking about it there, AI is on course to outsmart humanity, which she thinks is a good thing.
EL KALIOUBY: It’s okay, yeah. I agree with Arianna. Her point of view is basically, yeah, let AI be smarter than us, but let us kind of, she uses this term like AI can be the GPS of our soul. How can we use-
SAFIAN: The GPS of our soul.
EL KALIOUBY: Yes.
SAFIAN: Wow.
EL KALIOUBY: Yeah. Basically, we are in this moment of time where we can use AI to double down on what makes us uniquely human and tap into our intuition and this kind of wisdom and intelligence that AI doesn’t have. And I really like that. So my book’s called Girl Decoded, and it was very much about how to bring emotional intelligence into machines. And I learned a lot from that whole journey about my own emotions, but I keep wondering, my next book should be called Girl Embodied. And it should be about our intuitive intelligence, like our body intelligence. When you get goosebumps, that’s a signal. When you have this gut feeling, we’re so disconnected from that type of intelligence, but it’s true intelligence. And I think our opportunity as humans in this age of AI is to really double down on that.
SAFIAN: Because in the tech world, because we can measure other things, you were alluding to this before about IQ, because we can measure this, this becomes the definition of intelligence. As you sort of look at it, you’re like, “Well, maybe not really.”
EL KALIOUBY: There is a different form of intelligence that technology has no access to right now that we’ve lost access to as well because we’re always rushed. We’re in this world where we’re glued to our screens. I don’t think we’re in touch. I’ll speak for myself. I don’t think I’m always in touch with that kind of intuitive intelligence. It’s not easy for me to access it unless I spend a lot of time meditating. So anyway, I’m on a journey to tap into that intelligence. I think it’s really important.
SAFIAN: But it sounds a little bit like for all of us in some ways, as AI takes on more of the intelligence that maybe culturally we have emphasized that we all should be working a little harder to tap that other piece of intelligence.
EL KALIOUBY: The inner kind of wisdom. Yeah.
Why AI risks becoming a boys club and what that means
SAFIAN: All right. One more myth. This is a constellation of recent headlines from TechCrunch about new AI startups. And there is something in common about all of these folks. They’re all men.
EL KALIOUBY: Yeah.
SAFIAN: Yeah. I mean, you and I were having a conversation at one point and you kept pulling these up and you were kind of animated. And I guess, so the myth here is, is AI a boys club? And is it? Is that a fact? Is that fiction?
EL KALIOUBY: That one is not a myth. That one is like… There’s no mostly about it. Yes, I think AI today is a boys club, and I think diversity is not a very popular conversation topic these days, but I think it’s so important because AI is creating incredible economic opportunity. And if women are left out because they’re not founding these companies, because they’re not getting the funding, we’re going to look back five years from now or a decade from now, and we’re going to have widened the economic gap like crazy. So this is something that really concerns me. It’s why, again, three out of my four investments out of Blue Tulip Ventures are women CEOs. I don’t just invest in women, but I really try to seek out these women founders and support them, if not with a check, then in other ways as well.
SAFIAN: Because they’re not getting the opportunity that they should and that they need to.
EL KALIOUBY: Oh, thank you.
SAFIAN: Rana is so logical in how she describes the impacts of AI across all kinds of areas, even ones that are emotional. So what kind of role does being human-centric play in AI safety? And what human skills should we prioritize in an AI world? We’ll talk about that and more after the break. Stay with us.
[AD BREAK]
Before the break, AI pioneer Rana el Kaliouby parsed the facts and the fiction in today’s reigning AI myths. Now, Rana takes questions from the South by Southwest audience about the key skills needed in an AI world, whether Meta glasses will be the tech form factor of the future, and the role of AI therapy and AI companions. Plus, will we all have a digital twin and more? Let’s jump back in.
Which human skills will become more valuable in an AI world
Because you’re focused on the human-centric part of this, you really wanted a lot of this session to come from the humans in this room. So actually, I have some questions that I’m going to read to you. This is a question from anonymous. Anonymous, thank you for your question. Which human skills will become more valuable in an AI-driven world and how should individuals start developing them today?
EL KALIOUBY: Ooh. I think collaboration. Whether you’re collaborating with humans or machines, that’s going to be really important. I think communication is going to be really key as well. I think we’re actually all increasingly attuned to stuff that’s written by AI. We can probably all discern that. And so I think being a great communicator and an original communicator is really key. And then I still think we’ll need a lot of critical thinking and creativity. Yeah.
What’s the future AI-native device?
SAFIAN: Here’s a question from Sophia out there. She asked whether you use Meta glasses or what AI native devices are you looking at?
EL KALIOUBY: So we have Meta glasses at home. We have a couple. I don’t use mine. Adam, my son uses his a lot. We will literally be walking down a street together and I think I’m talking to him, but he’s got his glasses on. He’s listening to music and he uses it. I mean, it’s still very early days for these glasses. I would say they’re not really AI native yet, but I was at their annual event last week and they were kind of unveiling the visual intelligence capabilities that they will add to that. More broadly though-
SAFIAN: Yeah. Do you think those are the kinds of devices that we’re going to be interacting with AI through?
EL KALIOUBY: Yeah, I think this is one of our investment theses. We are using AI on pre-AI devices right now. A smartphone is not an AI-native device. And so we are on the lookout for founders who are building these AI-native devices from the ground up, so hardware and software. And our thesis there is that it has to be perceptual, it has to be conversational, it has to have empathy, it has to have context, it has to have memory, it has to be ambient.
SAFIAN: Yeah. I don’t know what that is yet.
EL KALIOUBY: Yeah, we don’t. Yeah. And a lot of the big AI labs are investing a lot of money trying to build something. And I don’t know if it’s going to be, is it going to be glasses? Is it going to be a wearable pin?
SAFIAN: Because everyone wants to own the next phone. That’s what it is because it’s so much money, but we don’t really know what that is.
EL KALIOUBY: Exactly. There’s a lot of-
SAFIAN: Or if it’s just going to still be our phone.
EL KALIOUBY: Yeah. There’s a lot of experimentation on what the form factor will look like.
SAFIAN: Yeah, yeah.
EL KALIOUBY: Yeah, so we’ll see.
The future of world models
SAFIAN: So there are a handful of questions that are around a particular theme about world models. What is a world model and how is it different from a large language model? The word sounds very generic to some people.
EL KALIOUBY: Yeah. There’s been an evolution actually in these foundation models. They started off being very language focused, think ChatGPT. And then they became more multimodal, so now they can deal with images and video. They can both ingest images and video and generate images and video and voice. So they’ve become multimodal, but they’re still not rooted in the real world. And to unlock physical AI, so AI that is, like robotics is one example, or an AI native device is another example of a physical AI, to unlock that, you need AI that understands how the real world works, the physics of it. It has spatial capabilities. So that’s what a world model is. It’s the equivalent of a large language model, but kind of rooted in the real world, in physics.
SAFIAN: So instead of feeding it the data, whatever, scraping all the information that’s on the internet, it has to be in a room like this and get all of the signals from all of the things that are in this room right now.
EL KALIOUBY: Correct. And actually, you know how this-
SAFIAN: That sounds like a lot more information it needs.
EL KALIOUBY: It’s a lot more information. It’s a lot more complex. You know how with the large language models, there’s these companies that train the bots, basically train these AI models. They generate a lot of text and they red team the text and whatnot. We’re starting to see companies that are doing the same, but in the real world. So you literally strap on a camera and you’re paid to walk around your house or your work or the streets and all of the data you’re capturing then becomes input data to these world models. So it’s-
SAFIAN: I’m like one of those cars gathering information for Google Maps. Is that what-
EL KALIOUBY: Exactly, exactly. But now it’s like people in their kitchen washing the dishes. That’s all incredible training data for a robot that will eventually do this job.
Where AI companions can help and where they should stop
SAFIAN: All right. Let’s go to the next question. The next question is, oh, this is interesting. This is from Caroline about what your opinion is about using AI for therapy. I mean, there is this discussion about AI therapy, AI companions to replace human relationships. What is our emotional relationship to AI? What would be healthy about that?
EL KALIOUBY: I think there is room for AI to be a therapist, to be kind of a supportive companion, but I feel very strongly that it should not take the role of an actual human. Yeah, thank you. Yes, I feel very strongly about that. But there is a value proposition in having something that you can turn to when you’re up at 2:00 in the morning and you really want to run something by someone, you’re ruminating on an idea and you’re really struggling. That could be very supportive. But I think there needs to be human oversight and a human in the loop. And there are unfortunately very, very few guardrails being built into these models that protect us when we’re using these models and not…
You’ve probably unfortunately seen a lot of very sad news where young people are using these ChatGPTs and other AI technologies and they end up harming themselves. So I think that is something we don’t talk about often. A good friend of mine, Eric Cohen, I’ll give him a plug, he’s building AI safety guidelines and measures so that, again, we need a benchmark for that. We need to be able… Every time we release a model, we should really test it against these safety guardrails to see if it passes or not.
SAFIAN: I mean, I was having a conversation with someone about this and I was like, so at some point you have your AI bot with ChatGPT, but sometimes it’s going to be like you’re going to have a shopping bot that you have a relationship with at Walmart or whatever. But you could end up having a conversation with that bot that’s about your emotional state. It’s like, is Walmart going to train its bot to worry about whether… Do you know what I mean? Because you can ask that bot anything.
EL KALIOUBY: Anything, correct. There has to be these guardrails and these roles that are well-defined for these bots.
SAFIAN: And between here and there, are we going to continually have inadvertently like, “Oh, sorry, my bad. I didn’t realize people were using my app for that or my bot for that.”
EL KALIOUBY: That’s why I think anybody deploying AI should really be testing against these AI safety guidelines. Now, this is a whole different question than should we have AI friends and AI partners. And a lot of people feel lonely. And to have something that is there for you 24/7, is very patient, there’s something to be said for that, but it does say something about us as humans.
How companies can help their employees keep up with AI
SAFIAN: Yeah. All right. I’m going to go to a little bit more business-y question here. This is from Krika. I hope I’m pronouncing your name right. Current skillsets are disappearing faster than new skills are emerging. So what should a human-centric organization do to help their employees keep up? I mean, these tools, new tools, new models, they seem like they’re coming out every day. How is anyone expected to keep up with it all?
EL KALIOUBY: Yeah. I would say organizations should really encourage their team members to lean in and try these new tools. Even if it’s not going to be perfect, even if there’s going to be mistakes and hiccups, I think it’s important to lean in. So at our fund, we’re a very small team. We just implemented a chief of staff AI agent. We just named it Blu, B-L-U. And this thing, it does a lot of research on our behalf. It kind of updates our CRM. It does all these auto tasks that, again, you don’t necessarily want to be spending a lot of time on it, but I think it’s made me think about, “Ooh, what are we doing with our junior team members?” And I think young people or all these junior roles are going to be redefined and they’re going to have to incorporate AI in what they do. I think we’re all going to have to incorporate AI in what we’re doing if we haven’t already.
SAFIAN: I mean, I had discussions with two CEOs over the last week. One, Julie Sweet, the CEO of Accenture, the other Matthew Prince, the CEO of Cloudflare. And they both sort of said the same thing, which is the people at the very top of their organization get it and are using AI. And at the same time, they’re eagerly hiring young people out of school in bigger numbers than expected because those folks are AI native. But the group in the middle, they’re really kind of worried about.
EL KALIOUBY: Well, I think the reality is all of our workflows are changing. And so you have to be really open to reimagining what these workflows look like. And it’s going to be a human-AI collaboration. So one of the interviews we did for Pioneers of AI was with Evan Ratliff, who has a podcast series called Shell Game. And he started a company with two co-founders, Kyle and Megan, and they’re both AI agents. He’s the silent co-founder, and Kyle is the CEO, and Megan is the CMO. And that was fascinating. I interviewed both Evan and also Kyle, who’s a very tech bro CEO.
I interviewed the AI on Zoom, and Kyle was like, “Yeah, I’m a rise and hustle and grind, whatever, blah, blah, blah.” And then he was like, “But on the weekends, I love hiking.” I’m like, “Kyle, you don’t even have legs.” And he was like, “Well, I live vicariously through other CEOs.” But it was fascinating. And I think we are going to… And Kyle shows up to investor meetings. They will literally send Kyle to meet with investors. And I wonder if that’s going to be our world.
SAFIAN: Wow. That’s an intense future though. I mean, I wonder, you and I both, we host podcasts. Our colleague, Reid Hoffman also hosts a podcast, but he’s created this avatar of himself using AI. We could create avatars of ourselves. We don’t even need to be here. We could have our AI version. Is that something to aspire to?
EL KALIOUBY: Well, again, I think about it as augmentation. I don’t want my digital twin, which I have, but I don’t like her because she doesn’t have my smile and she doesn’t have my energy, so we’re working on it. But I wouldn’t want her to be here because I love this, but could she go to China and speak in Mandarin? Amazing. That would be awesome. So I have her speaking-
SAFIAN: Your Mandarin is not that sharp.
EL KALIOUBY: I have zero Mandarin, unfortunately. So I think that’s an opportunity. Where can it augment what I can’t do? Now, there’s a whole bunch of questions around IP and what if it answers a question not in the same way I would? How do I trust this digital twin to be out in the real world on my behalf? We’re not there yet.
How to spot durable AI founders and demand better guardrails
SAFIAN: We’re running out of time, but let me get one question here from Hector who asks, what pattern separates the founders who are building something that will matter in five years from those riding a hype cycle?
EL KALIOUBY: Yeah, because I’ve been in this space for over 25 years. I can separate signal from noise and there’s so much noise. Every company we get pitched by is an AI company. And within three questions I’m able to tell, are they really building something that is defensible? And defensibility has taken on, I think, a new kind of depth in this world of AI because you can be defensible today. And literally by the next version of Anthropic’s release or Gemini’s release or whatever, you’re obsolete as a company and as a technology. And so we really dig into how defensible is this technology not right now, but in the next year, in the next five years? Five years is too long actually to predict, but-
SAFIAN: You can’t see that far.
EL KALIOUBY: Yeah. But defensibility is a real thing. And also, how complex is the problem you’re solving really? And again, back to the IP, what kind of IP or moats do you have around what you’re building? And we spend a lot of time poking that.
SAFIAN: So as we wrap up, for those in the room, what can they do to help build a human-centric future? I mean, how much should we engage with AI like your son does? How much do we safeguard like your daughter does? And how much do we just have to roll with the tides and deal with whatever comes our way?
EL KALIOUBY: I would say lean in. I think it’s important that we are all adept and be playful about it. I think there’s a curiosity and a play mindset that we can bring to the table where we’re experimenting and kind of pushing the boundaries of what is possible. But I also feel strongly that collectively we need to be vocal about, yeah, there ought to be guardrails in these models. There ought to be benchmarks around effects on the environment. We’ve had several conversations on Pioneers of AI where we’ve hosted people who care about that and are trying to solve, build benchmarks where we can really get a sense of, okay, how bad are these… Every time you ask ChatGPT for an idea for what you want to have for dinner, what is the effect on the environment? So I think we just need to be vocal about what is… We need to ask for more transparency around how these models are built, how they’re validated, where they’re being used. It is moving very fast.
SAFIAN: Yeah. I mean, I guess that’s part of what the appeal of this technology is, that to keep human, we have to use these tools to be able to be human-centric. Well, Rana, as always, it’s great to talk. Be sure to subscribe to Pioneers of AI and Rapid Response.
EL KALIOUBY: And Rapid Response.
SAFIAN: Yes. And finally, a warm thank you to Rana el Kaliouby. Talking with Rana, you can’t help but think about what’s our role in shaping the future of technology. It’s easy to bemoan the state of the AI industry or critique the hyperscalers, but building new technology is a human endeavor and it’s not too late to take agency over what we build. As Rana says, we can vote with our feet, choosing what AI we use and pay for and in what ways we use it. There’s no reason to abdicate responsibility because what comes next is very much up in the air. Thanks to the team at South by Southwest for inviting us on stage and thanks to all those who brought their human selves into the room to join us. I’m Bob Safian. Thanks for listening.
Episode Takeaways
- Live from South by Southwest, Bob Safian and AI scientist Dr. Rana el Kaliouby argue that AI still excels at IQ but badly lags on EQ, empathy, and human context.
- Drawing on her own family, Rana says the AI future is already here in miniature: one child pushes the tools to their limits, while another craves more in-person connection.
- In a brisk fact-or-fiction round, Rana calls the AI bubble only partly real, says robots should take grim or dangerous work, and sees AI raising the floor for creators.
- Asked where humans still have the edge, Rana points to collaboration, communication, creativity, and wisdom, while warning that AI remains too much of a boys club.
- On everything from world models and Meta glasses to AI therapy and digital twins, Rana urges people to lean in with curiosity while demanding real safety guardrails and transparency.