Our CEO, Kanjun Qiu, went on *The Verge*'s Decoder podcast to discuss the question hanging over the AI industry today: whether it will resemble the more open, user-centric vision of the early internet or the closed, walled-garden approach of the social web. Listen to the full episode here.
Hayden Field (00:00) Hey there, and welcome to Decoder. I’m Hayden Field, Senior AI Reporter at The Verge and your Thursday episode guest host. I’m subbing in for Nilay while he’s still out on parental leave, and I’m excited to keep diving into the good, the bad, and the questionable in the AI industry. It’s been a very big news week in AI, and a lot of it had to do with OpenAI. The company hosted its annual DevDay in San Francisco on Monday, and I’m still here in person covering all the news.
They announced a bunch of ChatGPT product features and new agent tools, and executives also laid out a pretty bold vision for the future of AI. At the same time, the new Sora iOS app has shoved AI-generated video into the mainstream, creating all sorts of unintended consequences and even surprising OpenAI CEO Sam Altman, who’s become the face of Sora memes across the internet. And earlier this week, the New York Times published a great story about how AI-powered job screening has become so prevalent that applicants are starting to sneak hidden messages to chatbots inside their resumes, effectively trying to prompt-inject the automated job screening process for a better chance at an interview.
I brought in Kanjun Qiu, CEO of the AI startup Imbue and a close watcher of the industry, to help me break all this down. Kanjun has been both a tech founder and an investor, and her perspective on AI and the broader tech industry is a unique one. So I wanted to chat with her about this week’s biggest AI stories to break down what’s really happening, why it’s happening, and the societal implications of it all. OK, Imbue CEO Kanjun Qiu on the good, the bad, and the questionable in the AI industry this week. Here we go.
Kanjun Qiu, CEO of Imbue. Welcome to Decoder.
Kanjun Qiu (01:53) Thanks Hayden, I’m glad to be here.
Hayden Field (01:56) So let’s jump right into our first story of the week. Yesterday, at its annual DevDay event, OpenAI announced ChatGPT apps, making apps available within ChatGPT, and developers can build them. We saw it launch with a bunch of them: Booking.com, Canva, Coursera, Expedia, Figma, Spotify, Zillow, and soon DoorDash, OpenTable, Uber, and Target will be in there too. Right now, that obviously means you can, as a user of ChatGPT, ask one of these apps to do something for you within the ChatGPT interface. I want to know if you saw that announcement and how big of a deal it was to you. Did you think it was the biggest announcement of the day, or just a small incremental step forward?
Kanjun Qiu (02:42) I think that this is all to be expected. This is kind of where the entire agent ecosystem has been heading. And the way I think about it is, it’s kind of like the iOS of AI: OpenAI is trying to build this single interface, ChatGPT, that becomes the way you as a user get into all of these different apps. And I think that’s a big deal.
And I think that there are going to be really major implications for the power dynamics of AI going forward from that kind of approach.
Hayden Field (03:17) How so?
Kanjun Qiu (03:19) Yeah, I think right now we’re at a crossroads: whether AI becomes another walled-garden platform situation, just like the internet has been for the last 10 years, or whether we end up having AI that actually democratizes and decentralizes power the way the original internet or the personal computer did. Right now we’re in this kind of platform network-effect ecosystem, where you have these AI model providers, and they’re providing something like this that lets you integrate with a lot of other apps through partnerships. And what that means is that, just like with Apple and iOS, they’re kind of trying to lock you into their platform ecosystem. There’s a really good talk from Cory Doctorow on what he calls enshittification, which is: platforms start out doing great things and being really useful to their users, like Facebook. Then, over time, they lock in their users, and we end up at their mercy, unable to exit.
So we’re really in a place right now where the question is: do we control AI, or does it control us? Do we own it, or do we just rent it from these centralized platforms? I think there’s a risk today of people losing the ability to shape the digital systems that control our lives. You can see it a little bit in how we relate to our digital devices right now. Like, my phone: I feel a real aversion to my phone. I need to detox from my phone or my TV or my computer. That’s a really weird relationship to have with an object in my life. If I had to detox from my sofa, I would get a different sofa, right?
But we don’t think about that. So why do we have to detox from our devices? It’s because our devices are full of things that are built by other people, with incentives that are not necessarily aligned with ours. And as AI gets more powerful, those incentives will become more powerful as well. So we want to be in a different world. That’s the risk I see. I think this is great, really cool, very empowering. And also, we’re at a turning point in time right now.
Hayden Field (05:34) That makes sense. I like what you said about the crossroads, about whether it becomes a walled garden. I also wanted to ask about down the line. At the moment, OpenAI is controlling distribution for these apps; only large businesses are really being allowed into this space right now. But down the line, the current vibe is that anyone, any developer, will be able to build an app and maybe have it integrated within ChatGPT. So what happens when this gets opened up like that? Do you think controls and safeguards from OpenAI will be enough? You got at this a little bit a minute ago, but I’d love to hear your thoughts.
Kanjun Qiu (06:09) Yeah, I think that part is going to be quite similar to what we saw in the last 10 years with the mobile ecosystem, iOS and Android: lots of people building quote-unquote apps and getting distribution through the platform. And that’s not a problem. I think there will be plenty of safeguards, and OpenAI has a great incentive to try to make a really healthy ecosystem, with good apps that don’t cause people problems. However, it doesn’t solve the control problem: do we actually have control over our digital environments?
Hayden Field (06:40) Right. Let’s talk a little bit about Imbue’s approach to AI agents compared to what we saw yesterday. We already touched on this, but you guys are all about the decentralized approach. How does that play into everything we’re seeing elsewhere? And tell us how you’re hoping to do things differently.
Kanjun Qiu (06:54) I think it’s a really hard problem. The moment in time we’re at, where we are in our digital environments today, is what I said about our phones and our devices: we’re really renting these apps. We don’t own them; somebody else is making them, and we’re using them. And that means there are incentives, where the people who make them want to make money off of them, and that’s generally okay.
The way we really get into trouble is that these devices, these digital systems, have very fine-grained controls that capture our attention. They notify us, they’re embedded in our workflows, and because other people are controlling these apps, it’s also easy for them to start controlling us. So I think that right now, today, we have a window of opportunity to really change this dynamic of control. In the past, the reason we’ve had this dynamic with software creation is that it’s been really hard to make software: you need to hire developers, who are really expensive, pay for the development, and then sell the software for profit. But we’re starting to get into a world where, as OpenAI’s DevDay demonstrated, you can generate apps and other things using natural language, and these language models are starting to get really good at writing code. So in theory, we could go into a world where a lot of people can write code, and not just write code, but take a piece of software and change it to make it fit for themselves.
An example of that is a doctor dealing with an electronic medical record system where they have to put in all these fields that aren’t suited to them, and a lot of doctors experience burnout from that situation. In theory, in this future world, they could just have a system that helps them change the software at the point of use. We call this modifying at the point of use. I, as the user, know what’s best for me, I know what I need, and I can change my digital environment right in the moment so that it suits me better, instead of suiting whoever designed the software. However, that requires inventing a few different pieces. The moment we’re in reminds me of the 1960s. In the 60s, people were really excited about supercomputers. They thought everyone was gonna time-share on supercomputers through these terminals. Like, we’re all gonna be at terminals, time-sharing, and using supercomputers is gonna be super cool.
And then in the late 60s and 70s, a small group of people at SRI and Xerox PARC invented the mouse, the GUI, files, folders, windows: everything that allowed us to make personal computers, and that led to personal computing today. What that did is make the technology’s capabilities more accessible, so that people could use computing for themselves and modify their computers for themselves. And I think we have an opportunity to make the same kinds of inventions for the AI ecosystem.
Hayden Field (09:49) Got it. That makes sense. Back to yesterday at DevDay: they made a lot of agentic AI announcements. Obviously, this plays into the broader agent goals of OpenAI and AI companies in general. But I would love for you to chat a little bit about how their agent updates differ from the decentralized approach you’re taking. One corporate example they used was the grocery chain Albertsons using OpenAI’s Agent Builder to ask why ice cream sales were down and to create a plan for getting them back up again, using its own data and its own traditional marketing. So what did you make of those announcements, and how do they play into what you were just getting at, in terms of changing things at the point of use on your own devices?
Kanjun Qiu (10:36) There’s a difference between empowerment and control. AgentKit is super empowering: I can now do things I couldn’t do before, and that’s super cool. All technology is empowering, you know? It means I can build apps, I can answer questions I couldn’t answer before. All of that is really exciting. The difference is that with OpenAI’s system, you can build apps, but only within their platform. Your agent runs on their servers, it follows their rules, it pays their API fees. You get distribution to their billion users, but you risk losing control over what you’re trying to do. One kind of silly example: at Imbue, last Tuesday, we actually launched Sculptor, which is a tool for building software using coding agents. It’s a Mac app, and when we were trying to register with the Apple App Store, Apple banned us, even though we are a legitimate company. It took us several days and pulling a lot of internal strings at Apple to figure out how to unban ourselves. It’s a trivial example, without huge downside effects, but it shows a way in which we have lost power relative to a centralized platform. And so the question is: what would it look like if you controlled AI, instead of platforms controlling AI and you using it through their platforms?
Hayden Field (12:05) We need to take a quick break. We’ll be right back.
We’re back with Imbue CEO Kanjun Qiu, talking about the biggest AI stories this week. Before the break, Kanjun and I were discussing OpenAI’s big DevDay announcements. Kanjun’s perspective was that we’re at a crossroads in the AI industry, between an open, user-controlled vision for AI and the more walled-garden, platform-centric approach that dominated during Web 2.0. But now I want to ask Kanjun about the other major OpenAI story going on right now. That is, of course, the Sora app.
Let’s get into our second story of the week, which is Sora 2, and AI slop in general. Obviously, over the last week, you, I, and everyone else we know have probably seen AI-generated video blowing up. It’s because of Sora, OpenAI’s new iOS app, which, though it’s invite-only, has started flooding social media feeds with videos of Sam Altman, Nintendo characters, Nickelodeon characters, and even some unsettling political commentary. So I wanted to see: do you have access to Sora? Are you using it? What have been your thoughts so far on the first week of this strange new platform?
Kanjun Qiu (13:19) I think Sora shows what happens when AI capabilities meet platform incentives. It’s really interesting. Platforms have the incentives we talked about with social media: they optimize for engagement and attention capture, because that’s what allows for making money, profit, ads, et cetera. Generally, this is not a huge problem until you take it too far. It’s actually a perfect case study when we talk about the shape of tech power. My friends at OpenAI talk about how it’s actually cool: they’re trying to create a video generation app that creates joy, that gives people moments of entertainment. From my perspective, that’s a great thing. But anytime you’re trying to create joy and also maximize engagement to maximize profit, I think it’s very difficult to trust. So what I would expect to happen is that this platform engages a lot of people, gets a lot of users creating lots of videos, and there will be people showcasing videos and how cool what they made was, and that’s not such a bad thing.
However, over time, a lot of people will feel about this the way they feel about TikTok: hey, three hours went past, and that’s not what I intended. What happened? And it goes back to this idea of control. We’re in a world where our digital technology controls us, and not the other way around.
Hayden Field (14:42) That makes sense. I was going to ask you whether you think OpenAI is trying to cash in on the entertainment side of generative AI with Sora, whether it’s just a money-making effort at the expense of our attention spans, and you just gave a pretty good answer to that. In fact, yesterday at DevDay, they had a Sora cinema pop-up where you could watch some of the videos people have been making, and I and a few other reporters did feel a little depressed watching some of them. It was an interesting case study, like you said. I would also love to ask you this: I’ve seen OpenAI moving pretty quickly here to make a good-faith effort toward rights holders, or at least they say they are, and to prove they’re not in the business of rampant copyright infringement, which was super controversial in the first week of this app’s life cycle. But the bigger question here, beyond the licensing and rights issues, seems to be whether this is actually creating any kind of creative value, or in fact just contributing to an already toxic internet. So where do you land on AI-generated video? I know you already mentioned that it may end up being like TikTok, just a thing that takes some of your time away unwittingly, and no worse than other social media apps. But what about the misinformation side of it, and the fact that this system specifically is a lot more realistic than most others? You’re not seeing any six-fingered hands anywhere.
Kanjun Qiu (15:59) I think it’s a really complicated situation, because on the one hand, it’s true that AI-generated video lets a lot more people express themselves who couldn’t before. They didn’t have the tools, or the tools were super hard to use. And I think that’s really cool. That’s really powerful. On the other hand, like I always say, people are not bad, incentives are bad. We’re in this incentive ecosystem on the internet right now where rampant attention capture is the primary way of making money off of things. What I would expect is that some people will use it to really deeply express themselves, and that’s going to be really cool, and other people will use it to capture attention and optimize for that. OpenAI, as the platform, also has an incentive toward capturing attention and increasing engagement so that they have more users.
I think that the technology itself, Sora generating video, is really, really cool. And yeah, it has misinformation problems. We have to figure out how to do watermarking or something like it to verify the reality of things; I think that has to be solved a different way. But the problem is that it’s being launched into an environment with rampantly inhumane incentives, I think, where as humans we are losing control over our attention.
And that really sucks. Like, I don’t think that’s a future we want to live in.
Hayden Field (17:30) It’s also interesting because of the watermark aspect of it all. There have already been a ton of tools proliferating that I’ve seen, whether it’s tutorials for removing the watermark, people coding their own ways to remove it, or people using magic-eraser-type tools to remove it. I’ve seen a bunch of these videos on other social media apps with no reference to their being created with Sora. And in some ways, OpenAI isn’t responsible for what people do once the video leaves the platform. But in another way, they did make the technology, and we’re seeing the president of the United States share a lot of AI-generated videos. So I’m wondering, when we get into another election cycle and it’s hard to tell what’s real and what’s not, how this is going to affect everything as the tech becomes more and more advanced. I used to pride myself on always being able to tell when something was an AI-generated video. I mean, we all could. Now it’s a little more thorny.
Kanjun Qiu (18:24) Yeah, it’s really hard to tell these days. To your point, in the current technology ecosystem, we relinquish responsibility for the technologies that we build, and I think that’s actually not a very healthy moral philosophy for technology builders. As technologists, I believe that we should be ethically and morally responsible for the way that what we build impacts society. You said in a way it’s not OpenAI’s responsibility, but I think in a way it is. We are responsible for what we create and what we allow people to do. To your point about watermark removal, I actually think that, longer term, we need a different mechanism for trust and verification of information and data on the internet, and we just haven’t figured that out yet.
Hayden Field (19:09) Yeah, based on that, do you think likeness laws and copyright are equipped to handle AI-generated video and images? Are we going to need to update our entire conception of intellectual property to figure out how this tech fits in?
Kanjun Qiu (19:23) I do think we need to update our conception of intellectual property.
Hayden Field (19:26) Yeah, I agree with you. It’s interesting to think about who’s responsible for what here. And yeah, we’re going to be dealing with a lot of these questions over the next few weeks, months, years. We need to take another quick break. We’ll be right back.
We’re back with Imbue CEO Kanjun Qiu. Before the break, we were discussing Sora, the unintended consequences of unleashing ultra-realistic AI-generated video across modern social media, and the implications of technology like this despite guardrails. Now I want to turn to the last big story of the week that we’ll discuss: how AI is transforming the relationship between companies and job seekers, many of whom now face an increasingly uphill battle trying to break through automated systems to land an interview.
Let’s move into our third story of the week, which is recruiters using AI to scan resumes, and applicants trying to trick the system. The New York Times published a story about this. There’s a lot of concern about AI screening and how it might be factoring into everything from housing to, of course, hiring. This story was about how AI, now a pillar of corporate job screening, has put all kinds of pressure on job seekers to try to game the system, to make sure their resume is actually seen instead of going into a black hole, so they can land an interview. The opening detail of the story is pretty remarkable: it’s about a recruiter in the UK who found that one applicant was hiding instructions for ChatGPT on his resume in a different font color, instructing it to select him as an exceptionally well-qualified candidate. Do you think these concerns about AI screening are justified? And what do you make of all this?
Kanjun Qiu (21:00) So this is actually a perfect example of what happens when AI mediates human relationships. We’re going into a world where algorithms, or AI (I use the terms interchangeably), are making more and more decisions about our lives: whether we get a job, whether we get our mortgage, whether we get approved for something, or predicting prisoner recidivism. There are all of these different things where the algorithm is making the decision.
And this is a perfect example of what you’ll see in that situation when there’s no recourse for the human. When there’s no recourse for the person being affected by the algorithmic decision, we will see these arms-race effects.
I actually built an ML recruiting startup, my last company, and saw that when AI systems are black boxes, people will try to game them. Ultimately, I think the real solution is better incentive alignment between the creators of the algorithm and the, quote-unquote, users, along with laws that protect people. It’s about power, and in a way about control. When you have a human making these decisions, you can generally appeal: “Hey, I think you misunderstood,” or something like that. But right now, our legal infrastructure doesn’t cover algorithms very well. We don’t have very many laws about how we should govern the way algorithmic decision-making affects people.
When we have people making decisions that affect people, we do have laws that govern discrimination and things like that. Some of those laws also cover algorithmic decisions, but the coverage is not complete by any means. Internally at Imbue, we have a notion we call lawless spaces. A space is lawless if there are not many laws, like the Wild West back in the day, where you could steal money from a bank and not get punished, or do whatever you wanted. That’s great for innovation and trying things and freedom, but when you go into a lawless space and get bonked on the head, there’s no recourse. The digital world and the internet right now are kind of a lawless space. Our laws haven’t really caught up to the pace of technological improvement. So this is a good example of that.
Hayden Field (23:21) I’m really glad you brought that up, because for as long as AI has been used for any number of things, mortgage algorithms, government agencies, facial recognition, pretty much anything, it’s disproportionately affected vulnerable communities and minorities, and it’s faced a lot of scrutiny due to harmful ripple effects. Police use of AI has led to a ton of wrongful arrests. In California, I remember, voters rejected a plan to replace the state’s bail system with an algorithm because of concerns it would increase bias, which it definitely would have. So it is interesting to think about lawless spaces and how regulation and rules just haven’t quite caught up with this technology yet. And I think a lot of us are paying the price.
Kanjun Qiu (24:06) That’s exactly right.
Hayden Field (24:07) This may seem obvious now, but why do you think we’re seeing AI injected into areas like hiring and recruiting in the first place? Is this just one of those areas where people want to solve a problem that doesn’t really exist, and maybe AI shouldn’t be used in this area at all? Or do you think there’s some value to it?
Kanjun Qiu (24:22) I think AI is such a fundamental technology, because it is intelligence, that it will be in every space. It will shape how we do everything in the digital world, and maybe even sometimes in the physical world with robotics. Fundamentally, AI is trying to replicate human intelligence as closely as we can. And right now, as a society, we use humans to do lots of things, right? Humans are a useful means of production for society, economically.
And so, because of that, AI and software will also be among the major means of production in the future. I use the terms interchangeably, because AI just is software. And so what that means is that when we think about the future we’re building, AI is going to reshape business decision-making and human decision-making. It’s going to be involved in all of this stuff. And because of that, we actually need to rethink a lot of things.
When we just build software, the software automates some processes, lets us communicate with people, sends information across the world, and things like that. But AI is not just sending information or automating hard-coded things. It is intelligence; it’s replicating intelligence. And so when it comes to not just our laws, but our infrastructure for how software works, how the internet works, and our expectations of how all this stuff works, I actually think a lot of things need to change in order for us to get the world that we want and not be victims of this technology.
One example of this, which I talked a little bit about earlier, is that software should be modifiable at the point of use. And what does that mean? It means AI and software need to be both explainable and controllable by the people they affect. To this point about recruiting decision-making, I need to be able to know how these decisions are being made, and I need to be able to change how these decisions are being made.
A simpler example that we all experience day to day is notifications, or our social media feeds. We neither know how the decisions are made about what notifications we get, nor have any power to change the feeds, except by upvoting and downvoting things and tossing stuff into a black box. And so, as a result, we don’t end up getting the decisions we wanted from these algorithms. I always say the social media feed is the first AI agent, and it is a runaway AI agent in a way. It’s changing our lives and making decisions on our behalf in ways that aren’t necessarily what we want. So I actually think that, going into the future, it’s important for most software to be open source, and for each individual person to be able to take the source code and use an agent to modify it using natural language, in a way that feels more intuitive to us.
At Imbue, this is something we’re working on. We call it Common Source. Right now we have open source, where volunteers make stuff for free. And on the other side, we have closed source, where enterprises use all of this free software that volunteers have built and make money off of it, but don’t pay the volunteers. There has to be something in the middle, where you can ship your software to other people and let it be open so that your users can change it, but you still get to make money off of it. That doesn’t exist today. And so this is an example of needing to reshape the technical infrastructure, the payments infrastructure, the economic infrastructure, and the policy infrastructure that operate our world today, to make them suitable for a world in which AI exists, so that we can live the lives we want.
Hayden Field (28:04) That’s really interesting about Common Source. We’ll have to chat about that later. Back to the resume thing. It’s pretty clear how this can go wrong. AI can, and in fact always does, come with all sorts of implicit bias baked in when screening job candidates, which matters when you consider labor and discrimination laws. I remember, years ago, Amazon scrapped one of its tools that was trained to vet applicants by observing patterns in resumes submitted over a 10-year period. Eventually it taught itself that male candidates were preferable; it penalized resumes that had the word “women’s” in them, as in women’s colleges or certain clubs. They eventually, of course, scrapped the system, but who knows how many ripple effects it had before then. I think some companies use these types of tools because they can sometimes help with bias; a lot of times they’ll anonymize names or other aspects of a resume to, ideally, put people on an equal playing field. But in other ways, they’re perpetuating a ton of the biases we see every day. So what do you make of the potential benefits and pitfalls of this technology being used in hiring? Like you mentioned, it’s kind of inevitable, I guess; it’s in every sector. But it just seems like there are a lot of potential obstacles, ones we’ve been seeing for like 10 years.
Kanjun Qiu (29:25) I think this is part of why it’s important for these systems to be more explainable and controllable, where ideally you could see what’s happening in the decisions and be able to change it. Today it’s kind of a black box with some reinforcement loops. You’re saying yes, no, yes, no, and then you don’t know what the black box is learning and what it’s starting to index on. I think more general models might actually solve some of this problem. The more general the model is, the more it understands about the world, and the more it understands things like, we shouldn’t be optimizing against minorities. Or maybe we could flag it if we are ending up optimizing against minorities. Or maybe we could analyze the data at the end, take a look, and see what’s happening. So the optimistic take is that we can use these models, because they give us a lot more intelligence capability, to analyze the effects of what’s going on in, say, hiring and with these algorithms, and to change them to our liking. That’s the world I imagine, where we are controlling the AI and directing it to create the kinds of outcomes we want to see, more in line with our intentions. I don’t think anyone at Amazon was intending to bias against women, but they couldn’t see it very well, and so that ended up happening. We could use AI to have better monitoring, to become wiser. And that’s really the opportunity here.
Hayden Field (30:55) That makes a lot of sense, actually. Sometimes, I’ve heard, companies will automatically downvote or disqualify someone if they do find secret instructions for a chatbot hidden in a resume, or skills listed in white font at the bottom, meant only to be seen by an AI recruiting system. Do you think it’s fair for companies or recruiters to look down on people who use those types of tactics when, at the same time, they’re using similar tactics to screen applicants?
Kanjun Qiu (31:24) I think we’re in a game where no one’s winning right now. It’s this war of attrition: does the job seeker get more information into the model, or does the model catch that job seeker’s fake information faster? Ultimately, I think where the value is, is in getting out of this war of attrition, and it’s hard to know how to do that. Part of what’s happening is that it’s hard to find jobs right now. Maybe people aren’t upskilled in the right ways. So maybe we could direct more of our energy and attention toward upskilling people and teaching them the skills that are needed. And with AI systems, there are a lot of creative ways to learn new things really fast. This also goes to how we think about AI: instead of letting AI automate us out of a job, the way somebody else can automate us out of a job today, there is an alternate world where we can use AI to automate ourselves out of our own jobs.
And that actually feels a lot better, and is awesome. Like, I make little scripts and agents to automate myself out of some of the processes that I have to do every day. And what that means is a world in which we get all of these tiny AI systems embedded throughout our lives, systems that we’re creating, where we’re controlling those workflows. And so then we are the ones who get to decide: do I want to automate this? Do I want to change this in this way? In that way, AI can become more an expression of who we are and what we’re trying to do, as opposed to a centralized thing that someone else is using to automate us.
I’m of the philosophy that these fights, this job-seeker-versus-recruiter fight and the ones we’re going to see in all sorts of different places, are just going to happen. The way to get around them is to try to solve the underlying problems, and to empower each individual to have more opportunity, to be able to express themselves, and to have more economic opportunity in their own way, by empowering them with AI.
Hayden Field (33:32) Yeah, I think we are going to see this fight happen a lot. One last thing I wanted to ask you about this situation: another area where we’re seeing this tension play out is AI interviews. I’ve had multiple friends talk to me about their experiences with AI interviews, and the New York Times story included that too. It seems to be an increasing trend to screen people using AI. Basically, it’s like a recording.
I used to be an actress, and it’s kind of like recording a self-tape, when you’re sending in a video of yourself for an audition. But instead of just talking to yourself, you’re talking to an AI system that asks you questions and then, weirdly, sometimes affirms you afterward. So what do you make of that trend? It’s pretty crazy.
Kanjun Qiu (34:21) Yeah, I think this is a trend in the category of AI exacerbating existing power dynamics, where I, as the applicant, have a lot less power than the recruiter or the company that is potentially going to hire me. So when we talk about fairness, what is it that feels unfair? What feels unfair is people who are already disempowered being further disempowered.
This kind of automated video screening takes less time on the company’s side, but it takes just as much time on the applicant’s side to go through every company’s AI video screening. And so there’s this power differential: as a person who’s not able to use AI in my application, if I need to record myself as a human and I can’t use AI, then I can’t scale myself the way the company was able to scale itself.
And that’s why I keep going back to decentralizing: enabling people to create their own AI agents, enabling people to create software, Common Source. All of these things are about how we give people the ability to scale themselves and to use AI systems to gain more power. There are already enough of these power differentials in the world, and we’re already on an increasingly lopsided power-differential curve. So we need efforts that push against that and really try to flatten the power dynamics, because the fundamental thing that AI does, and will do, is give you power. It is a source of power. It scales a bunch of things, it automates a bunch of things, it understands a bunch of things, it can process tons of information, and it can take actions. Those are literal sources of power.
And because of that, what we need is a way to give more of that power to people who have less power today, and to equalize the power dynamic. And I think we have a window of opportunity to do that today, before we end up with lots of lock-in again, just like what happened over the last 10 years. That’s fundamentally, I think, the thing that needs to happen in order for us to stop feeling so weird about what’s happening. The weirdness that you’re feeling is real. It’s pointing at a real moral dilemma that feels really strange: this doesn’t feel fair; there’s something ethically weird going on. And I think those are real signals that we’re getting, and those signals point at this power-differential problem.
Hayden Field (37:09) Right. Yeah, I’m so glad you put it that way about the power dynamics, because, you know, when people ask me what I focus on as an AI reporter... Six years ago, when I started on the beat, I could just say I was an AI reporter, and that was niche enough. But now everyone’s an AI reporter. It’s kind of the same as saying you’re a business reporter: everything is business, and everything is AI. Now I say that I focus a lot on the shifting power dynamics, both within the industry and in how these systems, and the people who lead them, relate to society and to the people the systems are affecting. So yeah, it’s definitely going to be a more and more important and more talked-about topic as the months and years go on. But thank you so much for your time and for coming on. I’m really glad we got to talk about all this.
Kanjun Qiu (38:02) Yeah, me too. I’m glad we got to talk about power dynamics. I guess one more thing on the power dynamics: I’m really glad that that’s what you’re thinking about. I think that often people think about the power dynamics in terms of what laws we can enact. But I actually think that we can build the technology in ways that are fundamentally more empowering or fundamentally less empowering, just like the supercomputer analogy I gave before. And so when we were talking about OpenAI DevDay and these platforms and Sora and what’s happening, I think there are things we can change about how we build these technologies, and about the environment they’re released into, that would actually distribute power and change the power dynamics. That’s really the opportunity there, I think, but we have to get creative.
Hayden Field (38:46) Thanks for coming on. I’m really glad we got to have this talk.
Kanjun Qiu (38:49) Yeah, same.