ALEX BLANIA: ‘IN THE NEXT YEARS, THOUSANDS OF HUMANS WILL FALL DEEPLY IN LOVE WITH AN AI’
The CEO of Tools for Humanity, the company behind Worldcoin, on technology and ethics, Artificial Intelligence and the use of private data.
On the cusp of the age of Artificial Intelligence, Alex Blania is leading a controversial project aimed at verifying humanness in a world that is increasingly digital.
The tech entrepreneur is the CEO of Tools for Humanity, the company behind Worldcoin, a cryptocurrency project that pays humans for a scan of their iris in order to create a digital passport of sorts.
The company, co-founded by ChatGPT-maker OpenAI's chief Sam Altman and Max Novendstern, promises a digital utopia but has faced accusations that it is taking advantage of citizens of low-income countries, who hand over precious biometric data by allowing their eyes to be scanned by Worldcoin's futuristic orbs. And while the project is under scrutiny in several countries – including Argentina, where at least 500,000 eyeballs have been scanned – Blania defends it as a noble attempt at solving the complex problem of digital identity as the Internet becomes overrun by AI-fuelled programmes and bots.
Blania hopes the Worldcoin protocol will be used to create a better digital ecosystem where cryptographic verification could be one of the few ways to build trust online. He’s also optimistic that AI will generate a productivity boom that will allow humans to work less, in part justifying the need for a universal basic income. Yet serious questions remain about the handing over of one’s unique personal information to a group of San Francisco-based techies.
There have been ethical issues raised regarding the use of facial recognition, and now you are using it to develop World ID. Why do you think that is?
Well, I do think everything that has the potential to limit, either privacy or freedom, should be considered very carefully, and very critically. So what we really focused on when we started working on [Worldcoin] was how can we create something that actually gives all the power to individuals using a lot of cryptography around it in a way that you actually don’t need to rely on a third party. You, as an individual, can decide what you want to share, what you don’t want to share and how you want to use it.
One of the things that makes this explicit is that your face is being scanned and used, as opposed to typing in a password or handing your ID, and even your biometric data, to these major companies, as we've been doing for years. Why do you think you get so much pushback?
What do you think?
I think that it's more explicit when your orbs scan someone's eyes, as opposed to just scanning your face on your iPhone.
I believe the Internet itself will actually change quite meaningfully, and this notion of verifying who is a human on the Internet will turn out to be critical infrastructure ... I think it will be very important to protect our democracy and the Internet. On social networks we interact with each other, we share opinions, as we do on digital media, and I think they rely on the notion that who we interact with is an actual person.
I do believe – and I've thought about this for a long time – that the only way to solve it will be some form of biometrics.
When we started working on this, we thought about how it should actually be and came to the conclusion that it should be an open protocol that is verifiable and not controlled by any of the big tech companies or governments. Governments sometimes don't have the competence to do it, but it's also something that should be the right of the citizens of any government; it should be in their control. So we tried to design an open protocol that can actually solve the problem on a global scale, giving users their privacy and their control.
I do think we have to explain it much better than we have in the past. The fact that it's a kind of audit, with all the cryptography we use, all those things we use to actually keep you anonymous and private – I think these things are not easy to understand. And it's not something that you usually do. Also, the fact that [the iris scanner is] a chrome orb certainly did not help make it any less creepy – but it looks pretty cool!
What generated some of the concern here was the payment. You guys have been talking about “universal income,” and maybe you can associate those two, because offering US$100 in a developing country with widespread poverty generates suspicion.
We believe that AI will change many things about both the economy and society. One of the biggest challenges of our time will be how to make all of this progress available to the whole world. How is this not something that just occurs in San Francisco, where a couple of big tech companies just get even bigger? How does it actually lift up the world? This was one of the big founding reasons for the project. It might be that in the coming decades we need something like a universal basic income. I know it's politically controversial, but I think things will turn out to be quite different than they are today.
We should try new things such as giving everyone some universal access to computers, which will be very important because it may become as critical as water given it will become the way you actually generate a living. This concept of economically giving everyone minimum firepower, access, and control will turn out to be critical ...
Conceptually speaking, we are actually not seeking anything in return. There is no business model where we sell data, all of those things are not true. Rather, it’s just a new digital currency that is created by giving ownership to every human being. There is no trick. There’s nothing hidden. When you sign up, you just get part of that currency.
There’s little oversight, especially from countries like Argentina. How does your governance structure make you different?
Even medium-term, something like Worldcoin will not work if there's a single company behind it. You're probably very familiar with Ethereum [the blockchain protocol behind the world's second-largest cryptocurrency, ETH]. Vitalik [Buterin] and a group of people actually started it, but now no one really controls it. It's a public infrastructure where many people build companies on top of it, people issue stablecoins on top of it. It created a whole new wave of innovation.
Worldcoin is the same. At this point, it’s a fairly small project, but if we talk again in a year, it will be an open protocol no matter what I say or do with the project.
Who has the control over the biometric data ultimately?
When you sign up, what happens first is that the device checks that you are actually a person. So that you can't defraud it, there are sensors in front of it to make sure you are not just a display or a print-out or anything like that. It takes a picture of your face and your eye, which is actually something very common that occurs in many airports around the world. Many governments of the world do it as well; it's actually not a new technology. It then generates an iris code, which is essentially just an embedding of the information in your eyes, similar to what the iPhone does when you use Face ID. The iris code then gets split into multiple pieces and sent to multiple servers that all need to work together to actually compare the uniqueness of the code.
That's the first important piece: there's no central server. And then second, which I think is the even more important process, is that once the uniqueness check has succeeded, as a person you can generate what is called a zero-knowledge proof that confirms you were the person verified before with the orb. This effectively gives you full anonymity.
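The setup Blania describes – an iris code split into pieces so that no single server holds it, and all of them must cooperate to run the uniqueness check – is in spirit a secret-sharing scheme. The sketch below illustrates that general idea with simple XOR-based sharing in Python; it is an illustration of the concept only, not Worldcoin's actual protocol, and the function names are invented for this example.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split(code: bytes, n: int = 3) -> list[bytes]:
    """Split `code` into n XOR shares; any n-1 of them look like random noise."""
    shares = [secrets.token_bytes(len(code)) for _ in range(n - 1)]
    # The final share is chosen so that XOR-ing all n shares recovers the code.
    shares.append(reduce(xor_bytes, shares, code))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    """Only the cooperation of ALL share-holders recovers the original code."""
    return reduce(xor_bytes, shares)
```

Each server stores one share; individually, a share carries no information about the iris code, which is what makes the "no central server" claim meaningful in such a design.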
Even if what I'm saying is not true, or things go wrong or break, or we make a mistake, the only thing that would be absolutely true is that your World ID would be disconnected from your account.
I think the combination of decentralised computers, zero-knowledge proofs and, ultimately, a decentralised system is the most private we can get while solving a very important problem.
Getting a little futuristic, there are two main visions as to where AI is taking us. One option is the dystopian future, and the other is the increase in productivity. Where do you think it’s taking us?
I believe the safety issue will be solved. I'm not an expert – I was four years ago, but many things have changed in that time. I think the AI issues will be broadly solved, even if now it seems like a very scary question. There is a lot of progress occurring in the field, a lot of good things going in good directions coming out … fundamentally it will be a technology of empowerment.
Is it possible for AI to have an identity? I don’t want to use the term consciousness, but will they become “beings”?
Yes. However, we should be very careful to keep a distinction, rather than accepting that these things might be considered persons – even just simply for the reason that humans will fall in love with these things, right? They already have these dating AIs and things like that. In the next few years you will have thousands of people fall deeply in love with an AI, and they will just not accept the idea that these systems should not have personhood. But they're still not humans, and I think that distinction is very important, so we need to build a completely new framework for how to think about machines and these issues. It will be somewhat complicated.