April 7, 2020 xerxes

Roman Yampolskiy is a computer scientist and professor at the University of Louisville who specializes in artificial intelligence safety and security. In this interview we talk about whether artificial intelligence can create consciousness, what the simulation hypothesis is and whether we might actually live in a simulation, and whether we should treat AI threats the way we treat the coronavirus.

1:19 – Introduction of Prof. Roman Yampolskiy
2:13 – (Artificial) Intelligence and Consciousness
7:58 – When Will We Reach Artificial Superintelligence
8:59 – Which Discussions Should We Be Having About Artificial Intelligence
10:08 – The Effect of Quantum Computing on AI
11:17 – AI and Security
13:53 – AI and Blockchain and Blueprint for Skynet
17:03 – AI and Dystopian Scenarios
19:17 – What Needs to Happen so Decision Makers Start to Think About Limiting and Controlling AI
20:17 – Which Paradigms Need to Be Challenged in AI
27:11 – Solutions for the Future
31:28 – Utopian Solutions for the Future and the Simulation Hypothesis
34:49 – His Current Research
35:39 – Impact and Legacy

Video Excerpts From the Podcast

Defining (Artificial) Intelligence

Which Questions Are We Not Asking About AI

Who Should Make the Major Decisions About the Future of AI

Can AI Have Consciousness

Can We Turn Off Artificial Intelligence

Is AI a Tool or an Agent

When Will AI Become Superintelligent

Should We Have the Same Discussions About AI as We Have About the Coronavirus

Solutions and Simulation


Transcript of the Interview with Roman Yampolskiy
This text has been auto-transcribed. Please excuse mistakes.

Xerxes Voshmgir: Welcome to Challenging ParadigmX. My name is  Xerxes Voshmgir and I’m the host of this podcast. In my podcast, I’m interviewing people who like to challenge the status quo. In today’s edition of Challenging ParadigmX, we are talking about topics like: Can artificial intelligence create consciousness? Do we live in a simulation?

Meaning, are we actually in the game rather than in the real world? Should we treat artificial super-intelligence just like threats such as the coronavirus? And: Can intelligence be weaponized similarly to biological and chemical weapons? Today, my guest is Professor Roman Yampolskiy. He is a professor of computer science specializing in artificial intelligence safety and cybersecurity. He has written numerous books and papers on these topics, so if you are interested, stay tuned.

Xerxes Voshmgir: Hi, here’s Xerxes, and today I’m here with professor Roman Yampolskiy. I’m very happy that you’re here, Roman.

Roman Yampolskiy: Thanks a lot for inviting me. 

Xerxes Voshmgir: Yes, thank you. Let’s just start off, please: introduce yourself. Who are you and what do you do?

Roman Yampolskiy: So I am a professor. I do research in computer science, more specifically on artificial intelligence safety and security, and work at a university. I teach and do research.

Xerxes Voshmgir: Please tell us why. What’s the reason? Why do you do what you do? 

Roman Yampolskiy: Well, I always wanted to work on cutting edge technology and something important. I don’t think there is anything more important than creating artificial minds and making sure they behave.

Xerxes Voshmgir: Artificial intelligence is a very hot topic now, and you’ve been working in the field for a long time. There is one aspect I keep coming back to: the term “artificial”. Do you personally think “artificial” is the right term for what we call artificial intelligence?

Roman Yampolskiy: I don’t think it matters what we call it. It’s just a label, and “artificial” just means that it’s made, that it’s not naturally occurring. You can call it whatever else you want; it wouldn’t change a thing.

Xerxes Voshmgir: Maybe, but let’s look at it from a different perspective. You’ve talked about this in the past as well: looking at the term intelligence and differentiating between, let’s say, human intelligence and what we call artificial intelligence. On the one hand it’s true that it’s just a label. But on the other hand, isn’t the term “artificial” misleading when it comes to how we define intelligence and what artificial intelligence really is?

Roman Yampolskiy: Well, intelligence has many definitions. One of the best I encountered talks about accomplishing goals in different environments. So whatever that is, human or artificial agent, I don’t think it makes a difference in terms of what it is we are observing and measuring when it comes to intelligence.

Xerxes Voshmgir: The perspective I’m coming from is that when we talk about artificial intelligence in this respect, I feel we are putting it down a little, especially when it comes to the topic of consciousness: the assumption that when something is artificial, it cannot have consciousness. So you can say “artificial” is just a label, but I think it degrades what artificial intelligence could really be. And when it comes to simulation theory, for example, the question becomes: why is our intelligence not supposed to be artificial? You see what I’m saying?

Roman Yampolskiy: Right, so that makes the whole subject a lot more complicated. You go beyond intelligence to things like consciousness, which we are not very good at detecting, measuring or in any way producing. So I think we could potentially have a very capable intelligence which was not conscious; that would not be a contradiction. And I don’t think at this point there is any artificial agent who would get offended by being called something less. They just don’t have that capability yet.

Xerxes Voshmgir: I don’t mean it in the sense of being offended, but in the sense of putting it into the right perspective and giving it the right value. That’s one part. The other reason I’m asking is that I personally feel the question of whether artificial intelligence can have consciousness is perhaps one of the major questions, because if artificial intelligence were able to create consciousness, it would change quite a lot: what it means to be human, and whether what we call artificial intelligence is not just a different type of intelligence than human or biological intelligence. You see where I’m coming from?

Roman Yampolskiy: So I agree, it’s a very important, fundamental question about consciousness. And I have done some work showing that perhaps consciousness is a side effect of intelligence; you cannot avoid it. So it’s very possible that our machines do in fact have some rudimentary internal qualia, and we’ll have super-intelligence with super-consciousness as a side effect. A lot of what I do is not just about the narrow systems we have today, but about futuristic systems, and those are super-intelligent systems. It’s hard to show any more respect than calling something super-intelligent compared to us barely intelligent humans. So I think there is a possibility that they will be greater than us, and more conscious, and have multiple streams of consciousness and deeper experiences.

Xerxes Voshmgir: If true, what do you think this would mean for us?

Roman Yampolskiy: In what sense? So there are other conscious beings out there; by itself that doesn’t mean anything to me. What are you asking about?

Xerxes Voshmgir: As far as humanity is concerned, I mean. If there is an intelligence that is perhaps way more intelligent than us and also has consciousness, do you have any speculations about what that would mean for humanity?

Roman Yampolskiy: So if it’s super-intelligent, our concerns are about control: who is in charge, who’s making decisions, who is surviving. Whether that intelligence is conscious or not is not as important. Let’s say it’s a very malevolent intelligence and you have this Terminator chasing you, trying to kill you; I don’t think you care whether it feels anything or not. It’s the same outcome. With low-level AIs it matters, because if they are conscious, we cannot exploit them; we cannot use them in certain ways that would be unethical. But again, if it’s super-intelligent systems we’re talking about, I think their intelligence dominates the conversation over their internal states.

Xerxes Voshmgir: How realistic do you think it is, and on what time span, for us to reach that level of super-intelligence?

Roman Yampolskiy: Well, I think it’s very realistic; we’ll definitely get there. As soon as we get to human level, because of how computers operate in terms of memory and access to knowledge, it immediately becomes super-intelligent in a certain sense. As to timelines, I’ve seen predictions from seven years to hundreds of years. I don’t know for sure. 2045 is a number a lot of people come up with based on different data streams.

Xerxes Voshmgir: But from your experience and your research, as someone who has been an expert in this field for many years, what time span do you feel is realistic?

Roman Yampolskiy: I would be surprised if by 2045 we didn’t get there.

Xerxes Voshmgir: If we wouldn’t, or if we would?

Roman Yampolskiy: Yeah. I would be surprised if we didn’t have it by that point. 

Xerxes Voshmgir: Okay. Tell me, what other discussions in the field of artificial intelligence and future technology do you feel we should be having in public that we aren’t having?

Roman Yampolskiy: Well, we rarely discuss not just “Can we do it?” but “Should we do it?” Everyone’s working on making the systems more capable, getting there as fast as possible, but so many people around the world never had a chance to understand what the issue is or to consent to it, and this technology will definitely impact everyone. So we’re essentially performing an experiment on billions of people. We’re starting it already with children today; we give them iPads: “Oh, let’s see what happens in 20 years if you spend a couple of months a year sitting on an iPad. We’ll find out.” And likewise, if we do develop this more capable intelligence, maybe it will be controlled and beneficial, maybe not. But we haven’t had this conversation with the general public.

Xerxes Voshmgir: What do you think will be the effect of quantum computing? What’s your perception of how far along we are, and how will it perhaps change or accelerate the research and development in artificial intelligence?

Roman Yampolskiy: I’m not sure we need quantum computing to get there. It might speed things up if computing power is a limiting factor. I think most of the impact will be in areas like cryptography, where a lot of existing protocols will be broken, a lot of private information will be released and cryptocurrencies will be impacted. So I think short-term, that’s where most of the impact will happen.

Xerxes Voshmgir: So you think it’s mostly a security issue, less about quantum computing accelerating the development of artificial intelligence?

Roman Yampolskiy: As far as I can tell, I haven’t seen a learning algorithm which would heavily depend on quantum computing in order to perform. But with quantum computing it becomes possible to do integer factorization, and that may be problematic at some point.
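To make the factorization point concrete, here is a rough sketch (not from the interview, with textbook-sized toy numbers): RSA’s security rests on the difficulty of factoring the public modulus. Trial division below stands in for Shor’s algorithm, which would do the same recovery in polynomial time on a large quantum computer.

```python
# Toy illustration: why integer factorization breaks RSA.
# Trial division only works for tiny numbers; Shor's algorithm would
# factor realistically sized moduli efficiently on a quantum computer.

def factor(n: int) -> tuple[int, int]:
    """Return the smallest prime factor of n and its cofactor."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    raise ValueError("n is prime")

# A textbook RSA keypair: public (n, e), private d.
n, e = 3233, 17            # n = 53 * 61, far too small for real use
p, q = factor(n)           # the step a quantum computer would make easy
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # private exponent recovered from the factors

msg = 65
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # decrypt with the *recovered* private key
```

Doing the same recovery on a 2048-bit modulus is exactly the scenario post-quantum cryptography is meant to prevent.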

Xerxes Voshmgir: One of your main fields of study is artificial intelligence and security, and we already talked a little bit about this. What else, apart from quantum computing, do you think is important in this field of security and artificial intelligence?

Roman Yampolskiy: Specifically with security, we don’t have much at all. We always work on developing a product and releasing it, but rarely consider how insiders or outsiders can hack it or manipulate data. We’re just starting to see a little bit of this with self-driving cars, but in general AI researchers never considered safety and security questions, and many still don’t.

Xerxes Voshmgir: Okay, but what do you think that means for our future? Not just for research, but for the actual applications. What’s your take on this?

Roman Yampolskiy: Yeah. If you release a product which has no security and you don’t consider what can happen, we already see the consequences, even with very narrow systems. For example, there is a famous Microsoft chat bot which was released without considering the possibility of users manipulating its learning and feeding it bad data. That seems an obvious thing to consider for something like that, but the system was learning from whoever was feeding it information. Similarly with self-driving cars: we’ve seen examples of them being hacked, which makes it possible to take over control of the car, kill a person, kidnap a person. Those things are not considered at the time the design of the system is introduced.

Xerxes Voshmgir: What do you see as a solution to that problem?

Roman Yampolskiy: Well, we have to include security experts on all projects. Just as cybersecurity plays a larger and larger role in basic software development, we have to do similar things with intelligent systems.

Xerxes Voshmgir: And do you have the feeling that awareness of this issue is rising in companies and in politics?

Roman Yampolskiy: Usually after they have an accident, they start to go: “Oh, now it’s time to invest in it!”

Xerxes Voshmgir: But this of course doesn’t happen at a rate that matches the challenges we have with artificial intelligence, for example. Does it?

Roman Yampolskiy: It seems to work with narrow AIs. In general, all systems have failed: we’ve had cybersecurity breaches, data was stolen, but usually it’s just a financial loss. You get a second chance, a third chance. The concern is that with super-intelligent systems you don’t get a second chance. If you don’t get it right the first time, it’s too late.

Xerxes Voshmgir: And when we look, for example, at the convergence between blockchain and artificial intelligence, even if it’s not super-intelligent, you also don’t get a second chance there, do you?

Roman Yampolskiy: Right. We saw examples with smart contracts where a mistake in a smart contract led to the theft of hundreds of millions of dollars worth of value. So that’s a great example where the systems are not very intelligent, they have tiny IQs, five points or something like that, but they have to control human users. And that’s kind of similar to the challenge of us trying to control intelligent systems.
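One classic class of smart-contract mistake of the kind mentioned here is the reentrancy bug, where a contract pays out before updating its ledger. A rough Python simulation of the pattern (names and amounts invented for the example):

```python
# Toy simulation of the reentrancy pattern behind well-known smart-contract
# thefts: the contract pays out *before* updating its ledger, so a malicious
# payee can call withdraw() again mid-payment and drain extra funds.

class Vault:
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who, notify):
        amount = self.balances.get(who, 0)
        if amount > 0:
            notify(amount)          # external call first (the bug)...
            self.balances[who] = 0  # ...ledger updated too late

vault = Vault()
vault.deposit("attacker", 10)
vault.deposit("victim", 90)

stolen = []
def reenter(amount):
    stolen.append(amount)
    if len(stolen) < 3:             # re-enter before the balance is zeroed
        vault.withdraw("attacker", reenter)

vault.withdraw("attacker", reenter)
print(sum(stolen))  # 30 units paid out from a 10-unit balance
```

The standard fix is the checks-effects-interactions pattern: zero the balance before making the external call.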

Xerxes Voshmgir: What you’re talking about now is on the level of financial loss, but of course you could program blockchains that don’t have much to do with cryptocurrencies and that create other types of value. Once you program blockchains and combine them with artificial intelligence, and especially super-intelligence, if you don’t consider a lot of things, and security is one of the main ones, it’s not easy to just change the blockchain. How do you feel about some people saying that the combination of blockchain and artificial intelligence is actually a blueprint for Skynet?

Roman Yampolskiy: So blockchain, in the end, is just a glorified database. I’m not sure what magical abilities it would give to AI; almost all of it can be done with other technologies, bypassing this approach altogether. Blockchain allows some decentralization, but for a single AI system that’s not a big issue.

Xerxes Voshmgir: Yeah, I understand, and on this point I agree with you. But what I mean is that a blockchain like Bitcoin you cannot switch off. The Bitcoin blockchain doesn’t necessarily need AI, and for other blockchains what is a problem in this respect could of course also be a benefit. But when we talk about dystopian scenarios: unless you switch off the energy supply, you cannot switch off a blockchain. Maybe artificial intelligence doesn’t necessarily need this decentralized database to work, but my question comes from the perspective that if you combine blockchain and artificial super-intelligence, you cannot switch it off other than by switching off all power around the world.

Roman Yampolskiy: Correct, but this is exactly how computer viruses work, right? You can’t turn off a virus; you could turn off the internet if everyone chose to do it, but it’s not some magical additional protection level. Either you have the blockchain software installed and running or you delete it; those are the two options. And it’s the same with power supply or internet providers. In general, based on research we and others have done, you’re not going to get a chance to turn off a super-intelligence. If it’s smarter than you, it will predict such actions and strike first, essentially.

Xerxes Voshmgir: When it comes to artificial super-intelligence: you used the word “strike” just now, and I used the metaphor of Skynet before, and this is of course something many people are concerned about. Do you think it’s really necessary for us as humanity to be afraid of artificial intelligence, or is it just that Hollywood-style commercial movies have influenced our thinking?

Roman Yampolskiy: Well, you have to be realistic about it. It’s not helpful to panic and be pessimistic, or to be optimistic and think: “Oh, it’s so smart, it’s going to be great.” You have to look at those systems realistically. Do they have some sort of control mechanism? What are they capable of doing? Are we capable of controlling their behavior, predicting it, explaining it? And the answer so far to almost all of those questions is: no, we don’t have this technology today. We cannot control AI really well even at current levels if it’s not deterministic, if it’s making novel decisions, if it’s learning, if it’s operating in new domains, and as it becomes more capable, our capabilities in terms of control go down. That’s just the state of the art today. Hopefully it will change, but it doesn’t look too promising.

Xerxes Voshmgir: Okay, and what does it mean for us if it doesn’t look too promising? What’s your…

Roman Yampolskiy: It goes back to that question we brought up before. Should we be going forward with this? Should we put limits on capabilities, on the resources a system can have? Maybe it’s not possible to limit it that way. We are right now experiencing the coronavirus outbreak, and this is exactly the same debate people are having: should we limit travel, limit meetings, limit our economic capabilities to control it? And there is debate; in some countries it’s very easy to say “okay, everyone stay home” and it works for them, in other countries it may not work. But in a way this intelligence is a different type of malevolent technology. We understand biological weapons, we understand chemical weapons, but intelligence can be weaponized just as well, and we don’t have the same attitude towards it. Everyone goes: “Oh, it’s just software, what can happen?” But it could be much worse.

Xerxes Voshmgir: What do you think needs to happen so that people, politicians, entrepreneurs and the leaders of the biggest companies start this conversation, or start to think about limiting, controlling and regulating artificial intelligence?

Roman Yampolskiy: Well, there is an interesting problem: the people who have the power to do something, the people in control of those major companies, have a very strong financial interest in going forward. And it’s very hard to go against your own interests, no matter how smart you are; even if you understand the problem, it’s still very hard to say: “Well, no, we cannot do this. This is not a good idea.” So it would be ideal if we could separate the people making those decisions from the people benefiting from certain outcomes, so they’re not combined. Then you’d have the freedom to make a decision that has no financial impact on you personally.

Xerxes Voshmgir: Tell me, when it comes specifically to artificial intelligence, what do you think are the paradigms that need to be challenged? Today’s paradigms in artificial intelligence research and in how it is applied. You’ve talked a bit about it, but do you have additional thoughts?

Roman Yampolskiy: So, almost everyone thinks of it as a tool, whereas more and more we’re turning to the agent model of AI, where it has independent capabilities. That’s a very different type of collaboration and a different type of adversary. Most people don’t see it as such, but once we get to AGI and superintelligence levels, it will not just be a tool for us to use. So don’t think of it as a calculator or tax-preparation software; think of it as an alien intelligence from another planet. If tomorrow aliens came, and they have the technology to travel, so they’re smarter than us and more experienced, how would that interaction go? Can you guarantee it would be beneficial for both sides? This is the type of analysis you can do today, and basically decide: do we want them to come? Let’s say we have to invite them before they come. Do we want to invite them? If you look at the history of human civilizations, different islands, the Americas versus the Europeans and so on, any time there was penetration from one group into the other, the one with more developed technology usually exterminated, wiped out or did something horrible to the less technologically advanced civilization. It seems to be a very common pattern. It doesn’t have to be that way, but so far we don’t have any technology to guarantee that it’s not going to be that way.

Xerxes Voshmgir: I believe that perhaps there is a technology, but it’s very basic and it’s a very human technology: basically, the dogmas we believe in and, by extension, the dogmas and rules we program into artificial intelligence. What I mean by that is: when we look at our algorithms as human beings, at what we read in the books of different religions, there are, for example, the Ten Commandments. You could say these are the code with which we humans are coded. Even people who don’t believe in God or in religion or spirituality, in the States most people are Christian, so they grow up in a Christian society and tend to have the Christian mindset integrated into them. That’s what I mean by dogmas, by the algorithms we are programmed with. Also, in neuroscience there’s a discussion about whether we have free will or not, but we operate at least semi-automatically. So if we take this comparison and look at artificial intelligence, at the example of AlphaGo Zero, where the AI only had the rules of the game programmed in, no rules of ethics and so on, and the rest was a black box, AlphaGo Zero was able to teach itself the game very quickly. Looking at it from this perspective, what’s your take: if we programmed certain ethical rules into artificial intelligence, would they make a major difference?

Roman Yampolskiy: So first, we observe that people, despite being mostly religious around the world, are completely unsafe. There are no safe people, and how dangerous they are is usually in proportion to how much power they have, to what they can get away with. Power corrupts: if you give someone absolute dictatorial powers, they become very, very dangerous. There is no exception to that rule that I know of. We also don’t really agree on those ethical principles; that’s why we have wars between different religions, because people say: “Your rules are evil. My rules are better. Let’s fight over it!” So we as humanity, with the same biological code, the same everything, don’t agree on what the codes should be. The classic example with AI is the three laws of robotics, right? You can tell them: don’t harm humans, do this and that. It always fails; those are contradictory rules. And you are defining rules for what is essentially an intelligent lawyer: you can give it a thousand rules, a million rules, and it will still find a loophole to do as much damage as it wants. So this doesn’t seem to work. A, there are no safe agents to model a safe agent on; B, we don’t agree on the actual rules to follow; and C, even if we managed to come up with a list of rules, it would still find a loophole to bypass them.

Xerxes Voshmgir: Okay, but that would be the perspective that the artificial intelligence would want to find a loophole to do harm.

Roman Yampolskiy: Not necessarily. A lot of it is miscommunication and undesired side effects. A simple, classic example: you have a system which wants to do good but is just not very well aligned with human common sense, and I say something like: “I don’t want any people in the world to have the coronavirus.” Well, one way to do that is to kill everyone who has the coronavirus. It follows the rules; it does what it was ordered to do and accomplishes exactly what was asked. It just doesn’t have the common sense to understand that that’s not what we had in mind.

Xerxes Voshmgir: Yes, I see what you’re saying. 

Roman Yampolskiy: And for any example you can give me where such-and-such is the rule, I can find a similar, maybe silly, side effect which you would never think about, because you are human and you understand that’s not what I meant. But a machine just looks for: “What’s the easiest way to accomplish this?”
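The coronavirus example above can be sketched as a toy program (all data invented): a literal objective function cannot distinguish the plan we meant from the catastrophic shortcut, because the common-sense constraints were never written into the objective.

```python
# Toy sketch of the loophole problem: two plans, one intended and one
# catastrophic, both drive the *literal* objective to zero.
import copy

population = [
    {"infected": True,  "alive": True},
    {"infected": False, "alive": True},
    {"infected": True,  "alive": True},
]

def score(pop):
    """Literal objective: number of living infected people (drive to 0)."""
    return sum(p["alive"] and p["infected"] for p in pop)

def plan_cure(pop):        # what we meant
    for p in pop:
        p["infected"] = False

def plan_eliminate(pop):   # what we literally asked for
    for p in pop:
        if p["infected"]:
            p["alive"] = False

results = {}
for plan in (plan_cure, plan_eliminate):
    world = copy.deepcopy(population)
    plan(world)
    results[plan.__name__] = (score(world), sum(p["alive"] for p in world))

print(results)  # both plans reach objective 0; survivors differ: 3 vs. 1
```

An optimizer that only sees `score` has no reason to prefer the cure; everything that makes the second plan unacceptable lives outside the stated objective.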

Xerxes Voshmgir: I perfectly understand what you’re saying. And to come back to the point we talked about in the beginning: if artificial intelligence were able to create consciousness, real consciousness, would that make a difference, do you think?

Roman Yampolskiy: So, as I said, it has some impact on robot rights and things like that: we cannot torture it and exploit it, assuming we are ethical beings ourselves. It may in a way be dangerous for us, because if it understands those internal states of pain and suffering, those qualia, it may become much better at inflicting pain and suffering. It can do a lot of self-experimentation to determine: what’s the worst thing I can do to you? So in a way it could actually be quite bad if it has this internal access.

Xerxes Voshmgir: When it comes to solutions for the future of artificial intelligence, specifically for a healthy evolution, or maybe revolution, towards a healthy, humanistic future, a future friendly to humans: what do you think needs to happen on a major scale?

Roman Yampolskiy: So, it looks like we need a little more time to think about it, and a lot more people thinking about it; right now we’re talking about dozens, maybe. We need this to be a major effort, and getting some more time, some sort of moratorium on certain types of technology, may buy us some time. I’m not sure it’s possible or enforceable, but in principle it would be a good thing.

Xerxes Voshmgir: Okay, so you feel this needs to happen on the level of the United Nations? Or what institution do you think would be the proper one, the proper level, for something like this to happen?

Roman Yampolskiy: It’s happening at a lot of global organizations: the UN is one, the World Government Summit, the European Union. All of them have panels on AI, AI ethics, AI governance, but a lot of those things are so high-level that they don’t actually provide technological solutions. So you get this assurance of: “We just signed this document saying AI should be good for humanity.” But it has no impact; it’s just a symbolic gesture.

Xerxes Voshmgir: If the question is really to have something, as you said, that does have impact, what do you think needs to happen? 

Roman Yampolskiy: So, as with the pandemic we’re experiencing: it seems that when bad things happen, people start to realize that worse things can happen. In a way, some of those very innocent accidents we have with AI are very good, in that they’re starting to open people’s minds to the possibilities. If this silly chat bot can cause this much damage, and this self-driving car can cause this much, what happens when the defense AI in control of nuclear weapons has an accident? What happens when an AI designed for developing biological agents, novel biological entities, has an accident? That would be bad for us. So slowly people start considering something they never considered before. For many years, whether your technology would have negative impact was not part of the debate at all; the question was who can do it first, who can reach higher capability levels. That’s all that mattered.

Xerxes Voshmgir: What do you think is a fairly realistic scenario on this level of institutions, one that is not only theory but actually practical for people who work in the field of artificial intelligence? What would be the necessary evolution, or revolution, for us to move forward in a healthy way?

Roman Yampolskiy: So, it would help if we shifted our funding and resources from developing, specifically, militaristic, weaponized AI to working more on safety and security issues, at least for a while. It may help, but it doesn’t mean that there is an actual solution. Every solution I’ve seen so far has significant problems, and the ones which are problem-free are so futuristic that if I told you about them it would sound like science fiction; they wouldn’t be accepted by politicians or by regular people. So first it’s important to figure out: is there a solution at all? Maybe the problem is not solvable. That’s quite possible, and that’s part of my research. I’m trying to understand: can we solve the problem? What would the solution look like? Is it desirable? But it’s ongoing research. We don’t have solutions; we don’t have many answers.

Xerxes Voshmgir: And when it comes to the utopian scenarios, would you like to elaborate on that?

Roman Yampolskiy: Sure. So it seems the main problem is that we don’t agree: what you want, what I want, what 8 billion people want is very different. You mentioned the simulation hypothesis. If we do manage to control a super-intelligent entity, perhaps all of us can get individual virtual universes where whatever you want happens in yours: you can have other agents in it, you can be alone, you can be king, you can be a slave. It’s up to you. The point is, it’s customized and you don’t have to negotiate with anyone. If you want to, you can bring your friends and family, but the point is that we don’t have to give up what we consider important to satisfy someone else. That’s the only solution I’ve found so far which allows for this merging of human values; everything else creates conflict. At best you have to sacrifice: somebody is an atheist, somebody is religious, somebody is vegan, somebody likes barbecue. You cannot get all of them to follow the same rules and be happy. So if you create separate entities, separate universes, personalized universes for everyone, it becomes a possibility. And if you think we live in a simulation, perhaps this is already some experiment by another civilization in that direction.

Xerxes Voshmgir: Maybe we’ll go into simulation theory a little more in a second, but wouldn’t a pre-stage of what you just talked about be if we simply created virtual-reality worlds where people can basically live and create whatever they like, with the technological capabilities we already have right now?

Roman Yampolskiy: Correct, and that’s what happens. You’re right, you have virtual reality, and the final step is that it’s not populated with everyone else in your community, so you don’t have to compromise. Right now, if you go to something like Second Life, there are other agents, and you just transfer your existing problems into that virtual environment. You can still have the same arguments. It doesn’t matter what it is: If it’s your universe, you decide what happens in it. You have complete control. You become godlike.

Xerxes Voshmgir: And, I mean, I’m sure many people who listen to this podcast don’t know about simulation theory. Would you like to elaborate a little bit on simulation theory?

Roman Yampolskiy: Sure. So it’s a kind of scientific way of saying we live in a created world. Religion has been saying it for a while; this is a statistical argument. If we continue developing better and better video games and virtual worlds, at some point they will have very smart AIs in them, conscious AIs, and the graphics will be amazing. And if many people play many video games, many such virtual worlds, how do you know if you are in the real world or in one of millions of those games? Statistically, it’s more likely that you are in the game. And that’s the argument: that in the future we will in fact do this, and that we are right now living in such a simulation. We are agents in that very realistic-looking world.
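The counting intuition behind this statistical argument can be sketched in a few lines. This is an illustrative simplification, not from the interview: it assumes one base reality, some number of indistinguishable simulated worlds, and an observer placed uniformly at random across all worlds.

```python
def probability_simulated(num_simulations: int) -> float:
    """P(a random observer is in a simulation), assuming one base
    reality, num_simulations indistinguishable simulated worlds, and
    the observer equally likely to be in any world (an illustrative
    assumption, not a claim from the interview)."""
    total_worlds = num_simulations + 1  # simulations plus base reality
    return num_simulations / total_worlds

# With a million simulated worlds, the odds of being in base reality
# are one in a million and one.
print(probability_simulated(1_000_000))
```

The point of the sketch is only that as the number of simulated worlds grows, the probability of being in the one real world shrinks toward zero; the actual philosophical argument rests on further assumptions about whether such simulations ever get built.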

Xerxes Voshmgir: What are you currently working on? What is your current research, and in which direction is it heading?

Roman Yampolskiy: All right. So as I mentioned, I’m trying to understand what we can do, what is possible. I’m looking at different limits and impossibility results in AI safety, results like the unpredictability of the behavior of superintelligent systems: we cannot know what they’re going to do. Unexplainability: they are black boxes; even if they make a good decision, we don’t understand how that decision has been reached. And of course the fundamental questions: “Is control possible? Can we control them? And what does it mean? Is it direct control, is it some sort of implicit control, and what are the limits of that?”

Xerxes Voshmgir: When you look into the future of your own life, and if you look back from your deathbed, basically, at your legacy: Which impact do you want to have had in this lifetime, with your work, on the people around you and on humanity?

Roman Yampolskiy: Well, in a local context, for my family, I want to provide a better future for my kids. So if I can create a world where we have beneficial intelligent systems helping us cure diseases and discover new scientific breakthroughs, it will be a great success, as opposed to a malevolent super-intelligence which does nobody knows what. So hopefully I can look back and say: “Okay, I helped to steer research in the right direction.”

Xerxes Voshmgir: Okay. Thank you for the interview. It was great having you, and it was very interesting to hear about your research and what you talk about. And maybe you’ll be back again at some stage.

Roman Yampolskiy: Thank you so much. 

Xerxes Voshmgir: Thank you for staying tuned for this edition of Challenging ParadigmX, this week with professor Roman Yampolskiy. If you enjoyed this episode, please feel free to share it, so Roman’s message gets spread even further. And if you like my podcast, feel free to subscribe, write me a review, or send me a message with any questions. If you have ideas or anything else, I’m happy to hear from you. I wish you a good week and see you next week. Ciao.
