[00:00:00.99] SAMIK CHANDARANA: So I recognize a lot of people in this room. But for those of you that don't know me, I'm Samik Chandarana. I'm part of the team that looks after applied ML and AI. And I'm obviously very lucky to have Manuela Veloso, who I'm joined at the hip with on the AI research side.
[00:00:15.99] So today, we have the fourth lecture in the series that Manuela has been organizing. When Manuela joined the firm, we were very excited because we wanted to make sure that on top of using her own mind, we actually delved into her network. And part of her network today, we're very lucky to welcome Professor Wooldridge from Oxford. Manuela will tell you a little bit about him, and then we get on and hear from the man himself.
[00:00:43.28] MANUELA VELOSO: Perfect. Thanks all for coming. And we are trying to put out some more chairs. But also, welcome to everyone who is online. So it's a pleasure to be here and, actually, to have our first distinguished lecture hosted in London. The other three were hosted in New York. We are planning on coming every other time, actually, to London to host it here. So thanks all for coming.
[00:01:10.21] It's a great pleasure for me to introduce without further ado the speaker today, who is Professor Michael Wooldridge. And I have known-- where's Mike?
[00:01:21.73] Oh, right here. I was expecting you to be there. I'm like, wait, he's up here. I've known Mike for, I believe, more than 20 years. We have been faculty in an area of AI which we call multi-agent systems, multi-agent learning. And Professor Wooldridge will tell you more about this.
[00:01:43.41] But we have been partners in crime in this particular area of research. In fact, we co-edited a book together, and we each have our own book on AI. And Professor Wooldridge has had a distinguished career researching AI. He's the current head of the Department of Computer Science at Oxford University.
[00:02:07.53] And he was also President of the European Association for Artificial Intelligence-- what we call ECCAI. The International Joint Conference on AI also has its own board, and Professor Wooldridge was the president there as well.
[00:02:24.45] And then, just so you know, in academia there are honors-- associations of which we can be fellows. The main associations in our area are the ACM, the Association for Computing Machinery; AAAI, the Association for the Advancement of AI; and the European Association for the Advancement of AI. And Professor Wooldridge is a fellow of all three institutions. In addition, he has published more than 500 articles.
[00:02:59.85] And it's a great pleasure and, in fact, a great honor to welcome him to JP Morgan. So we'll let Professor Wooldridge speak for as long as he wants-- hopefully, within time-- and then we'll have a session of questions. If you could hold your questions until the end of the lecture, that would be great, unless you have a real clarification question, which you can ask. But otherwise, let's please welcome Professor Wooldridge, and we are going to hear about Siri. Let me introduce you, Siri.
[00:03:40.47] MICHAEL WOOLDRIDGE: Thank you, Manuela. Thank you, colleagues of Manuela for doing me the honor of inviting me. I'm used to hanging out in Oxford, and when you first get to Oxford, there's all these grand buildings, these ancient medieval buildings. It's very intimidating. After a while, you get used to it. And you forget when people come in how intimidating it is. And then I got off the train this morning at Canary Wharf, and it's like, oh, my god.
[00:04:03.06] So I'm trying to cope with that. I'm very grateful to have the opportunity to speak. It's very exciting to see an organization like JP Morgan start to take AI so seriously. I think it's going to be incredibly exciting to see what Manuela achieves. Everybody in the field knows that Manuela has boundless energy, boundless determination, boundless enthusiasm, which she communicates to everybody around her. So I think you've made a great hire. And there are exciting times for AI at JP Morgan ahead. And I look forward to seeing what's happening there.
[00:04:35.34] So before I begin, a small confession. This morning, I left Oxford at about 7:15. And as I was leaving, my wife said, you're not wearing a tie. I don't normally wear a tie. Actually, this is me looking smart, I have to say. And she said, you're going to JP Morgan. Everybody is going to be wearing a tie. And I look around the audience, and I see maybe two-- two-- two people wearing ties. You must be the bosses.
[00:05:00.69] AUDIENCE: [INAUDIBLE]
[00:05:02.78] MICHAEL WOOLDRIDGE: OK. So let me get started. So this is a talk about artificial intelligence. But I have to begin with a confession. There is no deep learning in this talk whatsoever, OK? So sorry if that comes as a shock to some of you.
[00:05:18.34] But actually, the truth is, AI is a very broad church. And deep learning is kind of the poster boy or poster girl for the successes of AI at the moment. But it is one thread in a very rich tapestry of AI research, which is now delivering really impressive results. It isn't all just about deep learning.
[00:05:38.49] And in fact, if you just remember one message from what I'm talking about today, just take that one away, all right? It's not all about deep learning. There is an awful lot more going on than that.
[00:05:48.21] What I want to talk about particularly, and it is an area, my own field, which Manuela has also worked in, a field called multi-agent systems. I've never really done anything else. This is what I've been doing since around about 1989 when it was an idea, right, that we thought would come to fruition. And it's an idea, I believe, which now has come to fruition.
[00:06:09.34] And I give you a little bit of a flavor about how that happened. So OK. So I'm going to start off by motivating the idea of multi-agent systems, what are multi-agent systems.
[00:06:33.15] It worked a minute ago.
[00:06:34.54] AUDIENCE: [INAUDIBLE] AV. Oh. Sound?
[00:06:38.91] MICHAEL WOOLDRIDGE: OK, brilliant. Thank you. So I'm going to start off by motivating the idea of multi-agent systems and give you an idea of where this idea came from. And I'll show you a little video, which Manuela will have seen a million times before. Many of you will also have seen it but many of you won't.
[00:06:52.22] And what's remarkable about this video, which was made in the 1980s, is how clearly it anticipated a bunch of things which we're now seeing happening. So I'll show you that video. And that will lead me in to discuss the idea of multi-agent systems.
[00:07:07.49] And then, the bulk of my talk-- so this is the high-level motivational stuff about why we're doing what we're doing and why this is a natural thing to be doing and what we might want to be doing. In the middle part of the talk, there's some more technical material. And the more technical material is really to do with an issue which arises in multi-agent systems, which is that they are inherently unstable things.
[00:07:28.77] We build things which are inherently unstable. We need to find ways of understanding and managing their dynamics. And so the technical part of the talk is about understanding the equilibria of multi-agent systems and the tools that we've developed that enable you to do that.
[00:07:45.25] So I'm going to talk about two different approaches there. The first is an approach based on ideas in game theory. And game theory is an area of economics which has to do with strategic interaction. And I'll show you how you can apply game theoretic ideas to understand the dynamics of multi-agent systems, what behaviors those multi-agent systems might exhibit.
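To make the game-theoretic idea concrete, here is a minimal sketch of checking for pure-strategy Nash equilibria-- the stable outcomes where no agent can gain by unilaterally changing its action. The two-agent coordination game and its payoffs below are invented for illustration, not taken from the talk:

```python
# Minimal sketch: pure-strategy Nash equilibria of a two-agent game.
# The payoff matrix is illustrative -- a coordination game where both
# agents do best by choosing the same action.

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("A", "A"): (2, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 1),
}
actions = ["A", "B"]

def is_equilibrium(row, col):
    """Neither agent can improve its payoff by deviating unilaterally."""
    r_pay, c_pay = payoffs[(row, col)]
    row_ok = all(payoffs[(r, col)][0] <= r_pay for r in actions)
    col_ok = all(payoffs[(row, c)][1] <= c_pay for c in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r in actions for c in actions if is_equilibrium(r, c)]
print(equilibria)  # [('A', 'A'), ('B', 'B')] -- two stable outcomes
```

Both outcomes survive the check because in each, neither agent improves by switching alone-- which is exactly the notion of equilibrium used here to understand what behaviors a multi-agent system can settle into.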
[00:08:05.98] And that's one approach, which has some advantages and disadvantages. I'll show an alternative approach very, very briefly, an approach based on agent-based simulation, where instead of trying to formally analyze the equilibria of the system at hand, what we try to do is just simulate the system in its entirety. And these are both complementary ideas-- they're not in competition because they deal with quite different types of systems-- but both ideas whose time has come.
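And the simulation alternative, in miniature: rather than solving for equilibria analytically, just run the system and observe where it settles. Everything below-- the population size, the binary choice, the peer-sampling update rule-- is an invented toy, not a model from the talk:

```python
import random

# Toy agent-based simulation (all parameters illustrative): each agent
# holds a binary choice and, each round, copies the majority choice among
# a few randomly sampled peers. We don't derive the equilibrium; we just
# run the system and watch its dynamics.

random.seed(42)
N = 100
agents = [random.choice([0, 1]) for _ in range(N)]

for _round in range(50):
    for i in range(N):
        peers = random.sample(range(N), 5)
        votes = sum(agents[j] for j in peers)
        agents[i] = 1 if votes >= 3 else 0  # adopt the sampled majority

print(sum(agents))  # how many agents ended up choosing 1
```

Run it with different seeds and the population tends to lock in to one choice or the other-- an equilibrium you observe by simulation rather than derive by analysis, which is the trade-off between the two approaches.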
[00:08:36.00] And then I'll wrap up with some conclusions. So let's start by motivating multi-agent systems. So I wrote a textbook on multi-agent systems. It's appeared in a couple of editions now. And the first edition I wrote in around about 2001.
[00:08:50.88] And the very first page of that first edition essentially had this slide on it. It said "what is the future of computing?" And the future of computing, I reckoned in the year 2000, was the following. The future of computing was to do with ubiquity, interconnection, delegation, human orientation, and intelligence.
[00:09:09.22] So what do I mean by those things? Sorry, I should have added, by the way: looking back, I think I was absolutely right. I don't know very much in life, but what I do know is that, actually, this is how things turned out, and this is how things will turn out for the foreseeable future in computing.
[00:09:28.56] So let me explain what's going on. Ubiquity is just to do with Moore's law, right? Moore's law-- the number of transistors that you can fit on a chip-- basically means that computer processing power just gets exponentially cheaper year on year. The processors that are used to drive computers get smaller, have lower power requirements, and get more powerful year on year.
[00:09:52.53] And all that means is that you can put computers into places and in devices that you couldn't have imagined. So ubiquity just means putting computer processing power everywhere. And that's made possible by Moore's law.
[00:10:05.71] So a neat example of this: a home computer magazine, which cost 5 pounds to buy in the UK, recently handed out on the front cover a Raspberry Pi computer-- free. You got it free with the magazine: a computer which, 10 years ago, would have had the power of a typical desktop computer of the day. So that's ubiquity. We can put computer processing power everywhere. Any device that we might care to build, we can augment with computer processing power.
[00:10:32.79] And the second aspect of that is interconnection. These devices are connected, right? When I was studying computer science as an undergraduate in the 1980s, I did a course on networks. And the guy that was teaching this course told us, look, these networked systems, these distributed systems, they're really hard to build. But don't worry, because you'll probably never make one, right?
[00:10:53.61] I mean, no lecturer has ever been more wrong than that lecturer was on that day, right? Because we now realize-- and this was a change in the way that people thought about computing-- we now realize that, actually, networked systems, distributed systems and interconnection, communicating systems, these are the norm. These aren't the exception. And that's resulted in a fundamental shift in the way that people think about computing.
[00:11:20.05] OK. Then the third trend is a bit more subtle. It's towards delegation. So what do I mean by delegation? Well, some extreme examples of delegation are, again, if I go back to the mid 1980s, Airbus were talking about fly-by-wire aircraft getting rid of the human in the loop. And the aircraft onboard computers would actually have control and, in some circumstances, could overrule the instructions of human pilots.
[00:11:46.08] And a lot of people were outraged at this. They thought this was absolutely the end of civilization-- that machines were taking over. Well, some things went wrong. But actually, the truth is, it ended up being a really good thing.
[00:11:57.60] What we're now seeing is the shift towards driverless cars. Full, level-five autonomy, where you jump in the car and just state your destination-- that's some time away. But nevertheless, there are all sorts of features-- smart cruise control features on cars-- to which we are delegating the task of driving the car.
[00:12:18.13] But those are extreme examples of a much wider phenomenon. We hand over control of ever more of our lives and, very relevant for JP Morgan, ever more of our businesses to computers. We are happy for them to make decisions on our behalf, OK? And that's a trend towards delegation.
[00:12:38.64] Human orientation-- what do I mean by that? When Alan Turing arrived in Manchester at the end of the 1940s to program the first computer-- and the first programmable, stored-program computer, the Manchester Baby, was, I think, their chief claim to fame-- it actually had a bunch of switches on the front of it.
[00:12:56.37] And you had to flip a switch this way to put a one in this particular memory location and then flip it the other way to put a zero there. To program that computer, you had to understand it right down at the level of-- and it wasn't even transistors then-- at the level of the valves that were in this machine. Nothing was hidden from you.
[00:13:13.95] Well, this was an incredibly unproductive way of programming. People were not very good at programming computers in that way. And if you learned to program on Manchester's machine, you couldn't have gone to Oxford and programmed Oxford's machine, because they had a completely different architecture.
[00:13:27.60] The trend since then, firstly, with the arrival of high-level programming languages-- Fortran and COBOL in the 1950s, then on towards languages like ALGOL, Pascal and C-- meant that you could learn to program on one machine and transfer your skills to another. But the key point is what those languages present you with is ever more human-oriented ways of thinking about programming.
[00:13:52.72] Object-oriented programming, which is the state of the art-- I'm sure there are some programmers in the room, and you probably do object-oriented programming-- takes its name from the idea that it was inspired by the way that we interact with objects like this clicker in the real world. And it was supposed to reflect that. It's a human-oriented way of thinking about computing. And interfaces-- human-computer interfaces-- will get ever more human-oriented, and we'll see that in a moment.
[00:14:19.33] And then, the final one was intelligence. Now, by intelligence, here, I mean two things. The first is simply that the scope of the things we get computers to do for us continually expands. Year on year, computers can do a wider range of tasks than they could do previously.
[00:14:38.37] Now, that's a very weak sense of intelligence-- really just decision-making capability. But what we've witnessed over the last decade is an explosion of intelligence in the AI sense: we're now seeing computers that are capable of a much richer range of tasks than we could have imagined when I wrote the first edition of this book.
[00:15:01.41] So I say the future of computing, with absolute certainty, I think, lies in that space. It is towards ever more ubiquity, ever more interconnection, ever more delegation, ever more human orientation, and ever more intelligence. Now, that's still a wide space, and that gives us a huge range of possibilities for where we might end up. But there are a number of other trends in computing, each of which picks up on different aspects of these trends.
[00:15:32.05] So if you look at the trends towards interconnection and intelligence, they take you towards a thing called the Semantic Web. After the World Wide Web was developed, the idea of the Semantic Web was putting intelligence on the web-- having smart web browsers so that, for example, if you did a search for the weather in Canary Wharf, your browser would be smart enough to realize that if it couldn't find a website which referred to the weather in Canary Wharf, then the weather in East London, or just the weather in London, would be a reasonable proxy. That involves some reasoning-- the kind of common sense reasoning that you do, but your web browser doesn't. And that's the Semantic Web. And that's been, historically, over the last 20 years, a big tradition in AI: adding AI to the web.
[00:16:15.93] Peer-to-peer-- you don't hear too much about it these days, but 15 years ago, it was all the rage. Peer-to-peer is just one aspect of this ubiquity and interconnection. Similarly, cloud computing and the Internet of Things.
[00:16:28.62] The Internet of Things is just the idea that all our devices-- our toaster, our fridge, our television-- are all connected to the web. It's all one, big interconnected mass. Right now, what you might want to do with that, I don't know. But the point is it's a really exciting potential.
[00:16:44.57] But where I want to go is just pick up on this last manifestation of these trends. And this is the trend towards what we'll call agents. And at this point, I'm going to show this video that I referred to. So this is an old video, OK? It's a video that came from Apple computers. And so you have to set the scene.
[00:17:04.93] This is the late 1980s. John Sculley was then CEO of Apple. He was the guy that evicted Steve Jobs-- one of the famous business decisions of all time. They'd just released the Mac. The Mac was a smash hit. But John Sculley was already worrying about what would come after the Mac.
[00:17:23.36] And the innovation on the Mac was the user interface. It was suddenly a human-oriented interface. It was an interface that people could use without specialist training about interfaces. And so to think about what would come next, they came up with this video, which is called Knowledge Navigator.
[00:17:44.86] [VIDEO PLAYBACK]
[00:17:45.86] [MUSIC PLAYING]
[00:18:11.76] MICHAEL WOOLDRIDGE: iPad.
[00:18:14.25] - You have three messages. Your graduate research team in Guatemala, just checking in, Robert Jordan, a second-semester junior requesting a second extension on his term paper, and your mother reminding you about your father's surprise birthday party next Sunday.
[00:18:31.55] Today, you have a faculty lunch at 12 o'clock. You need to take Kathy to the airport by 2:00. You have a lecture at 4:15 on deforestation in the Amazon rainforest.
[00:18:43.48] - Right. Let me see the lecture notes from last semester. No, that's not enough. I need to review more recent literature. Pull up all the new articles I haven't read yet.
[00:18:59.90] - Journal articles only?
[00:19:01.82] - Mm-hmm, fine.
[00:19:03.02] - Your friend, Jill Gilbert, has published an article about deforestation in the Amazon and its effects on rainfall in the sub-Sahara. It also covers drought's effect on food production in Africa and increasing imports of food.
[00:19:18.19] - Contact Jill.
[00:19:20.20] - I'm sorry. She's not available right now. I left a message that you had called.
[00:19:25.16] - OK. Let's see. There's an article about five years ago-- Dr. Flemson or something. He really disagreed with the direction of Jill's research.
[00:19:36.76] - John Fleming of Uppsala University. He published in the Journal of Earth Science of July 20 of 2006.
[00:19:44.58] - Yes, that's it. He was challenging Jill's projection in the amount of carbon dioxide being released to the atmosphere through deforestation. I'd like to recheck his figures.
[00:19:54.91] - Here is the rate of deforestation he predicted.
[00:19:57.58] - Mm-hmm. And what happened? Mm. He was really off. Get me the university research network. Show only universities with geography nodes. Show Brazil. Copy the last 30 years at this location at one-month intervals.
[00:20:31.71] - Excuse me. Jill Gilbert is calling back.
[00:20:34.89] - Great. Put her through.
[00:20:36.84] - Hi, Mike. What's up?
[00:20:38.19] - Jill, thanks for getting back to me. Well, I guess that new grant of yours hasn't dampened your literary abilities. Rumor has it that you've just put out the definitive article on deforestation.
[00:20:48.69] - Aha. Is this one of your typical last-minute panics for lecture material?
[00:20:54.18] - No, no, no, no, no. That's not until, um--
[00:20:58.41] - 4:15.
[00:21:01.50] - Well, it's about the effects that reducing the size of the Amazon rainforest can have outside of Brazil. I was wondering, um-- it's not really necessary, but--
[00:21:11.01] - Mm, yes?
[00:21:13.77] - It would be great if you were available to make a few comments-- nothing formal. After my talk, you would come up on the big screen, discuss your article, and then answer some questions from the class.
[00:21:24.24] - And bail you out again? Well, I think I could squeeze that in. You know, I have a simulation that shows the spread of the Sahara over the last 20 years. Here, let me show you.
[00:21:40.32] - Nice. Very nice. I've got some maps of the Amazon area during the same time. Let's put these together.
[00:21:58.62] - Great. I'd like to have a copy of that for myself.
[00:22:02.67] - What happens if we bring down the logging rate to 100,000 acres per year? Hmm. Interesting. I can definitely use this. Thanks for your time, Jill. I really appreciate it.
[00:22:23.90] - No problem. But next time I'm in Berkeley, you're buying the dinner.
[00:22:28.63] - Dinner, right.
[00:22:29.64] - See you at 4:15.
[00:22:31.20] - Bye-bye.
[00:22:33.31] [MUSIC PLAYING]
[00:22:36.66] - While you were busy, your mother called again to remind you to pick up the birthday cake.
[00:22:41.16] - Fine, fine, fine. Print this article before I go.
[00:22:44.97] - Now printing.
[00:22:46.70] - OK. I'm going to lunch now. If Kathy calls, tell her I'll be there at 2 o'clock. Also, find out if I can set up a meeting tomorrow morning with, um, Tom Lee.
[00:22:56.99] [END PLAYBACK]
[00:22:57.91] MICHAEL WOOLDRIDGE: OK. So he's a professor at Berkeley, apparently. They have a somewhat more relaxed life than I would imagine.
[00:23:07.48] So what's remarkable about the video is the number of things that it anticipated. So number one, you saw it, right? There was an iPad-- a 1980s iPad, but clearly, it was a tablet computer. There were no tablet computers, and they were not on the horizon at the time. But that was clearly the way they thought it was going. It had a little selfie camera-- well, actually, quite a big selfie camera. So they anticipated that.
[00:23:29.65] Other stuff that's interesting-- and this is before the internet was a big thing-- is that they anticipated web search, or something like it. They were already thinking about the devices that people would have in their homes being connected in that way. They also picked up on the idea of visualization-- the way that he's visualizing that data and putting together those different data sources to be able to visualize them in neat ways. That's been a huge growth area over the last 20 years.
[00:24:01.66] But the thing that we picked up on, the thing that drove my community, is the idea of an agent. The thing that he was interacting with on the tablet screen was not a person. It was an animated piece of software. Actually, there's an interesting story there: clearly, the idea was that they wanted this animated piece of software to look as lifelike as possible.
[00:24:24.31] And actually, the received wisdom these days is that if you're doing something like that, you really don't want it to look as lifelike as possible, because you don't want to mislead people into thinking that they're talking to another human being. You've got to explicitly show them that they're talking to a piece of software.
[00:24:42.05] So what my community picked up on is that notion of an agent. And what we saw is a shift in how we interact with software. On the 1984 Mac screen, you went to Microsoft Word, you went to a menu, and you dragged down that menu to select some item. Everything happened because you made it happen; the software was a passive servant, only doing the things that you explicitly told it to. Instead, the software would become a cooperative assistant-- something actively working with you, taking the initiative to work with you on the problems that you were working on. So it's a fundamentally different shift, away from this idea of software being something that you do stuff to, towards something that works with you in the same way that a good human personal assistant would work with you, OK? And that's exactly the metaphor that they had there for their agent.
[00:25:47.32] So that was the idea. That vision is what launched the agents community at the beginning of the 1990s. Just remember, that video was made at a time when Ronald Reagan was president in the United States, Margaret Thatcher was prime minister here, and Nigel Lawson was Chancellor of the Exchequer in the UK. That's how old it is. But actually, a lot of what they predicted was pretty much bang on the nail. It's an impressive vision.
[00:26:12.71] OK. So the first research on agents started in the late '80s and early '90s. And a lot of the thrust of the work at the time was about building specific applications like software that would help you read your email-- so something which would help you prioritize your email-- software that would help you browse the web-- anticipate which link, for example, you were going to follow next and proactively help you with the tasks that you're working on.
[00:26:40.30] But it took 20 years from that video before we actually saw the first commercial agents really start to take off. And the one that grabbed my attention at the time was Siri, because I knew some of the people involved-- and so did Manuela, I think. The people involved in Siri were working at the Stanford Research Institute in the US. And where they came from is exactly this work on agents-- software agents-- that we were doing in the 1990s.
[00:27:07.69] And then, we've seen, of course, a flurry of others-- Alexa from Amazon, Cortana from Microsoft, Bixby from Samsung. I've never actually seen that one. And there are a whole host of other software agents, OK? And those software agents are embodying exactly those ideas-- that idea of human orientation, moving away from machine-oriented views towards human-oriented views, presenting you with an interface which you can understand and relate to through your experience of the everyday world.
[00:27:37.90] And the most important aspect of that in that video is communicating with just very natural language, right, communicating in English, which isn't nuanced, which isn't in some kind of strict subset of English. It's not some special artificial language. You're just talking as you would to a human assistant.
[00:27:58.24] OK. So why did this idea of agents actually take off? Well, it's no accident that Siri was released when the iPhone got sufficiently powerful that it could actually cope with the software. There's an awful lot of very smart AI under the hood in Siri. And understanding spoken language requires a lot of processor time, right? So it could only happen when we had sufficiently powerful computers. In other words, it was the ubiquity-- it was Moore's law that took us there and made that feasible.
[00:28:35.22] Advances in AI also made competent voice understanding and speech understanding possible, OK? It couldn't have happened in the 1980s because we just didn't have the compute resources available. We didn't have the data sets that we now have available to train speech understanding programs, and so on.
[00:28:56.49] But probably the most important is that the supercomputer that you all carry around with you in your pocket, right, the smartphone that you have in your pocket-- I mean, we call it a phone. That's the dumbest thing it does, right? That's the most trivial of things that it actually does.
[00:29:10.71] It's a supercomputer. It's equipped with incredibly powerful processors and massive amounts of memory. It knows where you are. It can hear its environment. It can sense movements. And it's connected to the internet. And it's the fact that it has all that stuff which has now made these agents, in the way that they envisaged them back in the 1980s, possible.
[00:29:33.24] So the agent-based interface, whether or not it's realized in the way that Apple envisaged in that video-- the agent-based interface, the idea that you interact with software through that kind of human-oriented interaction, is just inevitable. It is the future of computing because there is no alternative, right? If you think about your smart home-- and, in the future, all homes will be smart homes-- there isn't really any other feasible way to manage it other than through a kind of agent-based interface. It's got to happen, right?
[00:30:09.86] If you think about a sector like banking, right, where you download an app which you interact with to manage your accounts-- where that's going, inevitably, is that that app is going to be more and more like an agent: somebody that's working with you to help you manage your accounts, to help you manage money. Not just something which dumbly does something when you tell it to, but something which is actively helping you to manage your finances.
[00:30:39.00] Now, rich agent interfaces-- really rich agent interfaces-- are still some time away, OK? What I mean is that today's agents are still very limited in the kinds of language that they can understand. You don't have to dig very deep to understand the limitations of Siri, in particular, actually. But even with the better ones, you don't have to dig very deep to understand their limitations.
[00:31:02.63] But what I want to now dig into is one aspect of this which has really been ignored. So if I say to Siri, Siri, book an appointment to see Manuela next week, why would Siri phone up Manuela herself? Why wouldn't my Siri just talk to her Siri to arrange this?
[00:31:21.89] That's what a PA would do, right? They wouldn't go straight to the boss. They would go to the other assistant. In other words, my field is concerned with what happens when these agents can talk to each other directly, OK? If I want to book a restaurant, why would my Siri phone up the restaurant?
[00:31:41.78] There was this famous example-- I forget whose software it was; it might well have been Apple-- that did exactly this, phoning up a restaurant to book a table. You may have seen it in the news last October or so. But why would they do that? Why wouldn't they just go direct to the agent at the other site? It just makes perfect sense, all right?
[00:32:00.23] We were discussing over lunch that one of the frustrations in my life, and I'm sure in many of yours as well, is diary management. I spend crazy amounts of time juggling meetings and trying to find suitable times. Why don't we have agents that can do that?
[00:32:11.96] This is not, actually, AI rocket science. It ought to be feasible to have such things now. And there, why wouldn't my Siri just talk to your Siri to arrange this? That is the vision of multi-agent systems, right? Multi-agent systems are just systems with more than one agent.
[00:32:29.45] So if we're going to build multi-agent systems, what do they need to do? Well, my agent needs to be able to talk to me, right? My Siri needs to be able to talk to me, but it also needs to be able to interact with other agents. And I don't just mean the plumbing-- the pipes down which data is sent. I mean the richer social skills that we have.
[00:32:48.84] So for example, my Siri and Manuela's Siri need to be able to share knowledge. My Siri and my wife's Siri need to share skills and abilities and expertise. If I've acquired some expertise in something, I want my Siri to be able to share it with my wife.
[00:33:04.79] Actually, in neural nets, this is called transfer learning. It's very difficult to extract expertise out of one neural network and put it into another. It's a big research area at the moment.
[00:33:15.08] How can agents work together to solve problems, coordinate with other agents, or, really excitingly, negotiate? Just something as simple as booking a meeting is a process of negotiation. I have my preferences. I don't like meetings before 9:00 in the morning. I like to keep my lunchtimes free. But maybe you have different preferences. How are our agents going to reach agreement? They need to be able to negotiate with each other.
[00:33:41.91] All of these things have been big research areas over the last 20 years in multi-agent systems. And we're beginning to see the fruits of that research make it out into the real world. So just one example. If you've booked an Uber recently-- well, firstly, shame on you because they're not nice people-- but secondly, what happens when you book a ride, that process of allocating somebody to pick you up and do your transport, is based on a protocol called the contract net protocol.
[00:34:18.08] The process through which that happens is a protocol that was designed within the multi-agent systems community. And it has a ton of other applications out there right now, as well. It's probably the most implemented cooperative problem-solving process, allowing you to allocate tasks to people in a way that everybody is happy with.
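As a rough sketch of the contract net protocol just mentioned: a manager announces a task, contractors bid on it, and the task is awarded to the best bid. The ride-allocation framing and all names below are invented for illustration; this is not Uber's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Driver:
    name: str
    position: float  # 1-D position, to keep the example tiny

    def bid(self, pickup: float) -> float:
        # A contractor answers the task announcement with a bid:
        # here, its distance to the pickup point (lower is better).
        return abs(self.position - pickup)

def contract_net(drivers, pickup):
    """Manager announces the task, collects bids, awards to the best bid."""
    bids = {d.name: d.bid(pickup) for d in drivers}  # announcement + bidding
    winner = min(bids, key=bids.get)                 # award phase
    return winner, bids

drivers = [Driver("ana", 0.0), Driver("bo", 4.0), Driver("cy", 9.0)]
winner, bids = contract_net(drivers, pickup=3.0)
print(winner)  # bo: closest driver to the pickup point
```

The real protocol also covers refusals, acknowledgements, and contractors sub-contracting tasks onward, but announce-bid-award is the core loop.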
[00:34:36.66] So all of these things are active areas of research. If I want my Siri to talk to your Siri, my Siri and your Siri need social skills, the same kind of social skills that we all have-- cooperation, coordination, negotiation, the ability to solve problems cooperatively, OK?
[00:34:57.32] So the debate about general AI, the grand dream about AI, has sort of kicked off again recently because of these advances. I'm not a big believer that that's going to happen. Well, I'm not a believer at all that that's going to happen any time soon.
[00:35:10.85] And nor do I envisage that an agent will have these skills in a very, very general sense. But for specific applications, like meeting booking, right, negotiation skills for meeting booking, protocols that will allow agents to book meetings-- taking into account everybody's preferences so that, for example, I can prove in a mathematical sense that my agent is not going to get ripped off, right, it's not going to end up with a bad deal, that we end up with an outcome which is fair to all of us-- these are all big areas in the multi-agent systems community that we've made a lot of progress with.
[00:35:47.32] So I want to emphasize multi-agent systems are used today. There was an article by a colleague of ours called Jim Hendler-- I don't know if you remember it, Manuela-- about 15 years ago. And his article was called "Where Are All the Agents?" And he said, well, we've been working on these agents for 10 years, but, actually, I don't see them.
[00:36:04.73] Well, I'd love to have Jim here today because, of course, firstly, we've all got an agent with us, right? You've all got a smartphone in your pocket. There are hundreds of millions of software agents out there in the real world and not just agents that interact with people. There are multi-agent systems.
[00:36:20.48] So high-frequency trading algorithms, in particular, are exactly that. These are algorithms to which we have delegated the task of doing trading. People are out of the loop-- completely out of the loop. And they couldn't be in the loop because the timescales on which high-frequency trading algorithms operate are way, way, way too small for people to be able to deal with in any kind of sense at all.
[00:36:44.76] But here's the thing-- so this is going to introduce me to the next part of the talk-- is that when you start to build systems like high-frequency trading algorithms, they start to get unpredictable. They start to have unpredictable dynamics. So here are a couple of examples of this.
[00:37:02.36] So the October 1987 market crash-- so the guys with ties on will remember that. So was it Black Monday or Wednesday? I forget which. Does anybody remember? It was Black something. And what happened? What led us to this October 1987 market crash?
[00:37:20.77] As with all of these things, there was no one cause. But actually, one of the big contributing factors was that the London Stock Exchange and International Stock Exchange had computerized just a couple of years before. I think they called it the Big Bang in London, right? It was when all the stock markets went computerized, and you went from handing people pieces of paper to actually doing trades electronically.
[00:37:43.24] And people built agents to do automated trading. And they gave agents rules like: if the share price goes down, sell. And you don't have to be an AI genius to see that if every agent has that kind of behavior, then a sudden event-- like a sharp stock price fall for some reason-- creates a cascading feedback effect. And that's exactly what happened. It wasn't the only cause, but it's generally accepted that it was one of the key causes of the October '87 stock market crash.
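The feedback loop described here is easy to see in a toy simulation (this is an illustration of the cascade, not a market model; all parameters are invented): every agent follows "if the price went down, sell", and selling pushes the price down further, so one initial shock feeds on itself.

```python
def simulate(n_agents=100, steps=10, shock=-1.0, impact=0.05):
    price, last_change = 100.0, shock  # a sudden initial fall
    history = [price]
    for _ in range(steps):
        # Every agent follows the same rule: if the price fell, sell.
        sellers = n_agents if last_change < 0 else 0
        change = -impact * sellers  # selling pressure moves the price down
        price += change
        last_change = change
        history.append(price)
    return history

history = simulate()
print(history[0], history[-1])  # 100.0 50.0 -- the initial dip cascades
```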
[00:38:14.95] More dramatically, recently-- 6 May 2010-- we were having a general election here. But over in the US, in the middle of the afternoon over a 30-minute period, the markets collapsed. And briefly, more than a trillion dollars was wiped off the Dow Jones Industrial Average. It was the largest one-day drop in the Dow Jones history.
[00:38:39.90] But it only lasted 30 minutes. The markets bounced back-- they didn't quite regain their position immediately, and it wobbled a bit, but they did regain it. And the joke was, of course, if you were having a cup of coffee at the time, you would have missed the whole thing, right?
[00:38:53.64] This was happening on timescales that people simply couldn't comprehend. By the time they understood that something weird was going on, it was already starting to bounce back. So some very strange things happened. So Accenture were trading at a penny a share for a while-- a very, very brief period of time. And Apple shares, bizarrely, were trading at $100,000 each for a very brief period of time.
[00:39:14.71] The scary thing about this is that, of course, now, all these markets are connected, right? We're not operating in isolation. We're operating in a global marketplace. And a nightmare scenario was that you would have a flash crash at the end of the trading day.
[00:39:30.69] And if you hit the trough-- the bottom of the trough-- at the end of the trading day, nobody knows whether or not this is a real collapse or just a freak phenomenon which is just going to rebound. And then, you've got contagion. It starts to spread over Asia and the rest of the world. And these are very real and very, very scary phenomena.
[00:39:52.64] So the next point in my talk is if we're going to build multi-agent systems-- and the problem is that people are frantically running ahead to build high-frequency trading algorithms, right, and trying to build them faster and using things like AI sentiment analysis on Twitter to drive the decisions that are being made-- then they are going to be prone to these unpredictable dynamics. We need to be able to understand them. We need to be able to manage them.
[00:40:19.03] And at the moment, management is just hitting a kill switch. It's unplugging the computers, right? That's how these things are managed at the moment. I mean, that's really all there is to it.
[00:40:27.16] So let me just briefly give you a feel for one of the two approaches that we look at. And the first approach that we look at is what's called formal equilibrium analysis. So this is relevant for systems where there are small numbers of agents. It doesn't work for big systems for various technical reasons. So the alternative technique that I'll talk about in a moment works for big systems.
[00:40:50.07] But for small systems where there are just a handful of interacting agents, what we can do is we can view this as an economic system and start to understand what its equilibria are and what kinds of behaviors the system would show in equilibrium. So to put it another way, what we do is we view a flash crash as a bug in the system, right?
[00:41:12.55] If our system that we have is exhibiting a flash crash or some other undesirable behavior, what we do is we treat it as a bug, and we say, how did this bug come about, and how can we fix that bug? And so the technology that we apply is exactly the technology that's been developed in computer science to deal with bugs. And the most important of these technologies is a technique called model checking.
[00:41:36.92] And so here, I've got a simple illustration of model checking. So the idea is that this thing here is just a description of a system. It's a little bit more technical-- I said it was going to get a bit more technical, but not too much.
[00:41:47.33] These are the possible states of the system. And then these arrows correspond to the actions that could be performed by the agents in the system. So if the system is currently in this state and some agent does this action, then the system transforms to this state.
[00:42:01.33] And what that gives us is this structure here, which we just call a graph structure. And this is just a model of the possible behaviors of my system. So it could be, for example, that some state down at the bottom here is a bad state, right? This is a flash crash state. And what we want to understand is, how does that flash crash state arise? How can we get to that?
[00:42:22.16] OK. So in model checking, what we do is we use a special language, a language called temporal logic, to express properties of the systems that we want to investigate. So here is a property written in a standard temporal logic. It just says if ever I receive a request, then eventually I send a response. That's what that says. You don't need to worry about the details.
[00:42:42.32] And what the model checker does is it will check whether or not that property holds on some or all of these possible trajectories. So each path that you can take through that graph, right, corresponds to one possible trajectory of our system, right? And imagine that there is some flash crash trajectory where bad things are happening. So a classic example of a query would be something like, eventually, I reach a flash crash. And so what we're asking is, is there some computation in my system that will lead to that flash crash?
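The two questions the model checker answers can be sketched directly on an explicit state graph. Below, `ef` computes "some computation reaches a bad state" (EF in temporal logic) and `af` computes "every computation reaches a bad state" (AF), both as standard fixed-point computations over finite-state systems. The tiny three-state system is invented for illustration; real model checkers work on vastly larger, symbolically represented state spaces.

```python
def ef(graph, bad):
    """EF bad: from which states does SOME path reach a bad state?"""
    result = set(bad)
    changed = True
    while changed:
        changed = False
        for s, succs in graph.items():
            # A state can reach 'bad' if some successor can.
            if s not in result and any(t in result for t in succs):
                result.add(s)
                changed = True
    return result

def af(graph, bad):
    """AF bad: from which states does EVERY path eventually hit a bad state?"""
    result = set(bad)
    changed = True
    while changed:
        changed = False
        for s, succs in graph.items():
            # A state inevitably reaches 'bad' if all successors do.
            if s not in result and succs and all(t in result for t in succs):
                result.add(s)
                changed = True
    return result

# Invented system: "calm" can loop or slide into a sell-off; sell-offs crash.
graph = {"calm": ["calm", "sell_off"], "sell_off": ["crash"], "crash": []}
bad = {"crash"}
print("calm" in ef(graph, bad))  # True: a crash is reachable from calm
print("calm" in af(graph, bad))  # False: calm can loop forever, avoiding it
```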
[00:43:15.80] So this is, again, a very big body of work. And colleagues of Manuela's at CMU won the Turing Award for their work in developing model checking technology because it's industrial strength. It really works, with all sorts of caveats. You can really use this to analyze systems.
[00:43:34.16] And many model checkers are now available. And really, the reason for that is that these model checkers are relatively easy to implement, OK? So the algorithmics of these things are really quite simple. And actually, you can end up with tools that really work if you want to do this.
[00:43:54.17] So the two basic model checking questions, then, are, is there some computation of a system on which my property-- like there is, eventually, a flash crash-- holds, or does that property hold on all computations of the system? Is it inevitable that I'm going to have a flash crash on all the possible trajectories of the system? So those are the two basic questions which are introduced in model checking.
[00:44:15.95] OK. So now, if we turn, instead, to multi-agent systems, the idea is that our agents now, each of them is trying to do the best it can for itself. My Siri is acting for me. Your Siri is acting for you.
[00:44:30.26] And what we now want to ask is, OK, under the assumption that our agents are acting rationally, doing the best that they can for us, then what properties, what trajectories, will our system take, OK? Assuming that your agent is doing the smartest thing it can in pursuit of what you want to do-- like meeting booking-- and mine is doing the smartest thing it can for me, what will happen?
[00:44:56.00] And to cut a long story short, that's the question that we ask in this work, OK? And that's the approach-- the approach that we call equilibrium checking. It's understanding the equilibria that a system can have.
[00:45:08.76] And the basic analytical concept that we use for this-- what is a rational choice-- is an idea from game theory called Nash equilibrium, named after John Forbes Nash Jr., who just died a couple of years ago. They made a film about him, A Beautiful Mind. The film is terrible, but the book on which it's based is much better.
[00:45:27.65] And he formulated this notion of rational choice in these strategic settings. And the notion of Nash equilibrium is extremely simple. It's the idea that we use in our work for analysis. Suppose all of us have to make a choice, right? You have to make a choice, you have to make a choice, you have to make a choice. All of us in this room make a choice.
[00:45:46.24] It's a Nash equilibrium if, when we look around the room and see what everybody else has done, none of us regrets our choice. We don't wish we'd done something else, yeah? Given that you lot did all your bits, I'm OK with what I did. But similarly, given that we all did our bits, you're OK with what you did. That's a Nash equilibrium.
[00:46:04.81] And what we look at in our system is, suppose our agents make Nash equilibrium choices, then what trajectories will result? So the picture looks very similar. We've got our model of our systems we did in model checking. We've got our claim, like there is eventually a flash crash. But now, we know what the preferences are of the agents in the system. We know what each of them is trying to accomplish. And the question is, can I get a flash crash under the assumption that we all make rational choices? Or is a flash crash inevitable under the assumption that we all make rational choices, yeah?
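The "no regrets" condition just described can be checked mechanically: a profile of choices is a Nash equilibrium if no single player could do better by changing only its own choice. The 2x2 coordination game below (both players want to pick the same meeting slot) is invented for illustration; the equilibrium-checking tools the talk describes work on temporal goals over system models, not bare payoff tables.

```python
def is_nash(payoffs, profile):
    """payoffs[i] maps each full profile (a tuple) to player i's payoff."""
    for player in range(len(profile)):
        # The player's available actions, read off the payoff table's keys.
        actions = {p[player] for p in payoffs[player]}
        for alt in actions:
            # Unilateral deviation: only this player's choice changes.
            deviation = tuple(alt if i == player else a
                              for i, a in enumerate(profile))
            if payoffs[player][deviation] > payoffs[player][profile]:
                return False  # the player would regret its choice
    return True

# Coordination game: agreeing on a slot beats disagreeing; "pm" suits both best.
p0 = {("am", "am"): 1, ("am", "pm"): 0, ("pm", "am"): 0, ("pm", "pm"): 2}
p1 = dict(p0)  # identical preferences, to keep the example small
payoffs = [p0, p1]

print(is_nash(payoffs, ("pm", "pm")))  # True: neither player regrets
print(is_nash(payoffs, ("am", "pm")))  # False: either would switch
```

Note that ("am", "am") is also a Nash equilibrium here, even though both players prefer ("pm", "pm"): equilibria need not be unique or optimal.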
[00:46:45.84] So that's the work that we look at. And we have a tool that does this. It's available online. So the tool is called EVE for Equilibrium Verification Environment. It's available online at eve.cs.ox.ac.uk.
[00:47:04.69] And what you can do is you can describe a system using a high-level language called reactive modules. It's a programming language, so you should expect to see a programming language. And then, you specify the goals of each of the players. And those goals are specified as temporal logic properties, like the example that I talked about earlier.
[00:47:25.21] And what it will do is it will tell you what properties hold of that system under the assumption that all the component agents make Nash equilibrium choices. So that's what we mean by formal verification. So it's game theoretic verification.
[00:47:41.93] Because what it's doing is it's looking at the system from a game theoretic point of view. It's saying, you're going to do the best for yourself, I'm going to do the best for myself, then what will happen? We're going to make Nash equilibrium choices. What will happen in my system, OK?
[00:47:58.48] OK. So that's a formal approach to understanding equilibrium properties in a precise mathematical sense, right, in the precise mathematical sense of game theory. And it will tell us what properties will hold inevitably under the assumption of rational choice or could possibly happen.
[00:48:18.12] So the idea is, in this setting, the fact that something is possible in principle might not be relevant if it doesn't correspond to rational choices. All you're concerned about is what would happen under the assumption that our agents chose rationally, OK? So that's equilibrium analysis.
[00:48:39.32] OK. So this really only works for small systems. If you've got a handful of agents, it works. For technical reasons to do with game theory, if you've got large numbers of agents, which, of course, you do on the global markets, it doesn't really work. So what do we do instead with large systems?
[00:48:54.56] So the idea, instead, is we've got an alternative approach, which is called agent-based modeling. And to cut a long story short, what agent-based modeling does is it says you simulate the whole system. You build, literally, a model of the economy, with all of the decision-makers in that economy modeled, and you model the interactions-- the buying and selling and lending behaviors, all the other stuff that you might want to do-- you model them directly, OK? And then, you run a simulation.
[00:49:25.63] And this is an old idea. But it's possible now, for familiar reasons. Why is it possible? Because we have large data sets that we can get, right? For example, in the finance sector, regulators require that banks and other groups make their data, or parts of their data, publicly available. And we can scrape that and use it in our simulations. And that's what we do.
[00:49:52.84] And we've got the compute resources to be able to simulate this at scale. So the kind of simulations that we do involve seven million decision-makers, right? And those decision-makers correspond to banks and individuals and so on, and we simulate that at scale.
[00:50:08.84] There are some challenges with this. So when you start doing agent-based simulation, it looks like a beautiful thing to do. Literally, you're modeling each of the decision-makers in the economy as an individual program that's making decisions about what to do. But actually, just getting to meaningful simulations, where it doesn't just wobble up and down crazily, never settling down, that's actually a challenge in itself.
[00:50:31.82] Once you've got simulations, you then discover that what you've done is you've plugged in what are called magic numbers. So to get anything sensible, I had to set this parameter to 13.3, but why, right? 13.3 is a magic number in the simulation. And this is a real problem. Because it just feels arbitrary. We don't want to have to do that.
[00:50:51.26] Calibration is a huge problem. So calibration means: if your model tells you this is going to happen, how do you know that $1 in the model actually corresponds to $1 in the real world? At the moment, that's the cutting edge of agent-based modeling, doing meaningful calibration. And predictably, the way that you do calibration at the moment, the state-of-the-art technique, is to do lots of heavyweight machine learning on your model to try to understand what it's actually doing, and, finally, deciding whether you interpret the data that it's providing as quantitative or qualitative.
[00:51:26.00] So my colleague, Doyne Farmer in Oxford, uses the analogy of weather forecasting. If you go back to the 1940s, how did they do weather forecasting? They would look at the pressure over the United Kingdom, the weather patterns, and they would just go back through their records to find something similar and then look what happened the next day.
[00:51:43.85] And simulation of weather systems was widely regarded as something which was impossible for a long time. It eventually became possible when you could get the grain size of what you were modeling down to a sufficiently small area and you had the compute power available to be able to do these simulations at scale. And now, it works. And I think the claim is that we will be able to do similar things with agent-based modeling.
[00:52:09.44] But this is simulation-- Monte Carlo simulation, which means it involves random numbers, basically, right? You have to do lots of simulations and see the results that you get. And whether you interpret the results literally, to give you quantitative data, or qualitatively, to say this trajectory could happen-- you could get a flash crash under these circumstances-- that's also at the cutting edge of the debate on agent-based modeling.
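In miniature, the Monte Carlo approach looks like this: run a toy market of simple trader agents many times and report how often a "flash crash" trajectory (the price falling below a threshold) appears. Every number here is invented, and a real agent-based model has millions of heterogeneous, data-calibrated agents rather than this herd rule; the point is only the shape of the method-- many randomized runs, then qualitative statistics over the trajectories.

```python
import random

def run_market(n_agents=50, steps=200, rng=None):
    rng = rng or random.Random()
    price = 100.0
    for _ in range(steps):
        # Each agent sells (-1) or buys (+1); a mild herd effect makes
        # selling slightly more likely once the price is below 100.
        bias = 0.45 if price >= 100 else 0.55
        net = sum(-1 if rng.random() < bias else 1 for _ in range(n_agents))
        price += 0.05 * net
        if price < 50:
            return True  # this trajectory crashed
    return False

rng = random.Random(0)  # fixed seed, so the experiment is repeatable
runs = 500
crashes = sum(run_market(rng=rng) for _ in range(runs))
print(f"{crashes}/{runs} runs crashed")
```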
[00:52:35.63] OK. So I was, actually, originally planning this talk-- I was going to make this the center point of the talk. But then, I panicked because I'm not a finance person at all. And this work is only possible because we have somebody who works in the finance industry doing this. But this is a quote from his [INAUDIBLE].
[00:52:49.46] So for example, he's looking at the conditions that can give rise to flash crashes. And one of the things that he looks at, for example, is crowding. And this is where everybody is buying into a particular asset. And if that asset becomes distressed, then the concern is that that, then, propagates.
[00:53:05.18] And so the conventional wisdom is if everybody is buying into the same asset, this can be a bad thing because it leads to contagion and propagation of distress. But actually, he's discovered, for example, under some circumstances, this can be a good thing. So these are qualitative things that he's doing here. The next stage is to try to calibrate this.
[00:53:27.46] OK. So to wrap up. The agent paradigm, it seems to me, was a 30-year dream for AI, but it's now a reality. We all have agents with us, right? It's not yet the case that your Siri is talking to somebody else's Siri.
[00:53:40.14] I think that's an obvious thing, actually, to happen, right? So I genuinely believe that that will happen. And it won't happen in the sense of your Siri being generally intelligent. It will be in niche applications.
[00:53:53.11] The next step for the agent paradigm is to put agents in touch with other agents, for Siri to talk to Siri. But multi-agent systems have unpredictable dynamics. So we need to be careful about the systems that we build.
[00:54:04.71] And I've described in fairly high-level way two different approaches to doing that. One is, for small systems, you can view these things as an economic system, a game in the sense of game theory, model it as a game, and then understand what its game theoretic equilibria-- its Nash equilibrium-- behaviors are. Do I get something bad happening in the Nash equilibrium?
[00:54:23.07] An alternative is agent-based simulation, OK? So the alternative is to directly model the system. And we can do that because we have compute resources at scale, and we have data at scale. OK.
[00:54:35.52] So I'm going to wrap up. Thank you for your attention. I've given you a tour of where agents came from, why it's a natural idea-- it took 20 years for it to become a reality, but it is a reality, and we've all got agents in our pockets now-- and where that might go.
[00:54:48.98] And that vision of the future of computing-- ubiquity, interconnection, intelligence, delegation, human orientation-- just seems to me to be inevitable. And the agents paradigm, it seems to me, is bang in the middle of that. OK. Thank you for your attention.
[00:55:11.62] MANUELA VELOSO: A couple of questions?
[00:55:20.18] AUDIENCE: Test. Thank you for your time, and thank you for your talk. I have two questions. But for the sake of time, I'm going to let you choose which one to answer, if that makes sense.
[00:55:30.56] MANUELA VELOSO: No. For the sake of time, you choose one, and just say one.
[00:55:32.92] AUDIENCE: OK.
[00:55:33.38] MICHAEL WOOLDRIDGE: Choose the easy one, please.
[00:55:35.19] AUDIENCE: All right, cool. One of the hot topics in artificial intelligence is the impact of automation in society. And I am curious as to what you think the impact of the evolution of multi-agent systems will have in automation and, subsequently, how we can think about educating our children, educating ourselves, and, ultimately, educating society.
[00:56:00.24] MICHAEL WOOLDRIDGE: So I think there are two slightly conflated questions there-- the issue of automation, and how multi-agent systems will impact that. Let me just take that one first. So a lot of my job, and I daresay your jobs, is full of fairly routine management of tasks-- processing paperwork, passing it on.
[00:56:17.54] There will be any number of workflows within an organization like this, as there is at the University of Oxford, to deal with processes which involve multiple people. And they're extremely tedious and time-consuming. The first thing that I could really see agents doing is simply automating the management of an awful lot of that so that you have agents managing, in our case, for example, when a student applies to us.
[00:56:40.52] You have an agent that manages that process, can remind me when I need to do things, can make sure that the paperwork gets to the next people, can flag up to the right people when things aren't processed in time, and so on. And that seems to me to be crying out for agent-based solutions. So I think there will be big applications there in that kind of scenario. And I think that will be an area where multi-agent systems has an awful lot to offer.
[00:57:07.91] I think the second part was to do with education, was it? Is that right?
[00:57:10.37] AUDIENCE: Yeah, how to fine-tune education to overcome automation.
[00:57:17.18] MICHAEL WOOLDRIDGE: Well, I think that's a bigger question about AI itself rather than multi-agent systems. And I think the answer to that one is that the skills that won't be easily automated are human skills. Doctors, for example, they're not going to be automated by X-ray reading machines. So we have software that can read X-rays and diagnose heart disease and so on very, very effectively.
[00:57:39.10] But that's a tiny sliver of what a doctor does. An awful lot of what a doctor does-- most of what a doctor does-- is to do with human skills that requires huge amounts of training which are not going to go away any time soon, although I'm put in mind of this news article that you may have seen over the last 24 hours of this patient that was told he was going to die by a telepresence robot. That wasn't AI, by the way. It was just a crass application of telepresence technology.
[00:58:04.98] MANUELA VELOSO: OK. So there's one more question.
[00:58:07.41] AUDIENCE: Just a quick question.
[00:58:08.38] MANUELA VELOSO: Just a second. Just a second.
[00:58:14.45] AUDIENCE: How would you suggest we deal with biases when applying databases full of old biases and inefficiencies? So, by delegating to agents, how do you deal with biases in the data?
[00:58:25.62] MICHAEL WOOLDRIDGE: OK. So wow. That's a huge question. So I think what's interesting about bias is that the algorithmic treatment of bias-- and it's not just AI; anything to do with an algorithm which has to make decisions, it's the same thing-- is something we didn't really anticipate was going to be an issue. And it's just exploded over the last couple of years.
[00:58:47.85] So there, I think, we're just developing the science to understand what it means, for example, to be able to say in a precise sense, when is a data set biased? When is an algorithm unbiased or biased? We're just getting there. And people are frantically running ahead to try to understand those issues.
[00:59:04.46] And I'm pretty confident over the next 10 years, we will have a much richer understanding of that. What will be interesting is to see that experience fed back into undergraduate programs, for example, so that when we teach people about programming, we don't just teach them about programming. We teach them about those issues of bias.
[00:59:22.01] At the moment, it's a huge, great, difficult area. And there are no magic fixes for it. We're just at the beginning of a process to understand what those issues really are.
[00:59:32.79] MANUELA VELOSO: OK. We'll have two more questions. I see many hands up. OK, you have the mic. And then we'll take him after that.
[00:59:41.59] AUDIENCE: Thank you very much. And thanks for the talk. You spoke [INAUDIBLE] about defining what each agent's interest is meant to be-- because once we define what each agent is doing in its best interest, then we can define the Nash equilibrium.
[00:59:55.48] But in real-world applications, agents may have different interests, right? So what would be the two or three topics, or methods, that you think are the state of the art for inferring the reward function, or the actual interest, of each agent, so that we can model each agent and then go to the multi-agent level?
[01:00:13.55] MICHAEL WOOLDRIDGE: OK. So the slightly technical answer-- I apologize for that-- is that the problem of interacting with people when I don't quite know what their preferences are-- they could be this sort of person, they could be that sort of person-- is a standard one in game theory. And there are standard tools.
[01:00:29.34] There's a variation of Nash equilibrium called Bayes Nash equilibrium, which deals with exactly that. We haven't done that in our work because it's an order of magnitude more complex. But nevertheless, it's a standard technique. And in principle, you can use that technique to understand this.
[01:00:43.51] You could then argue against the models that they use there. What you do in Bayes Nash equilibrium is you've got a space of possible types. You could be this type or this type or this type of person-- in other words, have these preferences or these preferences or these preferences.
[01:00:56.44] And what you know is the probability that they're of this type and this type and this type. And you could immediately argue, well, actually, even that, actually, is asking quite a lot. On large-scale systems, that might not be an unreasonable thing to do. But for the reasons I've said, we don't look at large-scale systems using game theory.
[01:01:12.31] MANUELA VELOSO: We have [INAUDIBLE]. So one final question there.
[01:01:18.70] AUDIENCE: I have a question about agent-based modeling. So take the example of the flash crash. Statistics say this is something that will happen once every billion years or something. Do you think that there are some issues with the way we do agent-based modeling, or the reliance on simulations, to model whether things are likely to happen or not?
[01:01:40.03] MICHAEL WOOLDRIDGE: So is that an actual quote, the once every billion years? Because it seems a very silly quote, given that it's actually happened.
[01:01:46.39] AUDIENCE: Yeah, something like that. Usually, these events are super rare [INAUDIBLE] risks.
[01:01:49.96] MANUELA VELOSO: [INAUDIBLE]
[01:01:51.97] MICHAEL WOOLDRIDGE: Well, there have been smaller-scale flash crashes since then, right? There've been a number of them. So the scary thing about flash crashes is if they happen in circumstances where, like I say, you hit the trough at the end of the trading day, when the markets close-- that's what's potentially very scary. And there could be others, right-- there could be other circumstances.
[01:02:13.00] So in our simulations, what we aim to do is, at the moment, we're just getting qualitative indications. Look, these are the kinds of conditions-- because these are hugely complicated events, it's not just one factor. Certainly, high-frequency trading, the flash crash couldn't have happened without high-frequency trading.
[01:02:32.02] So that's certainly a contributing factor, but by no means is it the only one. And actually, I'm not sure whether they got to the bottom of whether or not somebody had actually done something fraudulent in the flash crash. I'm sure some of you know the answer to that. But yes, so there's a huge range of things.
[01:02:44.71] But what we aim to do is be able to give you the kind of characteristics. Look, if your system has these properties, this is the kind of trajectory that it might exhibit under these circumstances-- so qualitative indicators, which is still useful for us. As I say, we're right on the frontier of going from that to be able to make-- if your leverage is this much and the crowding is this much, then the probability is this much. We're on the edge of being able to do that. We can't do that with confidence yet. That's a way off, I think.
[01:03:16.63] MANUELA VELOSO: OK. We'll have one final question. There was-- there in the back. Sorry. We also have in the back row [INAUDIBLE]. OK. Thank you.
[01:03:30.31] AUDIENCE: Hi, professor. When I hear you talk about agents, and I see you show the video from Apple, I'm curious whether you see agents as a solution to the I/O problem when dealing with computers. Do you see it as the next step, or the next trend, in how people interface with computers?
[01:04:00.12] Obviously, now we see companies in America like CTRL-labs. In their work, they're looking at gestures, reading nerve signals to interact with computers, as one of the big revolutions in the industry.
[01:04:15.08] MANUELA VELOSO: [INAUDIBLE] a question.
[01:04:17.18] AUDIENCE: So yeah. The question is, do you see it as a solution to the I/O problem, or does it have a bigger application than that?
[01:04:23.11] MICHAEL WOOLDRIDGE: So yes. It's a solution to the human-computer interface problem, which I think is what you're asking. You're talking about the input/output problem, is that right? So I think it's a solution to the human-computer interface problem.
[01:04:36.09] The reason that so many people are working on it is that, at the moment, they don't see any alternative. So gesture-based interfaces are certainly going to have a role to play, but I think they're not remotely at the stage yet where they could be rolled out.
[01:04:48.99] And with a gesture-based interface, I think it's hard to imagine booking a meeting with Manuela. It just seems easier to say that than to try to do it with gestures. Maybe somebody will come up with something innovative there, I don't know. But at the moment, I think gesture-based interfaces are a very, very niche area.
[01:05:09.63] Brain reading, I think, is, again, nowhere near being able to do anything like "book me a meeting with Manuela." I think the state of the art there is reading a one or a zero, and possibly something a little bit more sophisticated, but not much.
[01:05:26.68] So at the moment, I'd say the reason people are chasing this is that they just don't see any alternative for an awful lot of these systems. If you're driving a car, how are you going to interface? You can't take your hands off the wheel and start typing, and you certainly can't do gestures, right? So it's the only alternative that's there.
[01:05:43.79] MANUELA VELOSO: Very good. So let's thank Professor Wooldridge again.