
Can an AI controlled machine be accountable?


Let’s start with a simple question: what would it take for a machine to take up the job of Justice of the Supreme Court of the United States?

The machine is as good at the job as any human, in all but a few particulars. For one, it will have no free time. For another, it would take one of only nine seats on the Court. Moreover, though a human justice is sometimes unfair, a machine never will be: it cannot distinguish right from wrong or, in any real sense, true statements from false ones, or statements from actions. It will have no ethics or morality. Nevertheless, the Constitution, along with innumerable other provisions of federal law, presumes that we can trust the decisions of the justices of the Supreme Court to be correct and proper.

What if the machine receives and considers these same laws? What if it then makes rulings similar to those of the Supreme Court? Who is responsible for it? This is not a joke; it is one of the central questions of our times. The U.S. Supreme Court and its sister courts have been called on to intervene to control or limit machine decisions in many situations, from regulating when Web sites may display search results to restricting automated decisions in other domains. As of today, one of the primary arguments used to limit the control given to machines is that the machines are uncontrollable and that the decision makers behind them cannot be held accountable.

The machines know what is going on; the people who use them do not know the context of the machine’s work and therefore, the argument goes, cannot be held accountable. At most, it is up to Congress to determine what control we want and need. On the surface, this is a fair argument. The machine does not try to deceive anybody, threaten anybody, or take money from anybody; it simply performs its function as specified. It is often difficult to determine what is going on inside the machine and what choices it is making.

This is true, of course, only if we consider machine control to be an improvement on human control. If that is the case, we might think the machines ought to be supervised: if they perform their functions without interfering with anyone else’s activities or responsibilities, we might be more comfortable having them in charge. If, on the other hand, we prefer humans to remain in charge of them, we might think that supervised machines could be safer, less harmful, more reliable, and more likely to do what we want than the machines we have now.

If the problem is unpredictability, or the difficulty of determining what the machine is up to, then we should be trying to predict its behavior. If the problem is that the machines might end up doing what they were not intended to do, we should try to be sure that cannot happen. If we cannot do that, then we need to be sure that our machines understand the rules and know how to behave.

If, on the other hand, the machines could actually replace humans in power and in decision making, and we were comfortable with that, then the question is different: we have to think about who will be responsible for controlling the machines.

In this case, we have a problem. If we want to trust the machines to be up to the job, we will need to assign them a role to play. In a system where machines are responsible for regulating other machines, we have a feedback loop: each machine has a role, and the people who govern the machines must determine how to assign it. If we do not know that we can trust the machines, we must know what role to assign them, how to monitor that role, and how to feed back to each machine whether or not it is acting within the role and standards assigned to it.

If, on the other hand, we are willing to give machines control, we might think that we can trust them and that none of this is needed; our role might simply be to create the laws and rules they will use. If we do not know how to do that, we must still determine how to give them their role; and if we do not know that either, we must at least be sure that we can monitor them.

The fact is, we do not know the answer. We do not know whether we can trust the machine; we cannot even know for certain what it will do. What we do know is that it will have no role to play if we decide that humans should control everything, and that if it does have a role, we can know what it will do and what laws and standards are required to ensure it does not do something we don’t want. We may not know all of these things at once; we may need to proceed by trial and error.

But we need to find out how to ensure that machines will not do what we don’t want them to do, and how to give them feedback about whether they are doing what they are supposed to do. The fact is, no machine can currently do this without a human telling it what to do. As long as that is so, we might as well do it with the best possible people, or at least people who are trying to make the best possible choices in our name. And if we cannot manage that, we might be better off not having machines try to make the decisions at all.
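The monitoring-and-feedback arrangement described above can be sketched as a small oversight wrapper: a machine proposes an action, human-authored rules review it, and the verdict is returned to the machine as feedback. This is an illustrative sketch only; the rule names and the `Decision`/`review` structure are assumptions, not a reference to any real system.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the human-oversight loop discussed above:
# a machine proposes a decision, human-defined rules review it, and
# the reasons for rejection are the "feedback" sent back to the machine.

@dataclass
class Decision:
    action: str
    rationale: str

# A rule returns None if the decision passes, or a string explaining
# the violation.
Rule = Callable[[Decision], "str | None"]

def no_irreversible_actions(d: Decision) -> "str | None":
    # Assumed convention: destructive actions start with "delete".
    return "irreversible action requires human sign-off" if d.action.startswith("delete") else None

def must_cite_rationale(d: Decision) -> "str | None":
    return "decision must state its rationale" if not d.rationale else None

def review(d: Decision, rules: "list[Rule]") -> "list[str]":
    """Return all feedback for a proposed decision; an empty list means approved."""
    return [msg for rule in rules if (msg := rule(d)) is not None]

rules = [no_irreversible_actions, must_cite_rationale]

# Both rules fail here, so the action is blocked and the two reasons
# go back to the machine as feedback.
feedback = review(Decision("delete records", ""), rules)
```

The design point is the one the essay makes: the machine never decides whether it acted within its role; the humans who wrote the rules do, and the machine only receives the result.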

About the author William H. McCord, Ph.D. is Professor of Communications and Philosophy at Iowa State University and Research Associate in the Center for Communication and Cognitive Neuroscience at that institution. He is currently the editor of the Journal of Media and Communication, and he is a member of the International Advisory Board of the International Journal of Applied Philosophy. He is also the co-editor (with Bonnie Anderson) of The Philosophy of Artificial Intelligence (Routledge, 2000). McCord’s essays on the philosophy of technology have been published in Studies in the Philosophy of Science, Technology and Society, Canadian Journal of Law and Jurisprudence, Environmental Ethics, Global Ethics, Theoria and International Communication and Ethics.

The question of whether a machine can be accountable is not a joke, but it seems to me the real question is which meaning of “accountable” we intend. If we mean what the text above uses — that machines can’t be held accountable — then the answer is no, they can’t, and on that reading you don’t need any safeguards for them, because they can’t be held accountable anyway. I am thinking of the text in terms of being responsible for one’s actions, not necessarily being held accountable in a court of law — the question of responsibility, as the book title puts it. So: can a machine be responsible? In some sense, I think so.

As an example, consider the allocation and transfer (“turnover”) of IPv4 addresses, which is governed by the Regional Internet Registries (RIRs). The DNS merely maps names to IP addresses; it is the RIRs that allocate the blocks of addresses available for a system (e.g., some commercial web site) to use, and such a request can be fulfilled (or at least, was at one time). So, in one sense, if a request for addresses is not honoured, the RIR would be deemed to be at fault.

The RIR is both the allocator of IP addresses and the registrant of record for them. It is responsible for them even though it is, in effect, a neutral conduit for handing out the addresses that systems use to connect to the Internet. Finally, in this context, I’d like to comment on “accountable”. I think it is correct in the following sense: someone on the Internet would not care whether a machine is merely capable of intelligent communication, nor whether it is making “its own” intelligent communications. What they would care about is whether it is harmful, or otherwise goes about things in a way that is likely to cause harm.
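The division of labour sketched above — DNS maps names to addresses, while the RIRs allocate the address blocks themselves — can be illustrated with a simple prefix check: responsibility follows the allocation record. The allocation table below is hypothetical; it uses the documentation prefixes reserved by RFC 5737, not a real registry snapshot (real allocations are published by the five RIRs: ARIN, RIPE NCC, APNIC, LACNIC, AFRINIC).

```python
import ipaddress

# Hypothetical allocation table mapping a registry to an address block.
# The prefixes are RFC 5737 documentation ranges, never routed on the
# public Internet; the registry names are made up for illustration.
ALLOCATIONS = {
    "example-rir-1": ipaddress.ip_network("192.0.2.0/24"),
    "example-rir-2": ipaddress.ip_network("198.51.100.0/24"),
    "example-rir-3": ipaddress.ip_network("203.0.113.0/24"),
}

def responsible_registry(addr: str) -> "str | None":
    """Return the registry whose allocated block contains addr, if any."""
    ip = ipaddress.ip_address(addr)
    for rir, block in ALLOCATIONS.items():
        if ip in block:
            return rir
    return None
```

The point of the sketch is the one made in the text: whatever sense of accountability we settle on, the party answerable for an address is found by looking up who allocated it, not by asking the machine that uses it.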

So accountability need not mean being formally accountable to a body, in the sense that the machine’s accounts are verified by a human; it can mean that the machine does its best to avoid getting people into trouble. “Machine accountability” is a curious phrase. You are saying that “machine accountability” means accountability in the context of humans, and then that “accountability” means we should not trust machines. You are right, but that is just semantics. The way to address the latter is simply to ask: should we trust machines? To say “accountability” is to ask “accountability to whom?” It means, according to context, a way to determine who answers to whom, and for what.

If you are asking about “machine accountability,” you can’t be talking only about humans, since on this view human accountability is a subset of machine accountability. In discussing “machine accountability,” we are trying to determine who gets what role and who answers to whom. For instance: who should get what role, and what should a machine be told to do? We are trying to figure out to whom the humans should be accountable (e.g., to the government) and how to test for human accountability. When we talk about machines, we want to know whether they can be accountable, and what happens if a machine decides to behave the way a human would. That, too, is a reasonable question, because machines behave the way they are programmed to behave.

They can decide only as well as the code they were given is followed. It seems we are taking the position that the Supreme Court of the United States is directly responsible to us for its decisions, and likewise its associated bodies for theirs. That doesn’t make sense to me — and I say it “doesn’t make sense to me” even though you and I both agree that “accountability” means accountability to humans, and that we want to control the behavior of the machines in our own names.

Let’s try it this way. I say it “doesn’t make sense to me” even though we both mean that the machines are accountable. I am not making the decisions; I am not responsible; yet I am the one who knows and decides what to do. Is that “accountability”? Then how can machines possibly act in our names — control behavior in our names, be told what to do, be monitored, and be given feedback about what they are doing, all “in our names”? This is “hard to figure out.” Really? The one way we are trying to figure it out is by trial and error, with feedback passed between human and machine. And if “we” end up being responsible for the machine’s behavior, then “machine accountability” is a problem. Why?

Who would be doing the regulating of the machines, and who are we in all this? We have to be sure about these things. But can’t we control the behavior of the machines “in our names”? We might as well do it with the best possible people — and, if we must, with the machines too, so long as they keep the roles we assign them. We want a decision machine that understands its own rules, and a way to give it feedback on whether it is doing what it is told; in other words, we want control over the machine in our names, and assurance that it acts within our rules. If instead you want the machine to decide on its own — no need to work out what the machine is up to; just tell it what we want and monitor to make sure it does it — then the question remains: is the machine accountable?
