Is there privacy and ethics in an AI world?

Artificial intelligence is now the domain of vast corporations and universities. Is there an equivalent to the Information Commissioner's Office for this AI frontier?

By Steven Barnett

They've come to Cambridge and Elon Musk is speaking. The topic: the technology to automate decision-making. You might think of Musk, the entrepreneur who co-founded PayPal, Tesla and SpaceX, as an unlikely author of a popular book on AI, but in fact since 2012 he has devoted nearly all his time to developing and marketing the tools of artificial intelligence. The year the book was released, a professor of computer science at Stanford University remarked that Musk seemed 'no longer interested in the long-term impact of AI.' Yet he spent $5 million to build the AI Research Centre at the university. His contribution was not quite in the manner of a traditional grant, but he was clearly keen to keep up his image as a man unafraid to put his livelihood on the line.

One of the more daunting things about Musk is his outlook: not only does he see a future in which AI technology poses risks to humanity, he also holds a clearly negative view of it. So what about the big AIs that have already been built? Will they gather information about us? Do they use this information to learn more? Even the most optimistic or naive science fiction of the 1960s included some sort of AI, although it rarely had much to say about privacy or ethics. Musk does, however, and this is where things get interesting. His book is called Neuralink: 'I think that an AI which can map the human brain will fundamentally change the way we see ourselves and how we relate to one another,' he says. 'That's one reason I want to be a part of building it.' It is not a line of thought that most scientists take. For one thing, humans and the AI they build are inextricably connected.
'It's not that the AI won't be able to function without humans,' says Mark Riedl, a professor of AI and robotics at the Georgia Institute of Technology. 'It will. But the problems will become more acute as that process proceeds.'

Another issue for some researchers is that, like Musk, they see the inevitability of a future where AI is not confined to a few mega-corporations such as Google and Facebook but becomes widespread and the property of individuals. 'I'd like to see the human-machine co-operative become the main driver of economic and social progress,' says Martin Gräfe, a professor of AI and software engineering at the University of Twente. 'The co-operative aspect implies that this will happen through a democratisation of technological development.' People will freely create their own versions of Siri and AIWOM, and these new intelligent systems will have a choice of what to reveal. Researchers who talk about democratisation of the technology tend to see it in a positive way. But there is a danger here.

People have already begun to see humans as machines, some scientists say. 'We've reached the point where we can discuss your social network as your "profile", your email archive as your "record" and your online shopping habits as your "behavioural patterns",' says Pascale Molinari, a director of the European Commission's Robotics and Artificial Intelligence Lab. 'The next step is that you use your fingerprint, your face or your skin tone, and there's nothing to prevent the social networking of individuals becoming their profile, their record and their behavioural patterns.' Molinari's fears have some support.
No sooner had Molinari's warning hit the news than the Daily Mail splashed with a story about 'Facebook's potential privacy issues', quoting among other things: 'Personalisation to tailor content or advertising can have an immediate privacy impact, for example the collection of Facebook "likes".' The Daily Mail article doesn't state that it is referring to the social networking of individuals, but it can hardly be anything else. The likes are Facebook's lifeblood. Running a search on 'Facebook like' in recent newspapers, I found at least four stories in the Guardian, three in the Telegraph and one in the Financial Times. The themes were similar: the fact that we can all now see the deep personal connections made on social networking sites, and that some users' data is being used to sell us things.

Nor is this mere speculation. In the Mail piece, the writer, Robert Jones, suggests that Facebook might 'sell' our data. Later, another story carried the headline 'Facebook's Data Packages Are for Sale'. And the reality is not much different. My account is set up to allow advertisers to target me, either with specific types of ad or by identifying me by age, gender, education, relationship status and so on. In addition, some searches are aggregated to show me the most popular searches, often using data that has already been freely shared by people who have created 'Likes' on Facebook for themselves. Of course, all of this data is anonymised and encrypted, but that doesn't change the fact that it is being gathered and used. You can no more act as if Facebook were a closed system than you could reasonably pretend that this newspaper, or the broadcaster that sent me the article, is unaware of the data it gathers. People can and will talk about their lives on Facebook.
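The targeting described above, matching an advertiser's audience criteria against profile attributes such as age, gender, education and relationship status, amounts to a simple filter. A minimal sketch follows; the field names and example data are hypothetical illustrations, not Facebook's actual schema:

```python
# Hypothetical sketch of demographic ad targeting as described in the text.
# Field names and example data are illustrative assumptions, not a real schema.

def matches_audience(profile: dict, audience: dict) -> bool:
    """True if the profile satisfies every targeting criterion in `audience`."""
    return all(profile.get(field) in allowed
               for field, allowed in audience.items())

profiles = [
    {"age_band": "25-34", "gender": "f", "education": "degree"},
    {"age_band": "45-54", "gender": "m", "education": "none"},
]
# The advertiser specifies allowed values per attribute; unmentioned
# attributes (here, gender) are simply not constrained.
audience = {"age_band": {"25-34", "35-44"}, "education": {"degree"}}

targeted = [p for p in profiles if matches_audience(p, audience)]
# Only the first profile falls inside this audience.
```

The point the sketch makes is how little machinery is needed: once the attributes exist in a profile, targeting is a one-line set-membership test per criterion.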
My private data might not be as incriminating as a police dossier, but it is personal, and once someone has their hands on it, all it takes is one court order or police request and it is over. In the past, when it was Google whose privacy settings were the focus of concern, it was usually government requests that threatened to reveal our personal information. The social media giants are making their systems work with far less data protection, and there is an element of naivety in assuming that the tech giants' data policies will remain largely in the hands of the regulatory state. The future of these companies is now the subject of a regulatory battle between state authorities and the mammoth corporations themselves.

In April, German regulators announced that they were trying to block Facebook from re-marketing a bundle of apps called 'Shoelace', which began as a targeted data grab to test a new payment system. The app was broken and Facebook removed the permission for people to share their payment details, but by then the app was up and running on Facebook and people were already using it. The company re-marketed it as if it still worked, in some cases not just for a fee but for free. Google is already doing much the same thing, albeit for a different purpose. Last year, the British regulator received some 6,700 complaints from users about Android smartphones that came pre-loaded with some 20 'sponsored applications', most of which offered users coupons and deals to buy things in the offline world. Pre-loaded apps had never been of major interest to Google, because users rarely installed more than two. But since most of a user's apps are free, Google had never bothered to explain that a paid app would be installed automatically when clicked, rather than the user having to approve the installation.
In the Android case, the regulator received complaints from customers who noticed a Google logo on their smartphone's home screen and objected to the intrusion. The logo prompted the complaints because it was visible on every screen of the device, including the data screen. Google admits this was a mistake and says it should never have used the Android logo as the method of installing apps on the device. It claims that only seven app installations were linked to the incident, and that the logo would never have appeared on the home screens of the devices in question had they not been updated. It says it is changing its policies to prevent anything similar happening in the future.

Like Google, Facebook is no stranger to the technique of reverse-engineering apps. It uses a process called 'application integration' that lets it quickly monitor the use of a particular app so that it can choose to replace or rebrand that app with another. In other words, if someone is using an app called, say, ShopTracker, but you want to show them a new app you have just launched, you can rewrite the new app's code so that it presents itself as ShopTracker. This is a fairly standard practice among web browser companies, and it is perfectly legitimate, but a problem arises when you combine it with the fact that these apps are installed on people's smartphones, and so sit on the home screens of their phones as well. When I mentioned the practice to my data protection officer at the Independent, they were surprised: 'But I've seen this with an app I've written myself. What is Facebook doing?' The people on my team who use Facebook are fairly well informed about its privacy policies, but I'm not going to pretend they all have access to all the data Facebook gathers about them. Most of the apps are on by default, and many people don't realise that they are all part of the same data structure.
Facebook is one of the least respected companies in the privacy-aware universe. It is, in the end, software whose output is constantly analysed in a way that seeks to reveal a large part of our lives.

Neuralink is both scientific research and a powerful business tool: software that promises direct access to the inner workings of our brains. The first things it would learn about me are my age, whether I live in one of the ten most-frequently searched places, whether I have drunk alcohol in the past four weeks, and so on. It doesn't need a piece of my brain to work out this sort of information; nor does it need to run a self-learning algorithm over my data. Perhaps the trickiest components are the learning machines such as neural networks, which Neuralink could use to change my behaviour, or my response to it. The goal of such a machine is not to protect us from AI but to give us direct access to the inner workings of our own minds. That link between mind and brain, and the fact that Neuralink presents itself as the software of the future, makes it a dangerous piece of software, in some ways more complicated than any social network. I would prefer that it could not work on my devices without me. This is the danger that Musk, Gräfe and Riedl all talk about: people will freely create their own versions of AIWOM, and these intelligent systems will have a choice of what to reveal.
When you look at your iPhone, or at any of the wireless tech services such as Facebook, Twitter or Spotify, you have to ask whether there is any way to link these software services to you as a unique entity that the devices can learn about, and how the output can be used to show ads, send messages or anything else. What if the machines learnt to change our behaviour, or our response to it? What if they were given free, direct access to your mind?

When I checked my device, my first thought was of the endless scrutiny that new devices are set to come under. The sheer breadth of the data coming off our devices, and how it will be analysed, is exactly what a data protection officer would want to look into. But this isn't just your phone and its software: it is connected to software, Neuralink among it, that can read from your devices. When you see the data from your device being offered up to you, or used by software on your behalf, it is very hard to believe that it is all part of the same structure. The journalists and readers who complain about targeted data packages, or about pre-installed phone apps, are well aware of the data they are sharing; most of us are not. Facebook, meanwhile, goes to great lengths to make sure it knows about our activities on Facebook. I no longer share my data with my data protection officer; I no longer share it with Facebook. And yet the data coming off my devices, and the company feeding me an endless stream of apps, are hard to see as anything other than parts of the same structure. How much of the data being offered to me now has already been set up for someone else?
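The question raised above, whether separate services can link a device back to "you as a unique entity", usually comes down to fingerprinting: combining stable device attributes into one identifier that every observer derives identically. A minimal sketch follows; the attribute set and hashing scheme are my assumptions for illustration, not any vendor's actual mechanism:

```python
import hashlib

def device_fingerprint(attrs: dict) -> str:
    """Derive a pseudonymous ID from stable device attributes.

    Any service that observes the same attributes computes the same ID,
    which is what lets separate datasets be joined back to one person,
    even though no name or account is involved."""
    # Sort keys so the ID doesn't depend on the order attributes were seen.
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

phone = {"os": "Android 12", "model": "Pixel 6",
         "tz": "Europe/London", "lang": "en-GB"}

# Two services that each see these attributes derive the same identifier,
# so their records about this device can be merged without a 'real' ID.
id_service_a = device_fingerprint(phone)
id_service_b = device_fingerprint(dict(sorted(phone.items(), reverse=True)))
assert id_service_a == id_service_b
```

This is also why "anonymised" is a weaker promise than it sounds: the ID carries no name, yet it links every dataset that contains it.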