A second investigation into Google DeepMind’s handling of sensitive medical records from Britain’s National Health Service (NHS) seems likely to further complicate the Google unit’s efforts to apply artificial intelligence techniques to health data. The report, from an independent panel of reviewers appointed by DeepMind, comes two days after the UK’s privacy regulator found that the firm handled the data of 1.6 million patients unlawfully.
The experts on DeepMind’s review panel, who were granted special access to the company’s technical systems and staff to conduct their probe, don’t seem entirely sure how the firm uses AI techniques on health data. Responding to a question about that use at a press briefing, one of the reviewers, venture capitalist Eileen Burbidge, told journalists that DeepMind uses a “different kind” of AI in its Streams app, which is being used by doctors at the Royal Free hospital in London. The app is at the heart of the illegal data-sharing controversy; it sends automatic alerts to doctors when test results suggest conditions such as acute kidney injury.
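(For context, that kind of alerting is deterministic rather than learned: the NHS’s published acute kidney injury algorithm flags a patient when a new serum-creatinine result rises sharply against a baseline value. The Python sketch below is a simplified illustration of such a rule-based check, not DeepMind’s actual Streams code; the function name and exact thresholds are assumptions drawn from the publicly documented NHS algorithm.)

from typing import Optional

def aki_alert(current_creatinine: float, baseline_creatinine: float) -> Optional[str]:
    """Illustrative rule-based check, loosely following the staged ratio
    thresholds of the published NHS acute kidney injury (AKI) algorithm.
    A simplified sketch for explanation only -- not DeepMind's Streams code.
    """
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return "AKI stage 3 alert"  # most severe: creatinine at least tripled
    if ratio >= 2.0:
        return "AKI stage 2 alert"
    if ratio >= 1.5:
        return "AKI stage 1 alert"
    return None  # no alert: result within expected range of baseline

# Example: a result of 180 umol/L against a 60 umol/L baseline
# triggers the most severe alert.
print(aki_alert(180.0, 60.0))  # -> "AKI stage 3 alert"

A system like this involves no machine learning at all, which is what makes the panel’s uncertainty over the role of AI in Streams notable.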
Burbidge’s statement was curious because it appeared to contradict DeepMind’s long-held assertion that Streams doesn’t use any AI techniques. After the briefing, a DeepMind representative contacted Quartz to clarify that Burbidge may have been “misspeaking” and that Streams does not use any AI. The episode suggested that even someone with Burbidge’s experience and expertise, after a year investigating alongside eight other eminent figures, hadn’t quite pinned down where DeepMind does and doesn’t apply its vaunted AI technologies.
There was more confusion around DeepMind’s intentions for NHS data. The panel’s chair, Julian Huppert, a former member of the British parliament, said that his “interpretation” of the facts was that DeepMind had originally planned to apply its technology to NHS services, only to “step back” from this plan after discovering that the health data wasn’t in good enough shape to be used. “As [DeepMind] discussed this with people they found the state of data and information flow in the NHS was not as good as they hoped,” he said in response to questions from journalists. “They had to step back from the exciting AI they intended to do at the start, to dealing with problems of how do you get information flow.”
In response to further questions from Quartz, the panel said it hadn’t actually spoken to DeepMind about its intentions for the NHS data. “No, we didn’t have conversations with DeepMind about how their scope of what they wanted to do [at] the Royal Free evolved over time,” said Burbidge, answering for the panel.
The panelists’ confusion over DeepMind’s intent and its use of AI technologies underscores the complexity of the panel’s task. The panelists are unpaid, although DeepMind budgeted £50,000 for their work (and spent an additional £9,315). “All the reviewers put in quite a lot of time; as chair I put in a huge amount of time,” Huppert said. “We are set up as unpaid reviewers. It became quite clear that for some of us that is unsustainable.” The panel recommended that its members be paid an honorarium, with the option to donate it to charity. DeepMind has agreed (pdf) to an annual payment of £6,157, the same amount an NHS non-executive director receives.
The panel said it would broaden the scope of its inquiry next year, delving into DeepMind’s business model and its impact on other companies serving the NHS. Privacy experts following the case, such as Jon Baines, who chairs the National Association of Data Protection Officers, generally approved of the panel’s findings but argued that DeepMind should appoint a data-protection specialist to the body; none of the current panelists are experts on the subject. The Royal Free’s breach of data-protection law is what triggered the investigation by Britain’s Information Commissioner’s Office, the regulator set up to uphold the public’s information rights.
DeepMind’s involvement with Britain’s public health service has turned it into a “lightning rod for public concerns” about protecting people’s private data in an age of AI, Huppert noted in his introduction to the panel’s report. That in turn has exposed a tension between new technologies and public trust built up over decades, wrote Fiona Caldicott, Britain’s National Data Guardian, a watchdog role overseeing the use of health data.
“Confidentiality remains as important as ever,” Caldicott wrote in response to the ICO’s decision. “People need to be able to tell their doctor, nurse, or care worker things about themselves and their health and care needs in confidence. If such information is then used in a way that patients and service users do not expect, this precious trust will become undermined.”
The AI scientists at DeepMind will have to pay careful attention to that advice as the company delves deeper into Britain’s public health service.
Source: http://bit.ly/2tMirP5