This week in AI: Can we trust DeepMind to be ethical?

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week in AI, DeepMind, the Google-owned AI R&D lab, released a paper proposing a framework for evaluating the societal and ethical risks of AI systems.

The timing of the paper — which calls for varying levels of involvement from AI developers, app developers and “broader public stakeholders” in evaluating and auditing AI — isn’t accidental.

Next week is the AI Safety Summit, a U.K.-government-sponsored event that'll bring together international governments, leading AI companies, civil society groups and research experts to focus on how best to manage risks from the most recent advances in AI, including generative AI (e.g. ChatGPT, Stable Diffusion and so on). There, the U.K. is planning to introduce a global advisory group on AI loosely modeled on the U.N.'s Intergovernmental Panel on Climate Change, comprising a rotating cast of academics who will write regular reports on cutting-edge developments in AI and their associated dangers.

DeepMind is airing its perspective, very visibly, ahead of on-the-ground policy talks at the two-day summit. And, to give credit where it’s due, the research lab makes a few reasonable (if obvious) points, such as calling for approaches to examine AI systems at the “point of human interaction” and the ways in which these systems might be used and embedded in society.

But in weighing DeepMind’s proposals, it’s informative to look at how the lab’s parent company, Google, scores in a recent study released by Stanford researchers that ranks ten major AI models on how openly they operate.

PaLM 2, one of Google's flagship text-analyzing AI models, scores a measly 40% when rated against the study's 100 criteria, which include whether a model's maker disclosed the sources of its training data, details about the hardware it used and the labor involved in training.

Now, DeepMind didn’t develop PaLM 2 — at least not directly. But the lab hasn’t historically been consistently transparent about its own models, and the fact that its parent company falls short on key transparency measures suggests that there’s not much top-down pressure for DeepMind to do better.

On the other hand, in addition to its public musings about policy, DeepMind appears to be taking steps to change the perception that it’s tight-lipped about its models’ architectures and inner workings. The lab, along with OpenAI and Anthropic, committed several months ago to providing the U.K. government “early or priority access” to its AI models to support research into evaluation and safety.

The question is, is this merely performative? No one would accuse DeepMind of philanthropy, after all — the lab rakes in hundreds of millions of dollars in revenue each year, mainly by licensing its work internally to Google teams.

Perhaps the lab’s next big ethics test is Gemini, its forthcoming AI chatbot, which DeepMind CEO Demis Hassabis has repeatedly promised will rival OpenAI’s ChatGPT in its capabilities. Should DeepMind wish to be taken seriously on the AI ethics front, it’ll have to fully and thoroughly detail Gemini’s weaknesses and limitations — not just its strengths. We’ll certainly be watching closely to see how things play out over the coming months.
