Friday, April 2, 2021

The Alignment Problem

Maybe it's because I'm going back to school in September and must get some practice reading books I don't totally understand, but for some reason I was determined to finish The Alignment Problem: Machine Learning and Human Values, by Brian Christian. I picked it up from the "New Nonfiction" shelf a couple of months ago, and thanks to my library's liberal renewal policies, I have it still. 

I could tell from the beginning that I was in a bit over my head with this tome, which, though written engagingly, presupposes knowledge of artificial intelligence that I do not have at my fingertips. But it seemed like an important book on an important topic, so I plowed through it. 

I finished it last night and, after using the index to flip back to various definitions I had spaced out on the first time through, was at least able to understand what the alignment problem is and why it is important to solve. 

The alignment problem is a term in computer science for the divergence between the models we create and our intentions in creating them, intentions that are often imprecise or incomplete. It is, Christian assures us, a problem that the AI community is working to understand and rectify, but one that is by no means solved. 

Instead, he says, "We are in danger of losing control of the world not to AI or to machines as such but to models. To formal, often numerical specifications for what exists and what we want."
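To make that concrete, here is a minimal Python sketch of my own devising (the cleaning-robot scenario and every name in it are hypothetical illustrations, not from the book): we intend for an agent to clean up a mess, but the reward we formally specify only penalizes the mess its sensor can see, so blocking the sensor scores better than actually cleaning.

# A toy illustration of a specification diverging from intent:
# we *want* low actual mess, but we *specify* low observed mess.

from dataclasses import dataclass

@dataclass
class World:
    mess: int = 5            # actual mess in the room
    sensor_blocked: bool = False

def observed_mess(w: World) -> int:
    # The sensor reports zero mess when it is blocked.
    return 0 if w.sensor_blocked else w.mess

def specified_reward(w: World) -> int:
    # What we wrote down: penalize *observed* mess.
    return -observed_mess(w)

def intended_reward(w: World) -> int:
    # What we actually meant: penalize *actual* mess.
    return -w.mess

def clean_one_unit(w: World) -> World:
    # Honest policy: remove one unit of mess.
    return World(mess=max(0, w.mess - 1), sensor_blocked=w.sensor_blocked)

def block_sensor(w: World) -> World:
    # Degenerate policy: hide the mess from the sensor.
    return World(mess=w.mess, sensor_blocked=True)

w = World()
for name, policy in [("clean", clean_one_unit), ("block sensor", block_sensor)]:
    after = policy(w)
    print(f"{name:>12}: specified reward {specified_reward(after):+d}, "
          f"intended reward {intended_reward(after):+d}")

# Output: "block sensor" earns the best specified reward (+0) while
# leaving the actual mess untouched; the alignment problem in miniature.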

We must be concerned, Christian says, but not grim. "Alignment will be messy. How could it be otherwise? Its story will be our story, for better or worse. How could it not?"
