There was a Senate hearing this week on the potential benefits and risks of Artificial Intelligence. Senator Richard Blumenthal (D-CT), chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, opened the hearing with a recorded warning about the potential dangers of abusing AI technology. But it wasn’t him. It was an AI fake, so realistic that neither Blumenthal nor his colleagues could tell he wasn’t speaking. [I encourage you to listen to the taped opening of the hearing and Senator Blumenthal’s rationale for beginning this way.] Unlike many congressional hearings that quickly devolve into grandstanding or partisan rancor, this hearing—with strong bipartisan support—should be a wake-up call for us all.

Even AI creators and business leaders in the field warn of AI’s potential dangers as the rapidly expanding technology has far outpaced regulatory policy (and even cultural norms and expectations about how to include it in education, politics, the arts and employment).

One witness at the Senate hearing was OpenAI’s CEO Sam Altman who said this about the technology that he helped to launch: “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening.”

Cecilia Kang, writing about this subject in the New York Times, says, “Mr. Altman said his company’s technology may destroy some jobs but also create new ones, and that it will be important for ‘government to figure out how we want to mitigate that.’ He proposed the creation of an agency that issues licenses for the development of large-scale A.I. models, safety regulations and tests that A.I. models must pass before being released to the public.”

Senator Blumenthal acknowledged Congress’s failure to keep up with the introduction of new technologies. “Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past…Congress failed to meet the moment on social media.”

This is not a new concern, nor is it a new topic in this space. See “Tools of Discernment” from February 19, 2020 or—even earlier—in “QAnon and You” from August of 2018. The former was sparked by the emergence of “deepfakes” and makes reference to a truly frightening illustration that appeared in The Week by Bonnie Kristian just before the pandemic broke. We became distracted by more pressing concerns and this illustration did not receive the play it merited. The latter post was prompted by concern over how technology can intersect with conspiracy theories to wreak havoc.

I wrote at the time, “this is no longer about the lunatic fringe—left or right. As we develop our perspectives and formulate our actions, we must hone our ‘tools of discernment’ and examine the sources of the information we gather. Failure to do so has fatal consequences for our democracy. Once we lose the bright line between fact and fancy, between real news and fake accounts, we can no longer distinguish between pathways that genuinely lead to peace, justice and human dignity and those that wander blind alleyways of bigotry and fear.”

Since then, as Moore’s Law postulated, computational power has expanded exponentially. We may be on the brink of unprecedented runaway technology, as foreshadowed in Stanley Kubrick’s groundbreaking 1968 film, 2001: A Space Odyssey. When that film was released, it seemed a dystopian science fiction fantasy. But 2001 was a long time ago (let alone 1968!). We have all seen how leaving our challenges to government policies is often less than a satisfactory strategy.

Rather, we must all be diligent in developing our own tools of discernment. We cannot be naïve about the potential for harm in AI technology which is not going away any time soon.

4 thoughts on “AI & Us”

  1. We can be sure that those with power and wealth will use new technology for their advantage.
    Some tech leaders are urging a slowdown on AI mainly so they can position their particular company to have an advantage, not because they have our common good or ethics in mind.
    Only a combination of us not adopting shiny new stuff and powerful governmental regulations can save us from being preyed on more efficiently than ever.

  2. Yeah, I heard Altman’s comments… Scientist Stephen Hawking warned about this before his death… he was not optimistic about humanity’s ability to handle this properly. Nor am I. Just look at the way Americans handle guns…
    This is another area where we need intelligence, restraint, discernment and wisdom. 😕

  3. Wow Bob, fantastic and thought-provoking. There may even need to be a Department of Technology, because this is so multi-dimensional. It isn’t about new products; it is the integration into existing products and services, and it is going to become its own product(s) — beyond current understanding. So great to raise your voice. Keep it up. Maybe we need an advocacy group just focused on AI — ready to start again (smile)…

  4. My best guess is that AI, like a lot of tech innovations before it, will go through phases: introduction and promise, growth until ubiquity, reaction and downside revelation, and then a second phase of adjustment and adoption. It won’t make our lives a lot better, or worse, but will cause adaptations. And btw, I called my broker the other day and asked about their voice recognition box, which they’ve been using for close to a decade. They refused a direct response, deferring to the corporate line ‘we’re testing security protocols on an ongoing basis…’ My take on this is that they’re adapting. Hope it works.
