I see a lot of LinkedIn posts that express disdain for AI, and hatred for badly executed AI content. There’s a strong “human-versus-AI” stance taken by so many of us, which I don’t understand. Maybe I’m naïve, but didn’t the human race create AI? And isn’t the body of knowledge it mines really our own body of knowledge? Here’s how David Eagleman, the neuroscientist, explains it.
…“We have created a new intelligent species that we’re going to be sharing the planet with from now on. This is not to say that AI has exactly the same type of intelligence that human brains have, but obviously it has absorbed the higher knowledge sphere of humankind and it can spit that back to us with all sorts of remixes. Now, the question is, are we in trouble because of this new invention? Have we taken things a step too far?”
It seems that one of our biggest fears is that the masses may not be able to tell AI content from human content. And to those of you who say you can sniff out AI in a heartbeat, I ask…really? Can you just as quickly identify badly executed human-created content? What’s really the difference?
Is the model with seven fingers pitching a product any worse than the influencer who is paid by the retailer?
Is the robotic argument any worse than the Facebook rant that confirms and amplifies uneducated biases?
…“What we’ve seen in the past few years is the limits of our imagination. Even in domains we thought we had mastered, our internal models might be a lot more narrow than we had ever realized. But by playing with machines, we’re learning how they think, and more importantly, how we might think differently.
…“A lot of people cast this as man versus machine, but I think a more productive lens is seeing it as man learning from machine. The prize is a new way of seeing the game, and maybe, by extension, a new way of seeing the world.”
Like any other tech revolution, how well we use the technology has everything to do with how well it serves us. I’m not an expert in AI, but I am qualified to talk about creativity and productivity. Here’s what I’ve learned so far.
1-Strategy first.
Just like with any other tool, think through the problem you are trying to solve, and take a little time to explore your own team’s ideas. Entertain various avenues to solve the problem and develop a clear, thoughtful plan.
2-Be generous with prompt iteration.
When you ask a flawed question, you will get a flawed answer. Take the time to experiment; you’ll earn that investment back in the speed with which AI answers. We are learning the language of prompts together, AI and us.
3-Evaluate the answer.
Check the sources and verify all assumptions and facts. See Andrew Peek’s recent post, “A Word of Caution…”
It’s been argued that by using the new productivity tools, our own minds will atrophy and we will lose the skills we no longer use. But isn’t that up to us? If I finish this blog post two hours earlier because I enlisted the help of an artificial friend, can’t I use that time to read a book, research a new subject, take a walk, or volunteer to help those less fortunate?
“If we take this on correctly, AI might just up our game. …At least in some ways, it’s amplifying human creativity. It’s like a jazz partner who plays a riff you weren’t expecting, or a painting mentor who introduces a color that you never thought to use.…
“These AI systems presumably don’t have aesthetic tastes or emotional longing in the way that we do, but they’re awesome at doing remixes and trying strange new things out, and in this way they teach us just how much more flexible and expansive our own creativity can be.”
I don’t see how AI can possibly have inherently competitive, let alone evil, intentions. It doesn’t have its own intentions at all. Why do we fear this machine will suddenly turn against its creator? Isn’t that a uniquely human phenomenon?
“… when we think about the arrival of AI, it’s tempting to frame it as a contest. Will the machine replace the worker and the scientist, and the gamer and the composer?
“But the more interesting story is not about competition. It’s about collaboration and how AI is going to stretch the boundaries of human imagination.”
I understand there is legitimate reason for concern. Bad actors can leverage these new tools for evil. But bad actors can also unleash nuclear threats that I believe would have far more evil, even unimaginable, consequences. If you think I am looking through rose-colored glasses, I welcome any reality check you may wish to offer.
If you are ready for a more scientific—and very human—view of Artificial Intelligence, I recommend listening to David Eagleman’s podcast. After that, if you still find me relentlessly and foolishly optimistic, I welcome your corrections and comments.