Dear Friends,
Over the last few years I have found myself considering the curious times we are living through. While I am not a historian, it seems to me that we are active participants in, and witnesses to, not only major historical events such as the Covid-19 pandemic, but also extraordinary cultural shifts and disruptive technologies.
We have been changed forever by the internet, which enables all types of information to be shared globally and instantly, and by the smartphone, whose introduction we also witnessed. The internet and the smartphone have changed our lives and the world in ways we are still uncovering. At a minimum, we now live in a global society.
Additionally, we have witnessed the advent of social media; regular mass shootings; extreme political polarization; misinformation as a tool for control; bitcoin and its crypto cousins; financial turmoil, including unprecedented US debt of $31 trillion, record inflation, and major bank failures; the loss of rigorous independence in legacy media, medicine, and our universities; movements such as #MeToo and BLM; and the overturning of Roe v. Wade, to name only a few of the historic, life-impacting events we are living through. Interesting times indeed.
And now ChatGPT by OpenAI has arrived. Considering how disruptive ChatGPT has already proven, it is interesting, if not prudent, to learn about this emergent yet wildfire-like technology. For one thing is clear: ChatGPT, and the AI counterparts that are sure to follow, are revolutionary in ways we are also far from understanding. As with the internet, I suspect our world will never be the same. The genie is out of the bottle.
To that end, I have included below two articles on the topic that I have found quite interesting. The first considers ChatGPT from the perspective of a parent navigating it with their children. The second addresses how we might frame the existential risk posed by the inevitability of AI.
I hope you enjoy. ~ WRW, still away on holiday.
With recent advances in artificial intelligence (AI) such as ChatGPT, parents and teachers were first impressed by its abilities—then worried about what this means for kids. What happens when students can ask a bot to write their papers for them in seconds? Will they replace deep learning with copycat plagiarism?
Automated knowledge agents like ChatGPT fundamentally change the value of human expertise. In a world where much of our thinking can be outsourced to machines, the key role of humans is to ask rather than answer questions. In particular, developing the capacity for asking questions AI can’t answer is the best way to advance the collaboration between humans and machines to everybody’s benefit.
Since ChatGPT and similar technologies are optimized for providing quick, generic, and relatively adequate or accurate answers (not too different from Wikipedia), you can also teach young people to identify errors and mistakes, which requires deep learning and research. Think of human intelligence as a supervisor of machine intelligence and expertise as the ability to go beyond the prepackaged “fast facts” churned out by AI and provide value beyond the wisdom (or ignorance) of the crowds.
So what can you do now to prepare kids for the future? Help them develop curiosity to ask more and better questions. Research finds that playful activities such as games can boost curiosity—say, by using digital voice agents like Alexa and Siri to answer questions about things kids want to understand.
Don’t ban new technologies like chatbots. Doing so risks turning kids into Luddites, or can tempt them to use the technology even more.
Do help young people cultivate curiosity by playing games. Establish family quiz time to ask questions, then use technology like chatbots and digital voice agents to search for the answer. Kids who can extract the right insights—because they’ve learned how to ask the right questions—and verify or correct the accuracy of the information will have skills no machine can replace anytime soon.
With gratitude,
Tomas
Tomas Chamorro-Premuzic is a professor of business psychology at Columbia University and UCL and the author of I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique.
Existential risk, AI, and the inevitable turn in human history
by Tyler Cowen, March 27, 2023
In several of my books and many of my talks, I take great care to spell out just how special recent times have been, for most Americans at least. For my entire life, and a bit more, there have been two essential features of the basic landscape:
1. American hegemony over much of the world, and relative physical safety for Americans.
2. An absence of truly radical technological change.
Unless you are very old (old enough to have taken in some of WWII, or to have been drafted into Korea or Vietnam), those features probably describe your entire life as well.
In other words, virtually all of us have been living in a bubble “outside of history.”
Now, circa 2023, at least one of those assumptions is going to unravel, namely #2. AI represents a truly major, transformational technological advance. Biomedicine might too, but for this post I’ll stick to the AI topic, as I wish to consider existential risk.
#1 might unravel soon as well, depending on how Ukraine and Taiwan fare. It is fair to say we don’t know; nonetheless, #1 is also under increasing strain.
Hardly anyone you know, including yourself, is prepared to live in actual “moving” history. It will panic many of us, disorient the rest of us, and cause great upheavals in our fortunes, both good and bad. In my view the good will considerably outweigh the bad (at least from losing #2, not #1), but I do understand that the absolute quantity of the bad disruptions will be high.
I am reminded of the advent of the printing press, after Gutenberg. Of course the press brought an immense amount of good, enabling the scientific and industrial revolutions, among many other benefits. But it also enabled the writings of Lenin and Hitler, and Mao’s Little Red Book. It is a moot point whether you can “blame” those on the printing press; nonetheless the press brought (in combination with some other innovations) a remarkable amount of true, moving history. How about the Wars of Religion and the bloody 17th century to boot? Still, if you were redoing world history you would take the printing press in a heartbeat. Who needs poverty, squalor, and recurrences of Genghis Khan-like figures?
But since we are not used to living in moving history, and indeed most of us are psychologically unable to truly imagine living in moving history, all these new AI developments pose a great conundrum. We don’t know how to respond psychologically, or for that matter substantively. And just about all of the responses I am seeing I interpret as “copes,” whether from the optimists, the pessimists, or the extreme pessimists (e.g., Eliezer). No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.
The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring. No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring. No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly). No one. Not you, not Eliezer, not Sam Altman, and not your next door neighbor.
How well did people predict the final impacts of the printing press? How well did people predict the final impacts of fire? We even have an expression, “playing with fire.” Yet it is, on net, a good thing we proceeded with the deployment of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells ‘fire’ in a crowded theater!?”).
So when people predict a high degree of existential risk from AGI, I don’t actually think “arguing back” on their chosen terms is the correct response. Radical agnosticism is the correct response, where all specific scenarios are pretty unlikely. Nonetheless I am still for people doing constructive work on the problem of alignment, just as we do with all other technologies, to improve them. I have even funded some of this work through Emergent Ventures.
I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern. No one can foresee those futures! Once you keep up the arguing, you also are talking yourself into an illusion of predictability. Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become. It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison. The case for destruction is so much more readily articulable — “boom!” Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns. (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.) Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine. All the possibilities are distant, I cannot stress that enough. The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.
Given this radical uncertainty, you still might ask whether we should halt or slow down AI advances. “Would you step into a plane if you had radical uncertainty as to whether it could land safely?” I hear some of you saying.
I would put it this way. Our previous stasis, as represented by my #1 and #2, is going to end anyway. We are going to face that radical uncertainty anyway. And probably pretty soon. So there is no “ongoing stasis” option on the table.
I find this reframing helps me come to terms with current AI developments. The question is no longer “go ahead?” but rather “given that we are going ahead with something (if only chaos) and leaving the stasis anyway, do we at least get something for our trouble?” And believe me, if we do nothing, yes, we will re-enter living history and quite possibly get nothing in return for our trouble.
With AI, do we get positives? Absolutely, there can be immense benefits from making intelligence more freely available. It also can help us deal with other existential risks. Importantly, AI offers the potential promise of extending American hegemony just a bit more, a factor of critical importance, as Americans are right now the AI leaders. And should we wait, and get a “more Chinese” version of the alignment problem? I just don’t see the case for that, and no, I really don’t think any international cooperation options are on the table. We can’t even resurrect the WTO, make the UN work, or stop the Ukraine war.
Besides, what kind of civilization is it that turns away from the challenge of dealing with more…intelligence? That has not the self-confidence to confidently confront a big dose of more intelligence? Dare I wonder if such societies might not perish under their current watch, with or without AI? Do you really want to press the button, giving us that kind of American civilization?
So we should take the plunge. If someone is obsessively arguing about the details of AI technology today, and the arguments on LessWrong from eleven years ago, they won’t see this. Don’t be suckered into taking their bait. The longer a historical perspective you take, the more obvious this point will be. We should take the plunge. We already have taken the plunge. We designed/tolerated our decentralized society so we could take the plunge.
See you all on the other side.
You can read more by Tyler Cowen at Marginal Revolution.
A simpler time would be nice for a while.