Monday, March 4, 2024

Writing Isn’t About Writing — Or, the Real Danger of ChatGPT

Image generated by DALL-E

By: Charles P. Edwards

I grew up around writers. My grandfather wrote poetry. My mother was a journalist. My sister and I both became lawyers and professors. I have spent the past thirty years writing briefs, memos, and other legal prose. So, when I first heard about ChatGPT, I thought there was no way this thing could write. I was wrong.

Fortunately, writing isn’t about writing; it is about making choices and communicating ideas. Those things can be offloaded to generative AI, but doing so is a ticket on a train straight to Aldous Huxley’s Brave New World. If we want to keep what makes us unique as humans, we need to keep writing as a manual exercise.

What makes humans unique isn’t the ability to write; it is our intelligence. I offer my MBA students the following equation for intelligence:

memorization + pattern recognition + ethics = intelligence.

Since the Enlightenment, humans have spent most of their time and energy trying to figure out how things work. We have devoted most of our energies to the first two variables in the intelligence equation — uncovering facts, collecting them, and identifying patterns in them. In the “is-ought” phrasing of the philosopher David Hume, we’ve spent most of our time on the “is.”

We learned decades ago that computers were far better than humans at memorization. The smartphone effectively outsourced that skill to a computer we can carry around in our pockets. We have a running joke in our family: whenever someone asks a question about a fact and we spend 20 minutes arguing about it, someone says, "If only there were something that organized all of the world's data and made it instantly accessible." At which point, someone pulls out their phone and ends the debate.

So what about pattern recognition? We are now learning that the way humans write involves patterns that can be recognized and replicated by AI. And, what the AI doesn’t know initially, it learns and replicates with incredible speed and aptitude.

We vaguely understood this before. We knew that Shakespeare wrote sonnets a certain way, Aaron Sorkin writes dialogue a certain way, and Michael Lewis writes non-fiction a certain way. But the idea that AI could replicate these authors and then create entirely new works of literature in the same style seems to have been too much for us. The idea that creativity is largely just pattern creation cuts to our core as human beings.

The most important variable in the intelligence equation, ethics, involves Hume’s “ought.” If you ask ChatGPT for a description of Hume’s is-ought problem, it will give you a well-written essay on the topic. What ChatGPT cannot do is tell you what ought to be in any moral or ethical situation — though it will walk you through the options and will offer suggestions, if you ask.

Deciding on what “ought” to be isn’t as easy as memorizing facts or applying them in a pattern. Humans have created some algorithms for doing so — various philosophies, forms of governance, and economic theories. But, even simple algorithms can produce unpredictable and unexpected results when run in a complex system, like a group of humans.

Complex systems are systems formed by independent agents making choices that affect and change the system as a whole. Those choices include millions of decisions made every day by humans acting in complex systems of various scales — from couples, to families, to firms, to industries, to political subdivisions, to nations. Those decisions, in turn, shape the system.

One rule of complex systems is that they evolve from simple rules, often in unpredictable and inexplicable ways. Another rule is that complex systems cannot be deconstructed into their initial simple rules. A third rule is that complex systems are dynamic and non-deterministic; they continue to evolve and change (until they don’t, but let’s not worry about that for now).
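That first rule, that simple rules can evolve into unpredictable behavior, can be seen in even a one-line system. The sketch below (my illustration, not from the original post) uses the logistic map, a classic toy model from chaos theory: a single deterministic rule whose trajectories from nearly identical starting points soon bear no resemblance to one another.

```python
# Toy illustration of a complex system: the logistic map applies one
# simple rule, x -> r * x * (1 - x), over and over. At r = 4 the result
# is chaotic, so two trajectories that start almost identically diverge.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # starting point differs by one millionth

# Early on the two runs agree closely; by the end they are unrelated,
# even though the rule itself never changed.
print(abs(a[5] - b[5]))    # still tiny
print(abs(a[50] - b[50]))  # no longer small
```

Knowing the rule does not let you shortcut the iteration; the only way to see where the system goes is to run it, which is the third rule's point about complex systems being non-deterministic in practice.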

While the long-term outcomes of these decisions are not necessarily knowable, the immediate impact and direction usually are clear. Systems can be open or closed, market driven or command and control, selfish or generous, and so on through any number of variables. As I noted in my last post, the future has many paths and we must choose wisely, even if we don’t know the ultimate long-term effects of those decisions. Writing should play a key part in our decision-making process.

Our best ideas are rarely formed in the moment. How many of us have had “brilliant” ideas on a run, in the shower, or in our cars? How many of those ideas really were brilliant?

As any lawyer will tell you, ideas are mostly worthless; they must be turned into something else to have value. Ideas that are useful and non-obvious inventions may be patentable. Ideas that are unique and expressed in some medium might be protected by copyright law. You might be able to trademark ideas that are uniquely associated with a business or product. But, ideas alone have little or no value.

Ideas are the seeds for future value, but they must be planted in good soil and watered before even sprouts will appear. Those sprouts must be cultivated into plants, which must be pruned. All of this takes work, and time.

One thing you learn as a lawyer is that it takes longer to write a 25-page brief than it takes to write a 50-page brief, and it takes even longer to write a 10-page brief. Tightening your prose is part of the reason, and AI can certainly help there. But a bigger part of the reason is tightening your argument. Making a strong argument is about selection, not compression. Deciding what to select takes work, and time.

We don’t have time in the modern era to sit like a Buddha under the bodhi tree for days, or to convene on the porch and debate philosophy like the Stoics. But writing lets us sit for an hour or two, capture the benefits of that work, and come back to it later to reflect and build on it. It might even cause us to reconsider some of those half-baked shower and running ideas. That time and work eventually create intelligent expressions of our ideas, opinions, and arguments.

Charlie Munger had a saying: “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.” Albert Einstein and Richard Feynman are both credited with quotes along the lines of “if you can’t explain something in simple terms, then you don’t understand it yourself.” Jeff Bezos required that any consequential decision at Amazon be supported by a thoroughly researched memo, no longer than six pages.

Understanding others’ opinions, simplifying ideas, and expressing them in a useful and persuasive way takes work. It is enticing to use AI to avoid this work, but doing so makes us less than who we are and reduces us to the level of our tools. Writing down our ideas, reflecting on them and refining them, on the other hand, makes proper use of these tools while maintaining what makes us unique as humans. It’s only when we give up doing this work that AI becomes a threat, rather than a tool. So, as we move forward in this brave new world, let’s resolve to do the work required of us as humans.
