I recently engaged in a thought-provoking discussion in a content marketing community (CMI - so much love for that group) about whether we are cheating ourselves out of certain skills by relying too heavily on AI tools (specifically GenAI tools). Most of the other contributors wrote a paragraph or two, but I ended up writing five. So I decided to expand on some points and turn it into a blog post :)
Tools as Complements
Tools of any kind are complements. They should make our lives easier and give us the space and capacity to do more. To do better. They should accompany higher-level cognitive processes like decision-making and memory and focus. But it's a slippery slope once we rely too heavily on tools like this. We risk the complements becoming replacements. For tasks, for thought processes, and eventually, possibly, horrifyingly, for ourselves.
Now, sure - in some situations, some menial manual tasks can be replaced with technology. Some automations make our lives easier, such as conducting A/B tests through machine learning, or automatically sending welcome emails right when someone signs up on your website. But those are tasks that still complement human creations. Technology deploys things created by humans. For example, the two things you're testing in an A/B test - whether they're subject lines or ad taglines or email copy drafts or blog posts - were still created by humans; the machine is simply evaluating data against a threshold and then pushing out the content that adheres to the rule you set beforehand. In these cases technology is helping move content that was already created by humans. Much more risk comes into play when technology takes away more meaningful decision-making or content creation or human development.
Instead of recognizing that tools are meant to complement human processes, in many cases, people look for shortcuts. They look for ease. It wouldn't be as harmful if people replaced that ease with something constructive - if they found a technological shortcut that would give them time to do more strategic or meaningful work - but that's not what we're seeing. We're seeing people using shortcuts for the simple goal of not having to do the task at hand themselves. They say, "Work smarter, not harder!" And then they lie back and kick up their legs while a computer spits out soulless drivel that isn't going to achieve the deeper goals of connecting or resonating. This has been happening especially with the use of GenAI tools like ChatGPT to create content (such as writing blog posts or email copy).
(You might then say, "Well, then they're not going to hit their goals with such low quality." I thought the same thing, and that may be the case in many situations. Let's hope. But I've also seen people then change their metrics to accommodate the dumbed-down AI versions of things as well, causing a vicious cycle of reducing quality, increasing quantity, and changing goals to accommodate the reduced quality. Like I said - a slippery slope.)
There are potential pitfalls to prepare for and try to mitigate before hitting the point of no return: When people rely too heavily on just one tool, or they misuse the tool (and give it a different meaning, beyond its original intention), or the tool replaces vital skills or causes certain muscles or practices to atrophy, then the tool can become dangerous. It needs to be examined and possibly safe-guarded in some way. In many cases, education can help people realize the potential impact of a tool being misused and guide people toward more favorable outcomes.
But first, before that conclusion can be drawn, tools need to be evaluated. Questioned. Philosophized about. Tested. "Broken." Rolled around in our palm like a marble and examined from every angle. We need to consider the "what ifs" and practice some scenario planning to understand their impact in a multitude of ways, beyond just right now, to ensure their net impact is positive over time.
Unfortunately, that "net impact over time" piece is lost on many people - maybe because of shiny object syndrome, or maybe because they really do think the tool is better than the alternative. And listen: I'm all for technology making our lives easier. But I'm more so a fan of humans and human development. It's fascinating to wonder about the impact new technologies will have on humanity, brain development, and the future of society. And beyond being fascinating, especially with what we're dealing with right now, it's necessary.
Let's consider some of the possible impacts of GenAI tools if they are not applied properly or are misused.
It's All About the Process - and the Future
Our community manager posed a question - if we use GenAI tools, are we cheating ourselves out of anything? The experience? Learning? Growing? My colleagues in the Slack community posed brilliant points about human reasoning and the enrichment you get from going through any process. Like replacing a paper map with a digital GPS, relying on technology might help you arrive at point B faster, but you lose the skill that would help you also learn about points C, D, and E in the process.
(My dad grew up with paper maps, and we call him a Walking GPS because he knows how every street intersects and what areas look like from all vantage points. When I was in high school and I'd get lost in Boston, I'd call him and tell him what I was near, and he'd direct me exactly to where I needed to go. He knew the relationship between the streets and the general directions of various routes, so he could figure out how to get from any point A to any point B. When you follow a GPS's directions, you don't learn about the relationships or the context or the broader governing rules that could be applied to other situations.)
It's not just about the immediate output for one particular task; it's about the process, and the impact, and the future.
Everything has both pros and cons, right? Sure, offloading some details into a digital tool can be helpful and can preserve energy or brain space. Consider Tiago Forte's concept of a Second Brain, which involves using technology to build a system outside of yourself that can remember certain details (like phone numbers) so you don't have to. Storing the information (technology) should still complement the use of the information (human). The use of technology shouldn't replace the need to still think or strategize or use discernment in the application of these details. Unfortunately, if left to its own devices (pun intended?), AI could have exactly that effect.
Remember that everything has an opportunity cost, and be aware of what you are losing when you gain ease. And beware the tools that cost you more than you should be willing to give up.
Discernment & Diversity of Thought are Uniquely Human - and are at Risk
Since the advent and growth of GenAI tools, the way people search for information has become alarmingly passive. One major thing that bothers me about the move from Google to ChatGPT as a search engine, for example, is the lack of discernment it breeds. Google gives us sources, and it's up to us to decide which ones to use and how to use them, or which ones to learn from and how to make up our minds as a result. But ChatGPT gives you answers - and questionable ones at that. People who rely on ChatGPT for results are losing discernment and the ability to think critically about how things fit together - how ideas synthesize, how solutions align with problems, the deeper and longer-term impact, and how else they might apply. And those are the skills that are desperately necessary now and always.
Along with the lack of discernment comes less diversity of thought. AI tools tend to give you either one output per input (think of the singular Q&A process of ChatGPT, regardless of how many iterations you go through) or a few suggested options based on one overall view of your input (think of that as a few similar options based on one average profile). It may seem like this is a good way to train GenAI to produce something consistent with your brand voice, in line with your style guide, or logical given your previous blog post titles, and that may be true at times. But it could also remove your ability to broaden your perspectives and learn from a wide variety of sources rather than just what a model thinks is similar to you. That's a cost I'm not willing to pay.
We learn from those who are different. We innovate when we think outside the box. We expand our horizons when we try new things. AI, when misused, or when set up to curate similar results to your previous activities, limits that spectrum of perspectives.
A Jumbled Mess: The Content Perspective
Here's another angle to consider: GenAI is not the end-all be-all for content creation. People who use GenAI for content creation are churning out non-unique, non-personalized, non-human, jumbled messes of content, full of overused jargon and mass-reproduced concepts. Content shouldn't just be about generating something to check a box and move on; it should be about creating something unique and personalized and resonant, something strategic and insightful and meaningful, something that will inspire connection or a change in thought or behavior. The best content is human. And humans are receiving the short end of the stick while other humans are using GenAI to bypass the vital creative process required to truly create something new and impactful.
One bothersome factor of using a tool like ChatGPT to create content (regardless of how much editing you do) is the plagiarism piece. ChatGPT, when you break it down, is plagiarism. Inherently so. It is not creating - it is repurposing without citation. That is the definition of plagiarism. Instead of giving you a link so you can get the information directly and do something with it (i.e. make sense of it and quote it and cite it and synthesize it with your other ideas), ChatGPT takes quotes and information and summations and all this data and meshes it all together into one jumbled output. Generative AI isn't generating anything - it's copying all that has come before it and spitting out one messy amalgamation. A connection of mine on LinkedIn said she wants to call it DerivativeAI instead, and I LOVE that. GenAI is deriving all of its output from your ideas (and others' ideas) without giving you (or anyone else) credit for them.
And don't even get me started on the perils of GenAI in the publishing industry. AI-generated manuscripts are suddenly being mass-produced and sold online, some even with established authors' names on them without permission. There are huge ethical concerns here. First, books that use GenAI are not actually written by the people who claim they wrote them. Second, if random people are using GenAI to produce entire novels and then slapping successful authors' names on them just to sell copies and make money, that's a severe issue. That would be an issue whether people were using GenAI or not, but for some reason, people think that using GenAI to plagiarize authors' work is "more okay" than writing an entire book by hand and saying a famous author wrote it just to make a buck (trust me - the people who use GenAI to create books are not the type to sit down and write an entire manuscript). Authors are extremely wary both of people plagiarizing books and selling them under their names and of anyone using GenAI to create and sell books. The issues lie both with the plagiarism piece (covered in the last paragraph) and with the idea of originality in content. The severity of this issue has skyrocketed in the last year or two, and it needs to be stopped immediately. But the proper safeguards are not yet in place. Until then, with GenAI around, how can we trust anything we see online?
Lastly, don't be fooled - it's not as efficient as people thought it would be. 77% of workers say AI tools have decreased their productivity and added to their workload. That jumbled mess generated by the tool takes a lot more work to clean up and clarify and simplify and align to your brand voice and "humanize" and optimize for connection and infuse with meaning. If you had done it on your own to begin with, you would have saved time, and the product would be better and unique.
Clearly, its effects are the opposite of its intentions. And yet the draw of ease is so strong that people continue to fall prey to its false promises.
The adverse impact continues. Thinking strategically from the content creation perspective, how can you differentiate yourself if you're using a tool that regurgitates what other people have already done? Being innately human and creating unique content from scratch is the biggest differentiator in the content world and that value is being bolstered by tools like this. Worse, as we noted, it's not citing any of its sources. Citing sources properly is very important for knowing whether your source is reliable AND giving the author credit. I care too much about writers (and designers) to allow tools to use their (our) work without giving credit where credit is due.
We Need to Think - And To Practice How to Think
There are certainly some ways AI tools can help you offload and/or automate menial tasks that will free you up to do the more strategic, impactful, human decision-making and creation. That should be the goal, right? Unfortunately, I'm seeing people rely too much on AI tools - namely GenAI tools - to replace human skills (rather than complement them). And that's not okay.
How do you help people see it's a problem? It's a serious question we have to continue to contemplate and discuss. Many people who use these tools are the people who appreciate shortcuts and focus only on the output without caring about the journey or what it means for the future, outside of the task or themselves. How can you get people who prefer shortcuts to integrity to change their behaviors for the better, especially to benefit others?
I think that's the key - thinking outside of the task or ourselves. I've had rich conversations with people who, like me, wonder what the [mis]use of these tools could mean for humanity, for brain development, for all of the things that matter about our existence beyond delivering the next work task. I'm starting to see a divide between people who are interested in making the current task easy and people who care about what that action, performed by the masses at high quantities, means for humanity in the future.
For me, it comes down to discernment, to ethics, and to the future of human development. A recent CNBC article covered a similar topic, stating that people need what organizational psychologist Richard Davis calls "receptivity" - and that fewer people have it today than ever before. Davis, the managing director of Toronto-based leadership consulting firm Russell Reynolds Associates, describes receptivity as "the ability to have good judgment, to have insight about people," and it's a necessary skill for humanity. Not having receptivity, or discernment, is "a major concern." Receptivity is "a cognitive ability that you need to actually exercise in order to not lose it," Davis says. Unfortunately, relying on tools like ChatGPT can cause these skills to wane, can allow these muscles to atrophy. And when they atrophy, when we don't continually practice discernment and receptivity and proper judgment, we are less successful in the long run.
So, yes, I believe that if we use GenAI tools heavily, especially to replace human skills, we are cheating ourselves - and future generations - out of not only the abilities to choose, create, think critically, and synthesize, but also the abilities to develop the muscles to choose, create, think critically, and synthesize. Working these muscles, along with reflection and self-awareness, leads to growth over time. If we know how to think about one thing, we can usually apply that thought process in other areas. Transferable skills and critical thinking cannot be replaced by technology. The lack of use of these muscles, though, will cause them to atrophy, so we have to maintain their use consistently in order to retain - and grow - their strength.
Zoom Out - and Consider the Impact Beyond Yourself
On the note about us choosing things and discerning what's good and right and sensible vs what's bad or wrong or unsuitable, I recently read an essay published by WBUR (Cognoscenti) that talks about how AI tools (when not used properly) can remove the beauty of discovering new things, of evolving who we are through our ever-changing choices, of enjoying a diversity of sources and ideas and perspectives rather than the one, solitary output from one AI tool. It's a great read. When we have our information curated for us, we only see what's handed to us based on one string of input. We limit discovery. And that's sad.
We need the beauty of discovering new things. We need the struggle of figuring out which source is reliable or which quote would align best with our argument. We need the ability to make choices for ourselves. We need to practice the art of appreciating diverse perspectives and thinking critically about the reliability of sources or the applicability of concepts or the synthesis of multiple viewpoints. We need to connect on a human level and to express our unique thoughts in a constructive way. We need to think, to write, to create, to choose, to evolve. Tools or no tools, these are the bases of being human that we need to preserve and progress now and always.
So far, unfortunately, GenAI has been - and promises to be - a hindrance to these nobler goals, to future generations, to longer term success for society. We need to think beyond the task at hand and not surrender discernment and progression for a little bit of ease right now. It's not just about us, and it's not just about right now. We need to think about the long-term implications and the higher-level meaning, and we need to make good decisions now so that our future stays bright.
