This post is part of an ongoing log where I’m exploring how AI is shaping leadership, education, and the future of work. Rather than a finished statement, this is a living reflection — one I’ll return to with updates, revisions, and new insights as the landscape evolves. I welcome dialogue, disagreement, and shared curiosity.
You can’t open an article, attend a conference, or scroll through social media without hearing something about AI these days. But AI (artificial intelligence) is not just another buzzword. Many of us are already using it daily to speed up responses, brainstorm ideas, and help us plan for our futures. And unlike buzzwords of the past that quietly faded into the background (MOOCs, gamification, big data), I don’t think AI is going to fade away. It will continue to revolutionize our everyday work and interactions.
I remember the first time I was shown how to knowingly interact with AI: a colleague asked me if I had used ChatGPT yet. I said no but asked him to show me. He gave me a simple prompt to start with: “Ask it, ‘What are 3 secret spy names you would give me?’” As funny as the first set was, we went through several iterations, such as spy names from 007, Harry Potter, and Lord of the Rings. As entertaining as that first experience was, my GPT skills have come a long way since then. A year later, I am using GPT daily in my professional and personal life to craft emails more quickly, ideate options, and even plan trips. I’ve built custom GPTs so I can load them with a repository of information and stock them with different sets of expertise.
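For readers curious what that looks like under the hood: custom GPTs are configured through the ChatGPT interface rather than in code, but the underlying idea (a standing persona plus a repository of reference material sent along with each request) can be sketched roughly with the OpenAI Python SDK. This is a minimal illustration of the concept, not how ChatGPT builds custom GPTs internally; the model name, file name, and travel-planner persona below are my own placeholder assumptions.

```python
# A rough sketch of the "custom GPT" idea: a fixed persona (system prompt)
# plus a loaded repository of reference notes, reused for every question.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name and notes file are illustrative placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# The "repository of information" the assistant is stocked with.
reference_notes = Path("trip_planning_notes.txt").read_text()

persona = (
    "You are a travel-planning assistant. Answer using the reference notes "
    "provided, and say so when the notes do not cover a question."
)

def ask(question: str) -> str:
    """Send one question to the persona-plus-notes assistant."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "system", "content": f"Reference notes:\n{reference_notes}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Draft a three-day itinerary based on my notes."))
```

The point is less the code than the pattern: the expertise lives in the standing instructions and the loaded notes, so every new question starts from that shared context instead of a blank chat.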
Chances are I had been interacting with AI unknowingly for years. I talk to my Alexa every day. I used to rely on Google for answers to my questions. Even Microsoft Word and Grammarly have been giving me suggestions for years. But there is something about GPT, isn’t there? Those of us who use it, or another large language model (LLM) like it, on a daily basis are learning how to collaborate with a tool to be more productive in a more precise way. Most recently, I have found AI helpful in ways that feel closer to human interaction. I used the voice mode to walk through updating my CV and to brainstorm my website updates. I use the Juniper voice (open and upbeat), which has also given me a lot of confidence in my work.
Sure, GPT isn’t always right. It frequently leaves out whole paragraphs of information and starts to flake out when a project gets too big, but it is still useful. I describe it as a grad student: super helpful, but you still need to check its work.
AI in the Workforce
In the general workforce, there are three types of AI adopters I’m interacting with right now. The first group knows AI exists but ignores its potential and doesn’t use it on a daily basis. The second group uses it every day and keeps exploring its potential in our professional and personal lives; we try out new tools all the time simply because we’re interested in experimentation and in being more productive. And then there are those who are immersed in the research and the custom coding, applying AI in all sorts of tailored ways to improve their customer engagement and their imprint on the world. This group also understands AI’s full potential – both the positive and the negative impacts for the future of our society. The large majority are learning and using AI on their own, without any real support or encouragement from their employer. Employers who don’t realize how powerful AI is, or how many of their employees are already using it multiple times a day for nearly every task, are simply avoiding reality.
Most employers aren’t even asking for AI as a skill yet in job descriptions. The annual AI Index Report from the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University is widely regarded as one of the most authoritative sources for data and insights about AI. Stanford partnered with Lightcast (a leader in labor market analytics) to show how this data on AI intersects with the labor market. Currently, AI is really only sought after in tech, analytics, and finance (fraud detection) positions. When employers finally get around to listing AI as a skill needed for most jobs, what will they be looking for? First, employers should want employees to know what AI tools actually are, where to find them, which existing tools have AI built in, and how to use them.
Employees today are coming out of degree programs that don’t even teach them how to utilize AI in their work. How many of us in higher ed are still focused on penalizing students for using AI instead of teaching students how to team with it? AI is here, and like any new tool, waiting around won’t make it go away. This might be a case where we all just need to jump in and learn as we go.
Of course, employers will be looking for proof that AI actually supports teams in being more productive. Some research just came out that supports this point. A team of Harvard researchers and Ethan Mollick published research that backs up WHY interacting with AI is a skill worth teaching. In a study of 776 employees at Procter & Gamble, researchers found that teams and individuals are more productive and creative when utilizing AI. The types of tasks employees teamed with AI to work on included generating innovative ideas to boost shampoo sales, developing new product concepts, designing packaging, and proposing retail strategies. Teams using GPT-4 outperformed or matched traditional teams, especially on top-quality outputs. AI helped bridge expertise gaps, improved team creativity, and made the work experience more positive – suggesting AI can function as a true teammate, not just a tool. Surprisingly, AI use also improved emotional experiences at work: people using AI reported higher excitement and enthusiasm while experiencing less anxiety and frustration. This research is the proof many of us need to begin to acknowledge that AI is here, and it can support organizations rather than hinder people’s creativity.
Mollick’s recent Substack post announcing the release of the working paper is titled “The Cybernetic Teammate.” Now, the word ‘cybernetic’ is not new to me. While completing my PhD, I used a framework from the great Erich Jantsch. He was, I would say, one of the founders of futures thinking, systems thinking, and sustainability. Jantsch was also known as a prominent figure in cybernetics. Cybernetics is the study of communication, feedback, and control in systems, human and machine alike, and one of its central concerns is the unintended consequences of technology on people and society. Perhaps one goal of this community is that humans retain control and can think through the consequences of implementing new, innovative technologies. So, before we even dive into the research and the article, the field of cybernetics asks us to consider what it means when we implement and utilize AI as a teammate, for good or bad.
AI and Ethics
But what about the bad and unethical side of AI? Joseph Gordon-Levitt made a legitimate case recently. AI companies are using data scraped from the internet, provided (unknowingly) by real people, to generate content. He uses the example of a professor who puts their lecture online for free. An AI bot can find it and build a model that allows people to have a conversation with that material, and the AI company makes money through subscriptions while the professor is never paid for that use. He asks if there is a world where human ingenuity AND AI can both be compensated.
On another note, I found this article linked in the TLDR newsletter really interesting: “I’d rather read the prompt.” In it, PhD student Clayton Ramsey pleads with us to write on our own. As he grades papers, he faces what so many academics face: “The ChatGPT rhetorical style is distinctive enough that I can catch it, but not so distinctive to be worth passing along to an honor council.” Turnitin is catching GPT copy-and-pastes left and right these days. He urges students and others, “Don’t let a computer write for you!…I say this because I believe that your original thoughts are far more interesting, meaningful, and valuable than whatever a large language model can transform them into.” However, we can imagine why students and many others are turning to language models to write and rewrite for them: too much is on the line: a grade that is subjective, an article submission that could be rejected, or an email that might result in a student suing you if you don’t phrase it correctly. Clayton brings us back to the main point of the article: “I now circle back to my main point: I have never seen any form of [creative] generative model output (be that image, text, audio, or video) [that] I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one’s own experience – if there’s no experience to share, why bother? If it’s not worth writing, it’s not worth reading.” If we could back away from being so critical in our grading and judgment of certain works, maybe more people would feel free to post in their own words. But the reality is that this world has become far too critical. We see it in social media comments, and we see it in journal article and book rejection letters. There are far better ways to act in life, such as supporting one another instead of nasty criticism.
Resources
- Stanford HAI: AI Index Report
- Lightcast: AI and the Labor Market
- Ethan Mollick / Procter & Gamble research: The Cybernetic Teammate
- Joseph Gordon-Levitt: AI Companies Want to Legalize Theft
- Clayton Ramsey: I’d Rather Read the Prompt