Artificial Intelligence in the Workplace...
So, I've been using AI for a while now. My photo editor, my text-to-speech utilities, my grammar checker, my bank's telephony system that recognises my voice to identify me or determine whether I'm stressed, my phone that matches my face and my fingerprints to let me log in. 'That's not AI,' I hear you say, but I found this definition, and I think they all fit:
"Artificial intelligence (AI) is the theory and development of computer systems capable of performing tasks that historically required human intelligence, such as recognizing speech, making decisions, and identifying patterns. AI is an umbrella term that encompasses a wide variety of technologies, including machine learning, deep learning, and natural language processing (NLP)."
But the latest giant technological leap everyone is obsessed with is generative AI. We're using it to write courses and to generate images, music, computer code, and much more. A few months ago, everybody was proclaiming to the world that they were into AI (well, not everybody, but at least a few) and that it would change everything. But no one seemed to know what they would use it for. A colleague of mine posted on Facebook about how he'd used it to generate an app, code included. Another is setting up events all over Facebook for AI enthusiasts to meet and talk about AI. I felt obliged to start investigating this phenomenon.
So, what did I find out? Well, lots of people are worried about the following possibilities:
An AI turning into a self-replicating consciousness (an evil one, it has to be said) that decides people are vermin and attempts to take over the world and wipe out the human race.
Knowledge workers losing their jobs (oops, that's me done for).
Human workers being universally replaced by AI bots.
To be honest, though, I find it useful. Now, you can find plenty of examples where it's being used to churn out low-quality products, and quite a few where it's being used to perpetrate fraud. For example, I've seen ads on TikTok, Instagram, and the like with AI-generated videos featuring famous people advertising things or expressing opinions you wouldn't associate with the real person. This is a scary development. We already knew we couldn't believe anything we saw on the Internet, and now they've taken that to a new level. On a less dramatic note, I bought a book on SAFe (the Scaled Agile Framework) on Kindle for £19.00, which was very short and monotonously worded. Most frustratingly, it didn't deliver the case studies that the summary promised. I suppose a human could have written it, but I highly doubt it.
So, how are we using it at MCAREL? Well, one of the main struggles is embedding it in our workflows. Usually, there's an assumption that a human does the job better. But that's probably only true if you plan to generate output and then throw it over the wall without editing or verification. We've all heard, if not experienced, that ChatGPT gets things wrong sometimes. In addition, if you're using it on a subject you know nothing about, you might not spot when it's wrong; if it's a subject close to your heart or area of expertise, then good luck getting it to express the concepts that are important to your work. So, how does it fit into our workflow? We're not looking to usurp the creative process; we just want to speed it up.
For courses, ChatGPT will generate an outline in about 30 seconds. That's a few hours of work saved, plus probably an hour or so's research. We then manually go through the outline and check that it covers all the areas we need to cover. Next, we feed our input back into the program to flesh out each item. If we generate examples, those are tested and corrected by hand. Along the way, things get corrected and changed until we're ready to publish the finished article. Our process doesn't allow someone who doesn't know the subject to generate and publish a course. So basically, we're not using the system to its full potential. It gives us some advantages in the research and speed departments. However, things are changing fast, and we fully expect that, over time, we will hand over more of the work to the tool.
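To make the outline step concrete, here's a minimal sketch of how a script might request a course outline from a chat model while keeping a human in the loop. This is a hypothetical illustration, not our actual tooling: the model name, prompt wording, and helper names are all assumptions, and the review step still happens by hand.

```python
# Hypothetical sketch of automating the "generate an outline" step.
# Model name ("gpt-4o"), prompts, and function names are assumptions.
import os


def build_outline_messages(topic, audience="working professionals"):
    """Assemble the chat messages we'd send to a model for a course outline."""
    return [
        {"role": "system",
         "content": ("You are a curriculum designer. Produce a numbered "
                     "course outline with module titles and two or three "
                     "bullet points per module.")},
        {"role": "user",
         "content": f"Draft a course outline on '{topic}' for {audience}."},
    ]


def request_outline(topic):
    """Call the OpenAI chat API if a key is configured; otherwise return
    the assembled prompt so a human can paste it into a chat UI instead."""
    messages = build_outline_messages(topic)
    if not os.environ.get("OPENAI_API_KEY"):
        return messages  # no key: hand the raw prompt to a human
    from openai import OpenAI  # third-party client, imported lazily
    client = OpenAI()
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    # The draft outline still goes to a subject-matter expert for review.
    return reply.choices[0].message.content
```

The point of the sketch is the shape of the workflow: the machine drafts, and the result is always treated as raw material for a human editor, never as a finished course.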
I haven't mentioned the tools for video generation, voice synthesis, and the like. Their output still seems detectable as non-human, but it's improving fast. These tools need content as input, and if you use AI to generate that content too, you're getting closer to taking humans out of the loop. But humans provide the demand. AI can't take over until it generates its own demand. Surely?