
AI and the Workplace: Opportunities and Challenges


Our Creative Technology team recently hosted a beer-and-learn seminar for the entire agency to discuss how generative artificial intelligence is shaping our everyday lives and the workplace—both the opportunities it offers and the challenges it poses. This is the second of two articles based on that seminar. Be sure to read the first article, 88’s 101 on Generative AI.

In the past year, AI (artificial intelligence) has entered the public view—from fearful news articles to online SaaS (software as a service) products touting its ability to solve complex problems for us and improve our lives. And it’s not just the general public and tech companies debating its role and feasibility; it has been a point of discussion at companies large and small.

Employees are dabbling in services like ChatGPT, Bing AI and Bard AI to aid their day-to-day work—either to speed up repetitive, mundane tasks or to find inspiration in the output. On the surface, making employees more productive and freeing their time for more human-centric problems sounds good, but it comes with concerns for both employer and employee.

The way employees use AI varies by company and role. Software engineers and web developers have been using tools such as GitHub Copilot to help write code for a couple of years. Creative industries have used AI services like DALL·E and Photoshop’s Generative Fill tool for creative ideation. Early adopters of AI as a tool can already point to time savings and increased productivity in their day-to-day work.

So, what are some of the challenges of working with generative AI?

The first concern is the quality of the generated response. Getting the best output requires a prompt that is specific and logically structured so the tool can produce the desired result—for example, “Write a 100-word product description for a reusable water bottle, aimed at hikers, in a friendly tone” will fare far better than “Write about water bottles.” Even a strong prompt, however, does not guarantee an accurate response. AI tools are still new, and employees may not be aware of these constraints.

Illustration generated using ChatGPT and DALL·E, based on the post content.

Even once you have constructed a strong prompt and the response appears high quality, accuracy is not guaranteed, and all responses should be fact-checked. AI models are not continuously updated and can readily provide information that is out of date or simply untrue. Publishing an unscrutinized response in a public forum could have serious consequences.

AI also suffers from algorithmic bias, producing unfair results that can harm groups of people along lines of race, gender, sexuality and ethnicity. This is not a new phenomenon; it has long been observed in search results and social media feeds. The bias is rarely part of the algorithm’s intended design—it more likely originates in the datasets the AI is trained on. AI developers are working to address this, but it remains an ongoing issue.

An even bigger concern is the use of AI to deliberately create and spread false information, especially online. Again, tech companies who develop AI, researchers, governments and watchdog groups are working to combat this misuse, but it is a serious challenge.

Despite these concerns, companies are adopting AI solutions in a measured, careful manner, knowing that, used correctly and ethically, AI can ultimately boost productivity.

If the pitfalls of AI are well recognized, and sufficient training and guidelines are established, companies can use AI safely—as long as its output is carefully monitored. It’s important to remember that AI is still in its infancy, so expect ongoing change and the occasional undesirable result. For a basic overview, be sure to read the first article in this series, 88’s 101 on Generative AI.