May 25, 2023

AI & ChatGPT

It’s amazing how far we’ve come, even since 2001, with the ease and efficiency that technology has brought to the workplace. It’s no surprise that it continues to evolve at an ever-faster pace. While we love the simplicity of asking Siri or Alexa a random question and getting an immediate answer, we must recognize that new technology can create a variety of unintended consequences. In our world of HR, that means risk increases in areas where we might not expect it.

Last month, The People Perspective shared an article about the EEOC issuing guidance on the use of Artificial Intelligence (AI). AI has exploded in popularity, and while some companies may still be in the early stages of learning about it, others are ahead of the curve. Many large, well-known companies have already banned employee use of AI tools, such as ChatGPT, due to concerns about unintentional leaks of proprietary or confidential information and potential security risks.

AI is evolving and there are many unknown factors at this point, but it will be important for employers to proactively learn about the impact of AI in their workplace and address any potential risks. AI can come into the workplace in a variety of ways, not just through employees using ChatGPT to perform the functions of their job. Many employers are purchasing AI software to perform business functions.

While AI can assist with various types of business operational activities, we’ll focus on the HR perspective, where it can be used in areas such as recruitment, performance management, and job description development. It can certainly create efficiencies, but the unintended consequences create risk that may outweigh those efficiencies. For example, if an HR employee discloses confidential details about a performance issue or an investigation to an AI tool in order to get guidance on how to handle it, the AI software retains that information, and it is no longer confined to the employer’s confidential records.

AI software is built on information collected on an ongoing basis, and developers use that data to continuously improve the AI’s output. That said, the sources behind AI output can be inaccurate or unreliable, and the data may contain built-in bias. Employers are responsible for the employment decisions they make. If they use AI data from a software company to make a hiring decision and an applicant files an EEOC claim for an unfair hiring practice, the employer is responsible for defending that decision, not the AI software company.

At The People Perspective, we stay on top of the latest trends and developments. If you are concerned about the impact of AI within your workplace, we can assist you with developing appropriate policies to mitigate your risk.

Written by: Lacy Bolling, SHRM-CP, PHR
