How Can HR Navigate The Challenge Of ChatGPT?

June 22, 2023

Exciting tools such as ChatGPT offer promising opportunities. However, caution is warranted, as evidenced by widely reported cases.

AI is not a novel technology, and its integration into human resources processes and policies has been ongoing. Given the current media coverage and political debates, some managers may be wondering why it now demands such urgent attention. The answer lies in the rise of large language models (LLMs) and general-purpose AI systems that can ‘learn’ and move beyond their original tasks. In some cases, programmers themselves struggle to track how an AI tool acquires knowledge or produces results. This uncertainty is heightened by the fact that widely used tools are often provided free of charge by organisations operating in a regulatory gray area.

In a fiercely competitive market overshadowed by the imminent threat of a global recession, business owners, managers, and employees are acutely aware of the precarious nature of their investments and employment. They are naturally inclined to explore any potential source of competitive advantage. Exciting tools such as ChatGPT or Google Bard, as well as agent-based tools that help users generate effective prompts, offer promising opportunities. However, caution is warranted, as evidenced by widely reported cases such as the Samsung engineers who pasted confidential source code into ChatGPT, or the lawyer who submitted a ChatGPT-generated brief containing fabricated case citations. Additionally, as the European Union leads the pushback against Big Tech, the human resources function finds itself at the forefront of what will undoubtedly be seen as the transformative shift of the decade.

For any HR manager seeking to navigate this challenging period, the following steps are recommended:

  1. Ban Large Language Models (LLMs) in the Workplace:

Prominent figures like Elon Musk and the Future of Life Institute have called for a pause in AI development, and even OpenAI’s own chief executive has appealed to the US government for regulation. Italy’s data-protection authority only recently lifted its temporary ban on ChatGPT, and this week Google issued a frank warning to its own staff about entering confidential information into chatbots. When the chef advises against consuming their own food, it is wise to consider making changes. As a first step in establishing a general AI policy framework, management should prohibit the use of LLM-based tools, specifically naming Google Bard, ChatGPT, and other tools that may gain access to sensitive corporate information, such as slide-development tools like Tome and SlidesAi.io. Issuing a clear and unambiguous directive will lay the foundation for testing and gradual implementation of AI applications once their benefits and risks are better understood.

  2. Conduct Internal Hacking Exercises:

HR should consider adopting the concept of ethical hacking from the IT realm, where an external party is typically invited to test the firewalls, passwords, and other security systems that protect sensitive information. In a company setting, the equivalent targets are the administrative systems and processes implemented to ensure quality and adherence to corporate culture. LLMs are particularly effective at engaging with standardised administrative documents, which are also prone to information leaks. The next step, therefore, involves identifying an innovative employee and, under the supervision of a manager, allowing them to use ChatGPT and similar tools to explore how everyday use of these tools could expose the company to risk. For instance, how could generative AI be used to process expense requests, conduct procurement exercises, qualify sales leads, or justify requests for promotion or salary increments?

  3. Provide Official Training:

It is not necessary to train every employee, but it is crucial to offer some form of official training support accessible to staff at all levels, including those who may not typically use company software or computers; most individuals now carry smartphones and can reach these tools regardless. Without access to official training resources, employees may turn to ‘gray information’ sources such as social media, where unverified practices spread rapidly through short-form video platforms like TikTok. If the company fails to provide a secure space for information sharing and mentoring, employees will inevitably absorb anxiety from their social media feeds. Offering official training counteracts this and ensures a reliable, accurate source of guidance.

The above three steps will buy time for any organisation to identify beneficial use-cases for new forms of AI, and for the industry to mature. During this phase, IT and HR teams should collaborate to rank emerging tools by the value they offer, the likelihood that they breach privacy, and the impact of such a breach; a simple scoring sketch follows below. However, the pause cannot be indefinite, as the productivity and effectiveness gains of this new technology are obvious. The onus is therefore on implementing effective and efficient governance so that innovation can continue safely.
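By way of illustration, here is a minimal sketch in Python of how such a ranking might be recorded. The tool names, the 1–5 scales, and the value-minus-risk priority formula are all hypothetical assumptions for demonstration, not an established methodology; real scores would come from the joint IT/HR review and the internal hacking exercises described above.

```python
from dataclasses import dataclass

@dataclass
class ToolAssessment:
    """A joint IT/HR assessment of one AI tool (all scores on a hypothetical 1-5 scale)."""
    name: str
    value: int        # expected productivity benefit
    breach_risk: int  # likelihood the tool leaks sensitive data
    impact: int       # severity of a leak if it happens

    @property
    def risk(self) -> float:
        # Classic risk formula: likelihood x impact, normalised back to a 1-5 range.
        return self.breach_risk * self.impact / 5

    @property
    def priority(self) -> float:
        # Higher score = stronger candidate for early, supervised adoption.
        return self.value - self.risk

# Hypothetical example entries for demonstration only.
tools = [
    ToolAssessment("Slide generator", value=3, breach_risk=4, impact=5),
    ToolAssessment("Self-hosted internal chatbot", value=4, breach_risk=2, impact=3),
    ToolAssessment("Public LLM chat", value=5, breach_risk=5, impact=5),
]

# Rank tools from strongest to weakest adoption candidate.
for tool in sorted(tools, key=lambda t: t.priority, reverse=True):
    print(f"{tool.name}: value={tool.value}, risk={tool.risk:.1f}, priority={tool.priority:.1f}")
```

However crude, a shared scoresheet like this gives IT and HR a common vocabulary for deciding which tools to pilot first and which to keep banned until the governance framework matures.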

Author
Stephen King, FHEA, M-PRCA, M-EUPRERA

Senior Lecturer in Media at Middlesex University Dubai
