
AI in the Workplace – whether we like it or not

We are at the crest of change yet again. While artificial intelligence (AI) itself is not new – many organisations, knowingly or not, already use forms of AI technology within their businesses – we are now at the cusp of a major shift. Just as the internet changed the way we work, connect, shop and spend our leisure time – and in turn how businesses operate – the AI revolution is set to usher in massive change affecting job design, workforce planning and organisational development.

By creating a comprehensive policy around the use of AI in the workplace, employers can help ensure that the technology is used effectively and ethically, and that employees are trained to use it safely and responsibly. The policy should be formulated with guidance and governance in mind. We have already seen some of the risks and opportunities that generative AI can bring; however, it is not without its limitations:

  • The difficulty in drafting an AI policy for a business is that policy cannot be defined without first knowing the purposes for which AI will be used.
  • Any business in a regulated space – tax, accounting, law/HR, transport, pharma, food, etc. – should be extremely cautious about the use of AI tools, and should not presume that any output is compliant.
  • AI is only as good as the data it is given: the output is only as good as the input.

How are businesses using ChatGPT?

Examples include:

  • Writing templates for online content.
  • Handling customer service correspondence (chatbots).
  • Writing code.
  • Writing sales pitches.
  • Summarising long reports.
  • Analysing business trends.

Supporters of the system maintain that it doesn’t signal a replacement of traditional workers, but rather gives them a time-saving tool the likes of which they have never seen before. In other words, it’s opening new doors.

Food for thought

  • Set guidance-based principles. Most, if not all, employees will be using these tools within the next few years, so forbidding their use is not an option.
  • Data security needs to be at the forefront of all guidelines.
  • Be mindful of bias.
  • Experiment with the tools – knowledge is key, and this may also present a workplace opportunity.
  • Think strategically – can this tool add value in your company?


Why the need for a policy for ChatGPT?

As good as ChatGPT looks at first viewing, the system also has its share of limitations that could cause problems if left unchecked. Many of these limitations stem from the information bank available to it:

  • That bank does not keep up with the news cycle. The most recent information could be months, if not years, old, which means any ChatGPT-produced content could ignore the most recent relevant events.
  • The information bank can include biased sources. ChatGPT could misinterpret these as hard facts and present them as such.
  • The bank may contain sensitive data, which ChatGPT could deem fair game for widespread publishing. If organisations use ChatGPT for published content, they become liable.

In addition, the system (like any other technology) can make simple errors that might be challenging to spot. Limitations are only one reason why companies should create a policy for ChatGPT; the other is the pace of its adoption. Many will undoubtedly feel they need help to keep up and make sound decisions about its use. Policies help correct this imbalance.

What should a ChatGPT or AI usage policy contain?

  1. Data privacy and security: A policy should be put in place that outlines how the company will collect, store, and protect the data used by AI systems. This includes ensuring that only authorised personnel access data and that it is stored securely.
  2. Bias and discrimination: AI systems can reflect and amplify human biases and prejudices. The policy should address how the company will ensure that AI systems do not discriminate against individuals or groups based on protected characteristics such as race, gender, or age.
  3. Transparency and fact checking: The policy should require that AI systems used in the workplace are transparent and explainable. This means that employees should be able to understand how AI decisions are made and why specific outcomes are generated.
  4. Employee training: The policy should require that all employees who work with AI systems are trained on how to use them effectively and ethically. This includes understanding the limitations of the technology and the potential impact on their work.
  5. Accountability and responsibility: The policy should clearly define who is responsible for AI systems’ decisions in the workplace. This includes holding individuals and departments accountable for the outcomes generated by AI systems.
  6. Ethical considerations: The policy should address ethical concerns surrounding the use of AI in the workplace, such as the potential impact on employment and the ethical use of AI in decision-making.
  7. Continuous monitoring and improvement: The policy should require ongoing monitoring and modification of AI systems used in the workplace to ensure that they are functioning as intended and are not causing unintended consequences.

If you require any guidance on this topic or any other HR topic, don’t hesitate to contact the HR Team on 01 6622755 (Option 2) or alternatively via [email protected]