
Artificial minds & legal lines - navigating Generative AI in the FMCG sector

Generative artificial intelligence (Generative AI) is sparking a revolution across numerous industries, including the Fast-Moving Consumer Goods (FMCG) industry. The corporate drive for increased productivity and cost reduction, combined with Generative AI’s greater accessibility and multi-use functionality, has contributed to its exploding popularity over the past year. In 2023, Microsoft announced a $5 billion investment in computing capacity and capability to help Australia seize the AI era, with Generative AI projected to become a US$1.3 trillion market by 2032.[1]

Benefits for the FMCG Industry

The FMCG industry especially stands to benefit from enhanced supply chain management, personalised marketing strategies and product innovation, ushering in a new era of efficiency and consumer engagement. Benefits to the FMCG industry include an improved product offering, an increased customer base and further revenue opportunities. However, as with any new technology, the potential risks should be considered alongside the benefits and, to the extent possible, mitigated.

In this article, we:

  • explore some ways Generative AI is being used in the FMCG industry

  • highlight potential risks of using Generative AI and key things to consider before its implementation

  • examine the EU’s approach to governing Generative AI’s deployment.

How the FMCG industry may be using Generative AI

The FMCG industry is known for high-volume sales, quick inventory turnover and rapid changes in consumer preferences. Using Generative AI, and leveraging big data and the Internet of Things, enables FMCG businesses to auto-generate real-time insights into market and consumer trends. In turn, FMCG businesses can more efficiently and creatively adapt their market positioning and pricing strategies, and pivot their product offerings.

A few examples of how Generative AI has been used in the FMCG industry include:

  • according to one of Mars’ spokespersons, AI is already:

‘helping us predict whether cats and dogs could develop chronic kidney disease; speeding up sequencing of pet genomes to provide individualized nutrition and care; and unlocking efficiencies in our manufacturing operations through digital twin technology’[2]

  • Amazon has recently released ‘Amazon Lex’, a service for building conversational interfaces into any application using voice and text (currently powering the Amazon Alexa virtual assistant)[3]

  • New Zealand supermarket chain, Pak’n’Save, rolled out the ‘Savey Meal-Bot’ which was intended to help shoppers determine meal plans for otherwise unwanted leftover ingredients.[4]

Generative AI – the risks

With all forms of technology there is a need to strike a balance between automation and human expertise. Failing to consider the limitations and risks of AI-powered technology such as ChatGPT, and failing to mitigate those risks, potentially exposes users to concerning issues.

Whilst some risks may not yet be fully understood (with many AI tools still in their infancy), we provide an overview of some of the key legal risks already emerging from the use of Generative AI in the FMCG context:

  1. We already know there are concerns around data protection and privacy. OpenAI’s Privacy Policy, which governs ChatGPT, cautions that users’ personal information may be shared with unspecified third parties to meet OpenAI’s business needs, without informing ChatGPT users. Data use and security must be considered before engaging with any Generative AI tool.

  2. Intellectual property risks are ever-present when Generative AI is used, as the law continues to wrestle with how best to deal with the outputs of Generative AI tools. Australia’s position under the Copyright Act 1968 (Cth) is that copyright subsists in the original works of an author who is a ‘qualified person’. On this definition, a Generative AI tool cannot be an author. While organisations such as OpenAI appear generous in stating in their terms of use that they ‘assign all right, title, and interest if any, in and to the output’, this is of little assistance to assignees: if the owner of the Generative AI tool never owned any rights in the output in the first place, it cannot assign such rights to others.

  3. Reputational risks can also arise, especially where Generative AI is deployed in customer-facing applications. This is critical in brand-conscious and highly competitive industries like FMCG, where the pursuit of personalised experiences relies heavily on customer input. Hallucinations generated by AI models can expose customers to misleading information, which may prove harmful if relied on. For example, one user of Pak’n’Save’s ‘Savey Meal-Bot’ generated a toxic gas recipe using ingredients such as bleach and ammonia, which drew unwanted public attention to the company.[5] Incidents like this may test limitation of liability provisions in the terms under which customers are engaged, especially in the consumer law context, and instead of appearing innovative, a company may be perceived as careless at best.

How can Generative AI risks be reduced?

As a first port of call, FMCG businesses should consider whether the deployment of Generative AI lends itself to their day-to-day operations. If it does, the next step is to ensure an appropriate AI governance framework or AI risk mitigation action plan is in place to assess the unique risks that Generative AI poses to the organisation. Key points to consider include:

  • what data has the Generative AI tool been trained on? 

  • have employees been informed of the risks regarding the Generative AI tool in question?

  • what cybersecurity safeguards have been put in place in connection with the Generative AI tool? 

  • if possible, will employees be able to check outputs for hallucinations, errors, and defamatory or plagiarised content? 

Once these points have been addressed, the key is to maintain and refine the AI governance framework. Early communication with other FMCG stakeholders, especially in upstream applications such as manufacturing, also sets appropriate Generative AI usage boundaries and fosters a sense of accountability and trust. In downstream applications, it is prudent to maximise appropriate employee engagement and upskilling opportunities with the technology. 

The current AI regulatory landscape in Australia

1. Australia’s AI Ethics Framework – the Federal Government’s initiative to make Australia a global leader in responsible and inclusive AI

Published in 2019, the framework was designed to ‘guide businesses and governments to responsibly design, develop and implement AI’. It is part of the Australian Government’s commitment to making Australia a global leader in responsible and inclusive AI. 

It includes eight voluntary principles that organisations can apply to help:

  • achieve safer, more reliable and fairer outcomes

  • reduce the risks of negative impacts on those affected by AI applications

  • ensure the highest ethical standards and good governance are adhered to when implementing AI.

2. Australian Government’s response to the safe and responsible AI in Australia consultation

In January 2024, the Federal Government’s Department of Industry, Science and Resources announced an interim response to the issues raised in its discussion paper on the safe and responsible development and deployment of AI in Australia (AI Response).[6] Although not all AI applications require a regulatory response, the Department found that the existing framework did not adequately address known risks and called for stronger protections in privacy law, a review of the Online Safety Act 2021 (Cth) and the introduction of new laws relating to misinformation and disinformation. Submissions also identified certain gaps in existing laws which did not support the deployment of AI systems in high-risk contexts such as critical infrastructure, medical devices and biometric identification.

The AI Response focuses on governance mechanisms to ensure AI is developed and used safely and responsibly in Australia and builds on the Rapid Research Report on Generative AI delivered by the government’s National Science and Technology Council.

3. Federal Government's temporary expert advisory group

The Australian Government’s temporary AI expert group was created as part of its interim response to the safe and responsible AI consultation. It is made up of legal, ethics and technology experts who will advise on testing, transparency and accountability measures for AI in legitimate, yet high-risk, settings.

EU’s approach to AI regulation – Australia take note!

On 13 March 2024, the European Parliament voted in favour of implementing the EU Artificial Intelligence Act (AI Act). We expect Australia will likely follow suit, by implementing similar legislation to that of the EU or another jurisdiction.

The AI Act defines AI systems as machine-based systems that:

  • operate with varying levels of autonomy

  • may exhibit adaptiveness after deployment

  • for explicit or implicit objectives, infer, from the input they receive, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. 

The AI Act will have extraterritorial application, meaning it applies to providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the EU, irrespective of whether those providers are established or located within the Union. This is particularly relevant to the FMCG industry, which relies on international supply chains to manufacture and distribute products by way of imports and exports. 

Further, for the EU Parliament:

  • the priority is to:

‘make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.’[7]; and

  • the AI Act specifically categorises AI risk into three separate categories: 

Risk 1 - Prohibited AI

These are AI systems classed as posing an unacceptable risk: they are considered a threat to people and will be banned. They include:

  • cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children

  • social scoring: classifying people based on behaviour, socio-economic status or personal characteristics

  • biometric identification and categorisation of people

  • real-time and remote biometric identification systems, such as facial recognition. 

Risk 2 - High Risk AI

These are AI systems that negatively affect safety or fundamental rights and are split into two categories:

  • AI systems that are used in products falling under the EU’s product safety legislation (this includes toys, aviation, cars, medical devices and lifts)

  • AI systems falling into specific areas that will have to be registered in an EU database:

    • management and operation of critical infrastructure

    • education and vocational training

    • employment, worker management and access to self-employment

    • access to and enjoyment of essential private services and public services and benefits

    • law enforcement

    • migration, asylum and border control management

    • assistance in legal interpretation and application of the law.

Risk 3: General Purpose AI

These are AI models, such as ChatGPT, which, when trained on large amounts of data, are capable of competently performing a diverse range of intellectual tasks.[8]

Key takeaways

  • Determine how AI may be used as part of your FMCG business. 

  • Ensure your organisation has suitable policies and protocols in place governing the use of Generative AI.

  • Do not post private, confidential or otherwise commercially sensitive information into Generative AI tools such as Large Language Models (LLMs).

  • Check and re-check the factual correctness of outputs generated by LLMs. If you choose to rely on, or use, LLM outputs, you do so at your own risk; if you publish them, you could mislead your customers.

  • Consider warranties offered by providers of LLMs in their terms of use.  

If you require any assistance with AI, intellectual property or data and privacy related matters please do not hesitate to contact Bartier Perry Lawyers. 

Authors: Rebecca Hegarty, Robert Lee and Juan Roldan


[5] See the Pak’n’Save example above, where one user generated a toxic gas recipe using ingredients such as bleach and ammonia, drawing unwanted public attention to the company.

[6] Federal Government’s response to its 2023 ‘Safe and responsible AI in Australia’ Paper