CoSTAR National Lab:
AI Principles
Last updated: February 2025
Context
These principles provide guidance to users of the CSNL (SMEs, other researchers, cultural institutions and partners) and to researchers working with the CSNL's infrastructure on the use of AI in R&D activities. They apply to all CSNL collaborations, whether or not they physically take place within the Lab itself.
Artificial Intelligence will form a key component of the R&D activities undertaken by the CoSTAR National Lab (CSNL) as well as a crucial part of the infrastructure we offer for companies to access. These principles underpin our approach to the responsible use of AI: by our researchers, with our collaborators and in our partnerships.
These principles set out a definition of AI, a commitment to annual review of our approach, and a commitment to listen to the needs and concerns of industry as we tackle the challenges and opportunities of AI.
Definitions
Artificial Intelligence is an umbrella term for a range of technologies and approaches that often attempt to mimic human thought to solve complex tasks. Things that humans have traditionally done by thinking and reasoning are increasingly being done by, or with the help of, AI. (Information Commissioner's Office)
Generative AI is a type of artificial intelligence that can create new content - including text, images, code, music, and synthetic data - by learning patterns from existing data. Unlike traditional AI systems that primarily analyse or categorise existing information, generative AI can produce novel outputs that maintain the statistical properties and characteristics of its training data.
Principles
We will act to ensure that our use of AI furthers the CoSTAR National Lab mission to drive inclusive, ethical and sustainable growth through R&D and innovation.
Promote and respect human creativity by undertaking research that enables the creative sector to be more creative and to develop and support talent.
Fundamentally respect and promote creator rights.
We will promote the responsible, secure and safe use of AI by all researchers and staff in the National Lab as well as in our collaborations, partnerships and programmes with the creative sector. Our responsible use of AI includes evaluating the use of any model, dataset or algorithm in terms of the following priorities:
Provenance: Can data provenance be established, and is the dataset clear of copyright infringement or appropriately licensed, with data owners compensated?
Bias: We will proactively address bias in our own work, and we will not support projects that are likely to perpetuate discriminatory bias towards any groups with protected characteristics.
Transparency: We will act with transparency regarding where, how and when we use AI in our R&D work, including clearly detailing the provenance of any data generated and whether we are generating synthetic data for experiments.
Sustainability: We will always promote more environmentally friendly uses of AI, including disseminating best practices.
We will always comply with the research ethics policies of the National Lab's partner universities, working to develop and disseminate best practice. We will work with creative industries partners to ensure that their own practices comply with:
Current UK government policy: https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety
Ofcom’s approach to ethical harms: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/protecting-people-from-illegal-content-online/
European Parliament AI Act: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligenc
The principles explained on the UKRI Trusted Research and Innovation page and the guidance issued by the National Protective Security Authority (NPSA).
In addition, we recognise that any model, algorithm or dataset can be misused, and we cannot control the use of commercial models by others. We will review our use of models, datasets and algorithms on an annual basis, and whenever we are alerted to incidents of illegal or unethical use, or of misuse, by reputable news organisations or a representative group of creative sector companies.
We will always act with informed consent in our R&D, ensuring that researchers, partners, users and the general public are aware of when and how their data will be used in our AI-based research. This includes ensuring that we will:
Never use sensitive/personal data without informed consent.
Ensure that the maximum possible level of control is provided to users over their data, and that we transparently and clearly explain what is and is not possible during the consent process.
Ensure that, as far as possible, any identifiable data and copyright assets can be removed on request.
We believe that our role to generate growth in the creative industries is best served by prioritising and promoting ecosystem development over proprietary development. To enable this, we will:
Prioritise the safe use of open source and smaller models that can be run locally over commercial, very large, and cloud-based models.
If you have any questions about our CoSTAR National Lab AI Principles, please contact costar@rhul.ac.uk.