
France, Germany, and Italy have reached an agreement on how artificial intelligence (AI) should be regulated, according to a joint paper. The initiative supports “mandatory self-regulation through codes of conduct” for advanced AI foundation models, including large language models (LLMs), which are designed to produce a wide range of outputs. The joint paper is reported to note that the intrinsic risks lie in the application of AI systems rather than in the technology itself.

Major economic powers are moving with more concerted urgency to position themselves to regulate AI, and now big data, in language that goes beyond consent and internal compliance. Recently the UK government held an AI Safety Summit, and Germany has followed with a digital summit bringing together industry, policy-makers and research interests.

The joint paper makes interesting reading. In summary (after rehearsing some of the anticipated benefits of other EU regulatory initiatives), it proposes the following:

  • We believe that regulation on general-purpose AI systems seems more in line with the risk-based approach. The inherent risks lie in the application of AI systems rather than in the technology itself. European standards can support this approach following the new legislative framework.

  • When it comes to foundation models, we oppose setting untested norms and suggest building, in the meantime, mandatory self-regulation through codes of conduct. They could follow principles defined at the G7 level through the Hiroshima process and the approach of Article 69 of the draft AI Act and would ensure the necessary transparency and flow of information in the value chain as well as the security of the foundation models against abuse.

  • To implement our proposed approach, developers of foundation models would have to define model cards.

  • Defining model cards and making them available for each foundation model constitutes the mandatory element of this self-regulation.

  • The model cards must address some level of transparency and security.

  • The model cards shall include relevant information to understand the functioning of the model, its capabilities, and its limits, and will be based on best practices within the developer community. For example, as we observe today in the industry: number of parameters, intended use and potential limitations, results of studies on biases, and red-teaming for security assessment (an illustrative sketch of such a card follows this list).

  • An AI governance body could help to develop guidelines and could check the application of model cards.

  • This system would ensure that companies have an easy way to report any noticed infringement of the code of conduct by a model developer to the AI governance body. In the interest of transparency, any suspected violation should be made public by the authority.

  • No sanctions would be applied initially. However, after an observation period of a defined duration, if breaches of the codes of conduct concerning transparency requirements are repeatedly observed and reported without being corrected by the model developers, a sanction system could then be set up following a proper analysis and impact assessment of the identified failures and how to best address them.
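
To make the model-card idea more concrete, here is a minimal sketch of what such a card might look like in code. It is purely illustrative and not drawn from the joint paper: the ModelCard class, its field names, and the example values are assumptions based only on the items listed above (parameters, intended use and limitations, bias studies, red-teaming).

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class ModelCard:
    """Illustrative model card; the fields are assumptions, not a prescribed schema."""
    model_name: str
    developer: str
    parameter_count: int        # number of model parameters
    intended_use: str           # what the model is meant for, and for whom
    known_limitations: list     # e.g. domains where the model should not be relied on
    bias_study_results: list    # summaries of, or references to, bias studies
    red_team_findings: list     # security assessment (red-teaming) notes

    def to_json(self) -> str:
        # Serialise the card so it can be published and, if required,
        # filed with an AI governance body.
        return json.dumps(asdict(self), indent=2)


# Hypothetical example: publishing a card for a fictional foundation model.
card = ModelCard(
    model_name="example-foundation-model",
    developer="Example Lab",
    parameter_count=7_000_000_000,
    intended_use="General-purpose text generation for downstream applications",
    known_limitations=["May produce inaccurate statements", "Not evaluated for medical use"],
    bias_study_results=["Summary of demographic bias evaluation (hypothetical)"],
    red_team_findings=["Prompt-injection and misuse testing summary (hypothetical)"],
)
print(card.to_json())
```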

Much of this thinking resonates with DSD. Despite its emphasis on risk, the proposal recognises the need to focus on AI applications, and implicitly on the data use and access that fuel and direct AI in contexts of human engagement (risky or otherwise). In calling for “mandatory self-regulation”, the proposal echoes Braithwaite’s enforced self-regulation, which has informed some of the early DSD thinking. This approach accepts that stated principles and best practice can assist the great majority of powerful data stakeholders to engage in respectful and responsible data management. For the few that do not, there may be a need for degrees of compulsion from an external authority, at least in the early stages of data negotiation. DSD recognises the importance of universal principles and best-practice invocations as well as contextual ownership. It also anticipates the utility of external aids such as stewardship and data licensing to stimulate initial engagement and negotiation among stakeholders.

Another feature of the mandatory self-regulation model in line with DSD is the foundational requirement for openness in the way data is used in LLMs and other mechanisms of decision-making with direct influence on the interests of data subjects and their communities. The suggestion of ‘model cards’ is intended to concretise openness and transparency. Additionally, these devices would be a repository to which other stakeholders in a data ecosystem could return for information about ongoing data application. There is also the potential for this information frame to be reflective and to form the basis for ongoing assessment and evaluation, which would hopefully have positive impacts on trusted data spaces and trust relationships. The use cases conducted on DSD application emphasise the importance of reflection and review in operationalising DSD.

The mandatory self-regulation strategy, in relying on external oversight, possible intervention, and even last-resort sanctions, contrasts with DSD, which does not emphasise these aspects. However, the mutuality on which self-regulation stands is the initial and overwhelming commitment in this approach, as it is in DSD. And while the compulsory self-regulation proposal institutionalises openness processes and reporting requirements, it is always anticipated that, in particular contextual operations of DSD, stakeholders might look to more formalised transparency and accountability obligations.

Above all, this mandatory self-regulation recommendation echoes DSD in the belief that regulating technology alone is not sufficient if we want a more responsible digital environment. DSD goes further by offering all stakeholders, and data subjects in particular, active roles in bargaining over and co-producing trusted data spaces and respectful, responsible data engagements.

Another development in the AI governance groundswell can be found in the UK AI (Regulation) Bill. This proposed legislation would create an oversight body to coordinate AI regulation across government agencies and businesses employing AI technology. According to the Bill, "The functions of the AI Authority are to ensure that relevant regulators take account of AI; ensure alignment of approach across relevant regulators in respect of AI; undertake a gap analysis of regulatory responsibilities in respect of AI." The Authority is tasked with coordinating reviews of existing legislation, including in areas such as product safety and consumer protection, to gauge its fitness for addressing the challenges and opportunities presented by AI.

The Bill delineates principles for the AI Authority to consider in regulating AI, including "safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress." Of particular interest for understanding the alignment between this proposal and DSD are the notions of contestability and redress, which are expressly provided for in DSD through the capacity for negotiating and mutualising data management concerns among players.

Another direction in common with DSD is how the Bill mandates businesses involved in AI to be transparent, thoroughly test their AI systems, and comply with existing laws, including those related to data protection. It also emphasises the need for AI applications to be inclusive and non-discriminatory, catering to diverse socio-economic groups, the elderly, and disabled people. In a significant move to ensure the ethical use of AI, the Bill requires businesses to appoint a designated AI officer, responsible for "the safe, ethical, unbiased and non-discriminatory use of AI by the business." This specification of potential discrimination via AI (and implicitly via its use of personal data) is another way of understanding DSD’s interest in power asymmetries. The Bill recognises the importance of identifying power imbalance, and it approaches the dispersal of power in favour of vulnerable stakeholders by having AI (and data use) cater to the interests of those who might otherwise be disadvantaged.

In the US, senators have introduced a bill for AI regulation that intends to balance accountability and innovation, in part by recognising consumer interests and promoting education for consumers about AI systems. Of this initiative, a VP from IBM posted: “The Artificial Intelligence Research, Innovation, and Accountability Act of 2023 would promote U.S. leadership in innovative AI, protect consumers and citizens, and ensure a level playing field for both open source and proprietary AI. In the rapidly moving realm of artificial intelligence, this legislation takes a balanced approach to promoting innovation while establishing needed guardrails.” Whether these aspirations can be achieved through legislative initiatives remains to be seen, but what is compatible with the spirit of DSD is the recognition of consumer (data subject) interests and the need for this stakeholder group to be better informed through concerted education programmes.

DSD is compatible with these proposed developments in data governance, and with already-operational governance options. For instance, participatory stewardship, CSR and DSD complement each other in that they all provide pathways for mutualising interests, do not focus on data as property, recognise the vulnerable position of data subjects, confirm data integrity and value through sharing information about data access and use, create safe data spaces to generate trust among stakeholders, and above all promote respect and responsibility in data engagement.