[Illustration developed for the Youth Perspectives on Digital Self-Determination initiative as part of the EduTech use case.]
Drawing on real-life use-case experience, this blog grounds the impediments to participation and inclusion by working through some of the reasons DSD may or may not be for you. Information deficit, distrust, and disillusionment with other governance models are barriers to any data governance model, and it helps to see some ways around the recurring challenges to interactive data dependencies and to genuine involvement in managing your data.
DSD crucially depends on the inclination of data subjects (and their communities) to take an active role in controlling their data. Because we have become so digitally connected, and produce so much data without realising (or perhaps caring), the first step for DSD is an awareness and engagement exercise, of which these blogs are one small part. The reasons data subjects might seem uninterested in data management range from profound feelings of ineffectiveness to a lack of knowledge and therefore of interest. It would be wrong to assume that data subjects uniformly distrust data use and that this distrust simply has to be overcome along the way. We are often happy with, or give little thought to, the data relationships we develop with tech and info providers. So, is DSD awareness at risk of stirring up data paranoias that are unnecessarily disturbing?
The pace of data dependency and unrestrained data use is such that, if representative and informed governance is to be expected in the broader management of our daily lives, then involvement with how our data is accessed and used should feature prominently in that expectation. We cannot, for instance, worry about the way generative AI might rule future communication modes and not require that the personal data on which it feeds be managed significantly by those who create and interconnect it.
Consider motivation rather than inaction. For those of us who rely on data primarily through our smartphone apps (a reliance said to create a culture of convenience around digital tech and data sharing), it is too easy to slump back on the sofa and say there is nothing we can do in the face of the info giants. Well, there is. If DSD can provide the opportunity and capacity to win back your data, then the main concerns are motivation, willingness and successful engagement.
Motivation vs. cultures of convenience? Much of the technical detail of the digital world eludes our understanding. Without a strong grasp of mathematics, the complexity of algorithms remains beyond our comprehension, leading us to rely on the power of digital amenities and the Internet of Things without much thought. Yet while an algorithm itself may be complex, we can still understand the decisions it enables and their impact on us. So it is with caring about how our personal data is accessed and used.
This blog shows how linking powerful data players to susceptible data subjects, by first recognising the power imbalances standing in the way of responsible and respectful data access and management, opens up a new way of seeing data exchange: not so much about protection as about positive engagement and fair outcomes. This emphasis, argued here as the positive/proactive dimension of DSD, has been woven through the earlier blogs. Power is discussed not as some radical reforming of data dependencies but simply in terms of fair and equitable discussion and negotiation between stakeholders in any uneven data ecosystem. (1)
If we have little interest in challenging data use and the way frontier technologies are developing, then those at the front end of AI and big data will not have the benefit of alternative thinking about the pace of AI development and the accessing and use of personal data. As earlier efforts to attribute and distribute ethical responsibility for AI ecosystem decision-making showed, those engaged in AI development down the line often felt that market pressures and confusion over terms made considering ethics an unproductive use of their time. More recently, the fear of generative AI and data abuse has become an unavoidable part of tech culture, and our expectation for its governance reaches across political and technical lobbies. OpenAI's recent radical shakeup, driven in part by what is said to be the suspicion of some board members that AI could one day destroy humanity, is only the tip of an iceberg of anxiety about frontier tech and data dependencies. It reveals that power plays between interests committed to AI expansion, and corporate oversight intended to give more time to consider those developments, can even overturn conventional understandings of corporate governance. Against this, DSD offers a simple pathway back to connecting people with their data and away from the tech domination of data personhood. (2)
In making a case for DSD it is important not to oversell its capacity to solve all anxieties associated with AI and data use. Too often governance policy (like the technology it would govern) avoids recognising and highlighting the failures it might encounter. For instance, personal data protection (PDP) regimes direct the governance priority towards rights of privacy. But if data is used and reused in contexts where no such rights exist, where people transact personal data as if privacy were a lesser concern, or where private space does not feature in the way societies and communities interact, then other concerns not covered by PDP will remain outside the governance frame.
DSD will not function if:
- stakeholders initially avoid or reject invitations to come together around data practices;
- trust cannot be generated between stakeholders in any data ecosystem in which DSD could be applied;
- parties to DSD are not motivated to move from self-interest to mutualising data control and benefits; and
- data generation and use is such that many data subjects vie for control of data, data messages are layered and complex in their communication, or interests cannot be easily identified and negotiated.
Blogs 1 and 2 dealt with the first three of these challenges, but it is worth reiterating some more applied responses for consideration.
DSD Adoption: Use Cases
Migration
Our migration use case commenced with the stark divide between powerful and powerless stakeholders. Refugees and other compelled migrants may be stateless or without the protections of clearly defined citizenship; for them, motivation to participate is not really a practical deliberation. In this use case, if DSD were to work with protocols to ensure more equitable data engagement, then aspiring migrants would need the individual and organisational assistance of NGOs and international organisations even to come to the negotiation table with large data holders.
Open Finance
The absence of trust relationships between data stakeholders was an initial feature of the open finance use case. Even when representatives from financial institutions could accept the possible advantages of DSD, they talked not in terms of customer benefit but of how DSD could fit with their already-operating internal governance frames. Part of the problem centred on the absence of representative customer organisations that could speak to the big banks, leaving a myriad of individual data subjects each needing to advance their singular data interests.
Disability
In the disability use case, discussion quickly turned to market issues beyond data control. Data subjects seemed more concerned that big tech firms create applications and platforms recognising and servicing the unique needs of disability constituencies. Each use case revealed different reasons for coming together and different hoped-for outcomes.
Rather than solving issues of trust, motivation and initial engagement, the use cases highlighted the importance of different contexts for data access and use, the various prevailing interests, and the particular dynamics that operate within different data priorities. It became obvious as the use cases unfolded that, while DSD operated with certain common principles and stages, its ownership was context-specific, the relationships it generated created and maintained different understandings of trust, and the outcomes sought varied. The clear constant was power imbalance and the underpinnings needed to see it resolved in favour of data subjects and their communities. The EduTech use case helped us understand how this might be achieved.
EduTech
Schools produce data about students, teachers and learning programmes, and they work with different power relations and distinct objectives for using data. The use case looking at this type of data and its exchange had to account for pre-existing hierarchies of power and already established (and challenged) trust relationships: between teachers and school administrations, between students and teachers, and between parents and all of these. Introduce AI-assisted learning programmes and new considerations of power and trust enter the data mix.

The use case identified two particular relationships where tech/data and human players needed to account for power and trust. Where the student is the data subject, and AI-assisted technology produces new data through the way students use it and teachers process and assess it, there needs to be a shift of power from teacher to student, trust built in consequence, and negotiation about how the interests of stakeholders can be made more mutual. Where teachers are the data subjects and school administrators the more powerful data harvesters, similar negotiations should be possible, and DSD could facilitate the required negotiations of power and trust by providing safe/trusted spaces in which the overall data interests in the school experience could be mutualised for all stakeholders. This intention is only achievable if the parties involved in DSD share a broadly similar view about what schools should do, and are willing to talk through how that can be achieved, partly through respectful and responsible data use. In this endeavour, students cannot be side-lined on the basis of capacity; they are data subjects with special needs, and teachers and parents cannot simply assume data agency without first engaging in meaningful negotiations with students in trusted digital spaces.
Back to what would motivate such transactions: how can these negotiations of interest commence and progress under the umbrella of DSD? Motivation for stakeholders to participate in DSD regimes depends not only on greater openness about data use and re-use, but also on convincing them that their individual interests, moderated through negotiating data power, can produce mutualised benefit. Open banking stakeholder trade-offs are evidence that such a process is possible even between otherwise intractable data parties.
As a bottom-up process of data governance, DSD crucially involves the dispersal of influence and control to data subjects. If this is to be achieved, the mindsets of powerful stakeholders need to change. Without such a transition in thinking, data subjects will not trust that engagement over data, and the information it produces, is the whole story. The more an atmosphere of openness is achieved, the more trust will be generated, and power will be dispersed as a living market/social consequence. The pay-off for powerful stakeholders will be realised in more openness from data subjects, a greater willingness on their part to share data in return, and acceptance of transparency as genuine. As said before, reputation is a valuable market commodity, and the trust it generates will enable more respectful (and profitable) data exchange: a quantifiable benefit for any corporate balance sheet.
Returning, then, to the upheaval at OpenAI and what might be behind it: there is sufficient indication from what has been revealed about the sacking of Sam Altman that key players in the organisation's management felt they were without sufficient candour and openness from the creative powerhouses driving generative AI towards 'artificial general intelligence'. Trust between these stakeholders evaporated and a power clash eventuated. Had DSD been employed, and had management seen themselves as data subjects requiring information about how developmental data was being accessed and used, the 'star creators' would have needed to concede information and some degree of power, so that trust could be rekindled and an organisational culture of teamwork towards a common vision made possible. It didn't go that way.
Inherent within any dynamic, inclusive, participatory governance model like DSD is the necessity for openness and accountability from the data powerful to the data powerless. For its own efficacy, the model also requires constant evaluation by participants of the ways in which it has engendered (or endangered) trust bonds within the trusted data space. Data openness is a foundational stage in developing a DSD conversation. Accountability can become a reality in the context of data openness if it is designed to offer conflict empowerment as another motivation for mutualising the interests of stakeholders and negotiating responsible and respectful data access. The new UK AI Bill talks of 'contesting' interests as an important consideration in AI governance, and DSD reflects this approach by offering the potential to mutualise different individual interests in data management through negotiation. DSD pathways recognise the need for time to identify when conversations are strained, negotiations break down, or trust requires revisiting.
Will DSD be all plain sailing once it reaches a critical mass of operational common practice? This is not certain: the likelihood that some DSD projects will not succeed, or will not sufficiently satisfy all stakeholders, is always present when the achievement of data governance is left in the hands of those who will occasionally revert to self-interest and value immediate benefit over trust. Accepting this potential, DSD requires not only ongoing critical evaluation from its participants but also built-in dispute resolution opportunities, so that conflicts of interest can be identified and resolved in time, before they fester and undo the positive achievements of trust and power dispersal.
We live in exciting times for the governance of AI and data use. The preference for taking data governance away from individuals and handing it back to top-down compliance with ethical principles, or into the hands of external agencies, no longer answers the crucial requirements of respectful and responsible data access. If these blogs have sparked your interest in DSD, they are only a small window on how it works. Keep in touch with this website for more and different opportunities to learn about, and exchange views on, the development of DSD applications. Finally, in the words of Altman, 'the mission continues'.
***
Notes:
(1) In keeping with the intention to use simple terms, a data ecosystem just means a space where different people with different roles and interests in data are connected.
(2) Data personhood here means more than the digital personalities we create in many different digital spaces; it refers to the way we become identified through data in so many aspects of our life-spaces and life experiences.