Developing a framework for trustworthy...
This paper proposes a framework for developing a trustworthy artificial intelligence (AI)-supported knowledge management system (KMS) by integrating existing approaches to trustworthy AI, trust in data, and trust in organisations. We argue that improvement in three core dimensions (data governance, validation of evidence, and reciprocal obligation to act) will lead to the development of trust in the three domains of the data, the AI technology, and the organisation. The framework was informed by a case study implementing the Access-Risk-Knowledge (ARK) platform for mindful risk governance across three collaborating healthcare organisations. Subsequently, the framework was applied within each organisation with the aim of measuring trust to date and generating objectives for future ARK platform development. The resulting discussion of ARK and the framework has implications for the development of KMSs, the development of trustworthy AI, and the management of risk and change in complex socio-technical systems.
Additional Information
Field | Value |
---|---|
Data last updated | May 18, 2022 |
Metadata last updated | May 18, 2022 |
Created | unknown |
Format | |
License | No License Provided |
Media type | application/pdf |
Size | 836,795 bytes |
Datastore active | False |
Has views | True |
Id | e2eeeb6f-1168-4b5a-9e85-2649b86bd02f |
Package id | 838efbb2-26a9-40bf-a889-d513de8dcaea |
Position | 0 |
State | active |
Url type | upload |
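
The Id and Package id fields above are CKAN resource and dataset identifiers. Below is a minimal sketch of retrieving this record programmatically through the standard CKAN Action API (`resource_show` and `package_show`); the portal base URL is an assumption and should be replaced with the actual host serving this dataset.

```python
# Minimal sketch (not from the paper): fetch this resource and its parent dataset
# via the CKAN Action API, using the identifiers listed in the table above.
import requests

BASE_URL = "https://data.example.org"  # assumption: replace with the hosting CKAN portal
RESOURCE_ID = "e2eeeb6f-1168-4b5a-9e85-2649b86bd02f"
PACKAGE_ID = "838efbb2-26a9-40bf-a889-d513de8dcaea"

# resource_show returns the metadata shown above (format, size in bytes, url, ...).
resp = requests.get(f"{BASE_URL}/api/3/action/resource_show",
                    params={"id": RESOURCE_ID}, timeout=30)
resp.raise_for_status()
resource = resp.json()["result"]

# package_show returns the parent dataset record, including its full title and abstract.
resp = requests.get(f"{BASE_URL}/api/3/action/package_show",
                    params={"id": PACKAGE_ID}, timeout=30)
resp.raise_for_status()
package = resp.json()["result"]

print(package["title"])
print(resource["url"], resource["size"])  # direct link to the uploaded PDF (url_type "upload")
```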