Abstract
This article studies the notion of quantitative policies for trust management and gives protocols for realizing them in a disclosure-minimizing fashion. Specifically, Bob values each credential with a certain number of points, and requires a minimum total threshold of points before granting Alice access to a resource. In turn, Alice values each of her credentials with a privacy score that indicates her degree of reluctance to reveal that credential. Bob's valuation of credentials and his threshold are private. Alice's privacy valuation of her credentials is also private. Alice wants to find a subset of her credentials that achieves Bob's required threshold for access, yet has as small a total privacy value to her as possible. We give protocols for computing such a subset of Alice's credentials without revealing either party's above-mentioned private information. Furthermore, we develop a fingerprint method that not only allows Alice to independently and easily recover the optimal knapsack solution once the computed optimal value is given, but also enables verification of the integrity of that optimal value. The fingerprint method is useful beyond the specific authorization problem studied, and can be applied to any integer knapsack dynamic program carried out in a private setting.
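The optimization underlying the protocols can be illustrated in the clear. The following Python sketch (an illustration assuming plaintext inputs, not the paper's privacy-preserving protocol) computes a minimum-privacy-score subset of credentials whose points reach Bob's threshold, using a standard integer knapsack dynamic program; the backtracking step at the end is an ordinary plaintext substitute for the paper's fingerprint method, which recovers the subset when only the optimal value is disclosed. The function name and the example credential values are hypothetical.

    from math import inf

    def min_privacy_subset(credentials, threshold):
        """Plaintext knapsack sketch (not the paper's private protocol):
        pick credentials whose points sum to at least `threshold` while
        minimizing Alice's total privacy score.

        credentials: list of (points, privacy_score) pairs -- in the paper's
        setting the points come from Bob and the privacy scores from Alice;
        here both are visible, which is exactly what the protocols avoid.
        """
        n = len(credentials)
        # dp[i][t] = minimal privacy cost, using the first i credentials,
        #            of accumulating at least t points (t capped at threshold)
        dp = [[inf] * (threshold + 1) for _ in range(n + 1)]
        dp[0][0] = 0
        for i, (points, score) in enumerate(credentials, start=1):
            for t in range(threshold + 1):
                # option 1: do not reveal credential i
                dp[i][t] = dp[i - 1][t]
                # option 2: reveal credential i, covering part of the target
                prev = dp[i - 1][max(0, t - points)]
                if prev + score < dp[i][t]:
                    dp[i][t] = prev + score

        if dp[n][threshold] == inf:
            return None, inf  # Alice cannot reach Bob's threshold

        # Standard backtracking to recover the chosen subset; in the paper,
        # the fingerprint method plays this role when only the optimal
        # *value* is returned by the private computation.
        chosen, t = [], threshold
        for i in range(n, 0, -1):
            points, score = credentials[i - 1]
            prev_t = max(0, t - points)
            if dp[i][t] != dp[i - 1][t] and dp[i][t] == dp[i - 1][prev_t] + score:
                chosen.append(i - 1)
                t = prev_t
        return list(reversed(chosen)), dp[n][threshold]

    # Hypothetical example: points are Bob's private valuations,
    # privacy scores are Alice's private reluctance values.
    creds = [(4, 7), (3, 2), (5, 6), (2, 1)]
    print(min_privacy_subset(creds, threshold=7))  # -> ([2, 3], 7)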
Funder
Rutgers University Computing Coordination Council Pervasive Computing Initiative
Publisher
Association for Computing Machinery (ACM)
Subject
Safety, Risk, Reliability and Quality; General Computer Science
Cited by
21 articles.