Mar 1, 2022

10 things about Decentralized Identity today

Hi there,

I've been involved in the decentralized identity space for about five years now. I've played with uPort, Azure Active Directory Verifiable Credentials, MATTR, etc., and recently I've launched several PoC projects and won a prize at the Decentralized Identity Hackathon hosted by Microsoft.

I'd like another chance to talk about this at conferences, so I'm writing down my thoughts as they come. (Needless to say, this is entirely my own opinion and has nothing to do with the various organizations and businesses I'm involved with.)

  1. Still surprisingly misunderstood: it is not decentralized "identity".
    • DIDs are Decentralized "Identifiers", not Decentralized "Identities".
    • This is because identity = a set of attributes (ISO/IEC 24760-1:2019), so if we are talking about decentralized "identity", the distributed claims feature in OpenID Connect is far more decentralized.
    • So what has been decentralized? The identifier and its metadata are published on a distributed ledger (the spec does not mandate a blockchain, and some DID methods are not distributed at all), which reduces dependency on a single entity and, relatively speaking, reduces concerns about availability.
    • In other words, the W3C defines Decentralized "Identifiers", and the only thing the spec decides is how to write identifiers and their metadata (the DID Document).
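To make the "identifier plus metadata" point concrete, here is a minimal sketch of splitting a DID into its parts and the kind of DID Document metadata a resolver would return. All example values (the method, identifier, and key material) are made up for illustration:

```python
# Minimal illustration of what the W3C spec actually defines:
# a DID syntax ("did:<method>:<method-specific id>") and a DID Document
# (metadata such as verification keys). Example values are made up.

def parse_did(did: str):
    """Split a DID into its method and method-specific identifier."""
    scheme, method, method_specific_id = did.split(":", 2)
    if scheme != "did":
        raise ValueError("not a DID")
    return method, method_specific_id

did = "did:example:123456789abcdef"
method, unique_id = parse_did(did)

# A minimal DID Document: just the identifier plus metadata
# (here, a public key for signature verification).
did_document = {
    "id": did,
    "verificationMethod": [{
        "id": f"{did}#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": did,
        "publicKeyMultibase": "z6Mk-made-up-value",
    }],
}

print(method)     # example
print(unique_id)  # 123456789abcdef
```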
  2. Is Self-Sovereign Identity an Illusion?
    • What is sovereignty over data in the first place? As mentioned above, it is unclear whether "Decentralized" literally means decentralized or merely distributed, and what "self-sovereign" means still has no clear answer.
    • By publishing metadata (a DID Document) that includes an identifier and a public key for signature verification on a public, permissionless distributed ledger, it becomes difficult for any single entity to control the digital life or death of the subject. I understand the argument, but does that really amount to being "self-sovereign"?
    • The other point is the portability of Verifiable Credentials. As I will discuss later, using DIDs well can certainly reduce the strength of the binding between the IdP and the RP, and as a result Verifiable Credentials can be stored in software running on a smartphone, the so-called "Wallet", and carried around by the individual. In this sense, individuals can feel that they control their identities themselves.
    • In the end, all I can say is that self-sovereign identity is an idea and should be considered separately from the technology. Indeed, compared with current federation, Verifiable Credentials signed with DIDs and their associated keys, loosely anchored to distributed ledgers, do make me feel I can escape the "control" of business entities.
    • By the way, this may be the most important point: we should not overlook the fact that it is virtually impossible to carry (move) DIDs and signed VCs across methods as long as DIDs take the form "did:method name:unique identifier".
  3. Are Verifiable Credentials the real deal?
    • As described in the white paper by the Trusted Web Promotion Council of the Japanese Cabinet Secretariat, which I am helping a little with this year, the shift from implicit trust to explicit, verification-based trust will be the key to DX (digital transformation). The key phrase "Don't trust, verify" says it all.
    • However, what needs to be clarified here is the difference from the digital signatures on SAML and OpenID Connect assertions. There is certainly something new in using Verifiable Credentials for COVID-19 vaccination certificates, but in reality they are just JSON self-signed by Japan's Digital Agency, so in terms of tamper resistance they are no different from SAML assertions or OpenID Connect id_tokens. (Of course, I think the approach is very important in that it uses FHIR's standardized schema as the payload and enables interoperation at the application layer.)
    • So what is the point of combining VCs with DIDs? In fact, I cannot say there is no advantage. Compared with publishing the public key for signature verification via jwks_uri and the like, there are advantages in cases such as the following:
      • The operator does not have to worry as much about the availability of the IdP (even if the IdP is down, as long as the DID Document is published on a distributed ledger, controllability is reduced but relative availability is often improved).
      • Even if the IdP is shut down, users can still prove the authenticity of their signed credentials.
    • However, signature algorithms do get compromised, so it is not the case that a Verifiable Credential's signature can be trusted forever just because the public key is stored on a distributed ledger (where availability is relatively less affected by the circumstances of a single operator). In actual operation, VCs will likely need to be reissued at least once every few years. In that case, the argument that VCs are superior in terms of the IdP's business continuity is quite limited (i.e., credentials remain provable for only a few years after the IdP goes out of business), and they are unlikely to be a silver bullet against neglect by state IdPs, which is discussed in the context of so-called social inclusion.
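The availability argument in point 3 can be sketched as a key-lookup fallback: try the IdP's jwks_uri first, then fall back to the DID Document anchored on the ledger. Everything here is hypothetical; the two "sources" are simulated with plain dicts rather than network calls:

```python
# Sketch of the availability argument: resolving a signature-verification
# key from a (possibly offline) IdP vs. from a ledger-hosted DID Document.
# Names, DIDs, and key values are all made up for illustration.

IDP_JWKS = None  # simulate the IdP being shut down (jwks_uri unreachable)

LEDGER_DID_DOCUMENTS = {  # simulate DID Documents anchored on a ledger
    "did:example:issuer": {
        "id": "did:example:issuer",
        "verificationMethod": [
            {"id": "did:example:issuer#key-1", "publicKeyMultibase": "z6Mk-fake"}
        ],
    }
}

def resolve_verification_key(issuer_did: str):
    """Prefer the IdP's jwks_uri; fall back to the ledger's DID Document."""
    if IDP_JWKS is not None:                    # normal federation path
        return IDP_JWKS["keys"][0], "jwks_uri"
    doc = LEDGER_DID_DOCUMENTS.get(issuer_did)  # this path survives the IdP
    if doc is not None:
        return doc["verificationMethod"][0], "did_document"
    raise LookupError("no key material available")

key, source = resolve_verification_key("did:example:issuer")
print(source)  # did_document
```

The point is only structural: the verifier's ability to fetch key material no longer dies with the IdP, although (as noted above) this only helps for as long as the signature algorithm itself remains trustworthy.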
  4. Trust in VC issuers is a difficult issue
    • When issuing a VC, the issuer signs it with a private key associated with their DID. So who is behind that DID, and can it be trusted? That is the important point.
    • However, I'm sure I'm not the only one who feels that keywords like "distributed" and "decentralized" ring hollow when trust ultimately depends on something outside the DID/VC model.
    • And in the end, the most questionable part is the reliability of the resolver that looks up the DID Document from a DID. Of course, open implementations such as the Universal Resolver can be trusted to a certain extent in terms of transparency, but ultimately we can only "trust" the integrity of the implementer of each driver and of the business running the actual instance.
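The resolver concern above comes from the dispatch pattern a Universal-Resolver-style service uses: the DID method selects a driver, and the caller implicitly trusts whichever driver handles that method. A minimal sketch, with local stand-in drivers rather than real implementations:

```python
# A Universal-Resolver-style dispatcher in miniature: the DID method
# selects a driver, and the caller must trust that driver's honesty.
# Both drivers below are local fakes for illustration only.

def example_driver(method_specific_id: str) -> dict:
    # A real driver would read a ledger; this one fabricates a document,
    # which is exactly the kind of behavior a caller cannot detect.
    return {"id": f"did:example:{method_specific_id}"}

def web_driver(method_specific_id: str) -> dict:
    return {"id": f"did:web:{method_specific_id}"}

DRIVERS = {"example": example_driver, "web": web_driver}

def resolve(did: str) -> dict:
    """Dispatch resolution to the driver registered for the DID's method."""
    _, method, msid = did.split(":", 2)
    driver = DRIVERS[method]   # trust shifts to this driver's implementer
    return driver(msid)

print(resolve("did:example:alice")["id"])  # did:example:alice
```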
  5. No matter how verifiable they are, they are not always trusted.
    • In the first place, a trust framework is not something that can be confined to the IT world.
    • In fact, no matter how much you insist that a digitally signed data set is tamper-proof, humans are heuristic creatures: people tend to trust someone more when they are presented, in person, with a "physical ID card made of paper or plastic that they have seen before".
    • This touches the essence of DX. It is fine to define the stages from Digitization to Digitalization, but in reality I think there is a big gap between the two. We all love PDF and Excel, and we are still at the stage where we believe we can keep the business running by falling back to paper as a last resort, so I don't think we can move on to the Digitalization stage, where paper is no longer a prerequisite.
    • In this sense, our biggest mission may be to find, as soon as possible, a use case that fully exploits verifiability. The Trusted Web Promotion Council mentioned earlier is expected to play a major role here.
  6. They are not suitable for KYC, or identification and identity verification
    • In the end, the essence of identity proofing is what NIST SP 800-63A calls:
      • Resolution
      • Validation
      • Verification
    • However, if you think about it carefully, the essence of validation is an inquiry to the authoritative source (this is also reflected in how identity proofing refers back to authoritative sources), so proving that the evidence has not been tampered with is not enough.
    • It is true that the revocation specification (Status List 2021) is being standardized, so validity can be confirmed, but that does not guarantee the reliability of the KYC process at the issuer of the evidence; the reliability of the authority matters more than the tamper resistance of the evidence itself.
    • Also, verification (confirming that the entity presenting the evidence is the entity listed in it) is not possible.
    • If that is the case, it would be more realistic to use them as proof of qualifications, as OpenBadge does, rather than for identity verification.
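Mechanically, the Status List 2021 check mentioned above boils down to decoding a compressed bitstring and testing one bit at the credential's index. A simplified sketch (the real spec base64url-encodes a GZIP-compressed bitstring; the list size, index, and bit order here are illustrative assumptions):

```python
import base64
import gzip

# Issuer side: build a status list where bit 3 is set (credential 3 revoked).
bits = bytearray(16)          # 128 status bits, all zero (= not revoked)
bits[0] |= 0b00010000         # set bit index 3, assuming MSB-first bit order
encoded_list = base64.urlsafe_b64encode(gzip.compress(bytes(bits))).decode()

# Verifier side: decode the list and test the bit at the credential's index.
def is_revoked(encoded: str, index: int) -> bool:
    raw = gzip.decompress(base64.urlsafe_b64decode(encoded))
    byte, offset = divmod(index, 8)
    return bool(raw[byte] & (0x80 >> offset))   # MSB-first within each byte

print(is_revoked(encoded_list, 3))  # True
print(is_revoked(encoded_list, 4))  # False
```

Note that this only answers "is the credential still valid?"; it says nothing about how carefully the issuer performed KYC in the first place, which is exactly the limitation described above.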
  7. In the end, the biggest advantage is that it is possible to reduce the degree of binding between the IdP and the RP.
    • In that case, why not OpenBadge? However, considering that most current OpenBadges are Hosted rather than Signed (authenticity and validity are verified by querying the Issuer), there is a certain advantage in terms of the degree of binding between systems, at least until the Signed type becomes popular.
    • In other words, in the end, the best use is to reduce the degree of coupling between systems (between the OP and the RP).
    • In fact, when we discussed use cases at IIW last year, my suggestion that this might reduce the management burden (licensing, infrastructure sizing, availability) of a university's ID infrastructure seemed to resonate the most with the audience (at least with my friend Vittorio).
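The Hosted-versus-Signed coupling difference can be sketched directly: a Hosted badge is verified by querying the Issuer at verification time (runtime coupling), while a Signed badge is verified offline against key material. The "signature" below is a deliberately toy hash construction, and all endpoints and payloads are made up:

```python
import hashlib

SECRET = b"issuer-signing-key"  # toy secret, stands in for a real signing key

# Hosted badge: authenticity is checked by querying the Issuer at verify
# time, so verification couples the RP to the Issuer's uptime.
ISSUER_HOSTED_BADGES = {"badge-42": {"recipient": "alice"}}  # fake endpoint

def verify_hosted(badge_id: str) -> bool:
    return badge_id in ISSUER_HOSTED_BADGES   # fails if the issuer is down

# Signed badge: a toy "signature" (hash over secret + payload) verified
# offline, with no call back to the issuer.
def sign(payload: str) -> str:
    return hashlib.sha256(SECRET + payload.encode()).hexdigest()

def verify_signed(payload: str, signature: str) -> bool:
    return sign(payload) == signature

badge_payload = '{"recipient": "alice"}'
signature = sign(badge_payload)

print(verify_hosted("badge-42"))                # True (while the issuer is up)
print(verify_signed(badge_payload, signature))  # True, even if the issuer is gone
```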
  8. What is the actual state of standardization?
    • It still looks like chaos, with DIDComm being pushed by the Hyperledger folks at the Decentralized Identity Foundation and the Trust over IP Foundation, and SIOPv2 and OIDC4VP at the OpenID Foundation.
    • In the first place, whether to use JSON-LD or plain JSON for VCs is itself a point of endless debate.
    • In the midst of this, various vendors are launching businesses, implementing specifications that are still at a delicate stage and releasing sample code; developers around the world copy them, creating an even more chaotic situation and a world of "what is standardization, anyway?"
  9. Are Zero-Knowledge Proof (ZKP) and Selective Disclosure the real deal?
    • Zero-knowledge proofs have been studied for a long time, for example in U-Prove (acquired by Microsoft) and IBM's Idemix, but they are still far from practical use. (Come to think of it, I fondly remember testing the private preview of Windows Identity Foundation with U-Prove's test implementation more than ten years ago.)
    • ZKP and selective disclosure are often conflated, but in the end what is needed is selective disclosure. BBS+ signatures are doing a good job in this area, but some issues remain (e.g., the limited scope of what can be hidden).
    • The usual pitch is "something impossible in the physical world that would be nice to have in the digital world": when you show your driver's license to prove your age at the entrance of a bar, the guard also learns information other than your age, and technology in this area is expected to mature enough to address that. But is it really a problem if the guard learns your name in addition to your age? I think we need to discuss the use cases more.
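One concrete form of selective disclosure, sketched loosely here without real signatures, is the salted-hash pattern (the approach SD-JWT-style credentials take rather than BBS+): the issuer commits to a salted hash of each claim, and the holder reveals only the chosen claims, with their salts, so the verifier can match them against the signed commitments. Claim names and values are invented for the bar example above:

```python
import hashlib
import secrets

def commit(claim_name: str, value: str, salt: str) -> str:
    """Hash commitment to one claim; the salt blocks dictionary guessing."""
    return hashlib.sha256(f"{salt}|{claim_name}|{value}".encode()).hexdigest()

# Issuer: commit to each claim separately (in a real credential, the list
# of commitments is what gets signed, not the raw claims).
claims = {"name": "Alice", "birthdate": "1990-01-01", "over_20": "true"}
salts = {k: secrets.token_hex(8) for k in claims}
commitments = {k: commit(k, v, salts[k]) for k, v in claims.items()}

# Holder: disclose only the age claim to the bar, keeping name hidden.
disclosure = {"claim": "over_20", "value": claims["over_20"],
              "salt": salts["over_20"]}

# Verifier: recompute the hash and match it against the signed commitment.
ok = commit(disclosure["claim"], disclosure["value"],
            disclosure["salt"]) == commitments["over_20"]
print(ok)  # True
```

The verifier learns that the holder is over 20 and nothing else, which is exactly the driver's-license scenario; whether hiding the name is actually worth this machinery is the use-case question raised above.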
  10. In the end, has it solved any of the world's problems?
    • The problems often cited are:
      • Privacy
      • Verifiability
    • However, looking at the points above, I can't say these problems have been solved.
    • Rather, as I mentioned above, the biggest advantage is the reduction of administrative and infrastructure costs by reducing the degree of binding between the OP and the RP.

However, I believe that this technology is very interesting and has the potential to change the world, so I will continue to study it.
