Litentry this week: Explain Litentry’s architecture like I’m five

:sparkles: This week we published a new article, “Developing a DID Aggregator on Blockchain (Part Ⅱ)”, which explains the technical architecture of Litentry. It took us a long time to design and finalize the concepts of the protocol, and then to refine, write, and translate the article (well, Rome was not built in a day🏛). In this weekly report, I’ll give you an “explain Litentry’s architecture like I’m five” for those who need a concise explanation of the concepts.

First of all, you probably already know that Litentry is an identity aggregator. What this aggregator does is to:

  1. Obtain identity scores from multiple data analysis platforms;
  2. Compute the aggregated identity score according to a customizable weighting algorithm;
  3. Encrypt the data and shield the computation with a TEE to enhance privacy.

For example, if we want to find “loyal users of Polkadot”, we first obtain identity scores for user A from multiple data analytics platforms:

  • Analyst 1 indexes data from Parity nodes and finds that user A holds 1000 DOT, giving user A 1000 points, weighted 0.008;
  • Analyst 2 reads on-chain governance events on the Polkadot network and finds that user A participated in on-chain governance twice, giving user A 2 points, weighted 3.0.

Finally, Litentry compares the identity scores provided by the different data analysts, weights the targeted data, and calculates an aggregated identity score, which in our example is (1000 * 0.008 + 2 * 3.0). As such, user A gets a comprehensive score of 14 in the dimension of “Polkadot project loyalty”. Of course, these scores are only hypothetical examples.
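To make the arithmetic concrete, here is a minimal sketch of that weighted aggregation in Rust. The struct and function names are purely illustrative and not part of the Litentry codebase:

```rust
/// One analyst's verdict on an identity dimension (illustrative type only).
struct AnalystScore {
    score: f64,  // raw score reported by the data analyst
    weight: f64, // weight assigned to that analyst's dimension
}

/// Weighted sum over all analyst scores, as described above.
fn aggregate(scores: &[AnalystScore]) -> f64 {
    scores.iter().map(|s| s.score * s.weight).sum()
}

fn main() {
    let scores = [
        // Analyst 1: holds 1000 DOT -> 1000 points, weight 0.008
        AnalystScore { score: 1000.0, weight: 0.008 },
        // Analyst 2: 2 governance participations -> 2 points, weight 3.0
        AnalystScore { score: 2.0, weight: 3.0 },
    ];
    // 1000 * 0.008 + 2 * 3.0 = 14
    println!("Aggregated score: {}", aggregate(&scores));
}
```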

You may have noticed that the Litentry Runtime itself doesn’t do much data analysis, but rather works with multiple data analysis platforms; this considerably reduces the load on the Litentry Runtime. It means that when we do identity aggregation, we involve the third-party data analysts mentioned above, as well as the platforms that provide the source identity data.

In summary, the protocol architecture consists of three layers:

  1. Source data layer. The platforms from which identity analysts obtain source data, such as Etherscan, The Graph, OnFinality and other data providers.
  2. Address analysis layer. External services that provide data analysis, such as Nansen, Chainalysis, our upcoming product Litentry Whitelisting and other address analysis platforms.
  3. Identity aggregation layer. Litentry generates the address relations belonging to the same identity, obtains the corresponding address analysis data from the address analysis layer, and carries out the weighted computation.

In consideration of privacy and data security, Litentry encrypts the identity calculation tasks and identity data, and completes the calculation in an off-chain TEE worker so that the data cannot be observed from outside.
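Putting the three layers together, the sketch below shows how an identity aggregation request could flow through them. All trait and type names here are hypothetical illustrations assumed for this example, not Litentry’s actual interfaces:

```rust
/// Address analysis layer: an external service (e.g. Nansen, Chainalysis,
/// Litentry Whitelisting) that turns source data into a score per address.
trait AddressAnalyst {
    fn score(&self, address: &str) -> f64;
    fn weight(&self) -> f64;
}

/// A mock analyst that always reports a fixed score (illustration only).
struct MockAnalyst {
    fixed_score: f64,
    fixed_weight: f64,
}

impl AddressAnalyst for MockAnalyst {
    fn score(&self, _address: &str) -> f64 {
        self.fixed_score
    }
    fn weight(&self) -> f64 {
        self.fixed_weight
    }
}

/// Identity aggregation layer: take the addresses linked to one identity,
/// query each analyst, and compute the weighted sum. In the real protocol
/// this step runs encrypted inside the off-chain TEE worker.
fn aggregate_identity(addresses: &[&str], analysts: &[Box<dyn AddressAnalyst>]) -> f64 {
    let mut total = 0.0;
    for addr in addresses {
        for analyst in analysts {
            total += analyst.score(addr) * analyst.weight();
        }
    }
    total
}

fn main() {
    let analysts: Vec<Box<dyn AddressAnalyst>> = vec![
        Box::new(MockAnalyst { fixed_score: 1000.0, fixed_weight: 0.008 }),
        Box::new(MockAnalyst { fixed_score: 2.0, fixed_weight: 3.0 }),
    ];
    // One address linked to user A's identity (placeholder value).
    println!("{}", aggregate_identity(&["address-of-user-A"], &analysts));
}
```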

Litentry architecture diagram

:point_right: Read the full article if this sparks your interest.

Testnet: NFT pallets, SubstraTEE setup

  • Added runtime benchmarks for NFT pallets
  • Updated NFT pallet to track latest upstream dependencies
  • Investigated SubstraTEE and environment setup
  • Refactored integration test
  • Added documentation deployment support for pallets on CI
  • Substrate benchmark testing
  • Prepared parachain testing

Misc: Upgraded validators, registrar dependencies

  • Upgraded Polkadot & Kusama Validators
  • Refined Chainbridge/validator scripts
  • Updated dependencies for Kusama-registrar
  • Fixed smart contract invocation problem
  • Web app: Next.js migration
  • Developed token migration UI

P.S. You may be wondering what “dependency” means. A dependency is upstream code that our own code builds on. Since we sometimes use code from other projects, we need to keep an eye on their code and its updates.

Beincrypto interview: Application of Identity Aggregation in KYC authentication

Spotted: Hanwen in a nice suit

Berlin-based blockchain media Beincrypto interviewed Hanwen about his views on decentralized identity replacing traditional KYC in certain scenarios. Here are some excerpts:

Why couldn’t we use the associated on-chain user history as KYC material? Since most of the information is public and transparent, the only problem is that it is pseudonymous and scattered. But Litentry’s identity aggregation protocol solves this problem.

[Identity aggregation] is like creating a different you in the crypto room, but no other person can see the full picture of it as the identity association information is securely stored in the Trusted Execution Environment.

This identity will be an alternative to KYC: a user can present it to any DApp. On an IDO platform, for example, the DApp sees that the user already has a lot of activity on Ethereum, holds many cryptocurrencies, and has not sold them. The user is seen as a strong holder and a targeted user. The DApp then wants to whitelist the user and distribute a larger allocation to them rather than to newly created wallet addresses, which may be bots.

Litentry x DAFI AMA: DID in staking

Litentry spoke at an AMA hosted in the DAFI community to interact with its members and answer questions about DID.

Q: What is the future of identity data solutions?

A: When we talk about “identity data solutions” in the decentralized web, we are really talking about data integrity and data analysis power. As we can foresee, the future web will consist of more decentralized networks and blockchains, and more anonymous user accounts will exist along with more generated user data. To maintain a dynamic user profile over multiple networks, the bottleneck is system throughput, which can be slowed down by message communication efficiency and computing resources, but mostly by data integrity and data analysis power.

The problem of data availability is that getting verified data from blockchains relies on running full nodes, which requires substantial computing resources and thus lowers data availability and the level of decentralization. Solutions at this layer that improve blockchain data query efficiency will be important in the future. The problem of data analysis power is the ability to analyze raw blockchain data and turn it into human-readable, identity-related data. This relies on the accumulation of data analysis methods, and Litentry is a competitive framework here. A marginal effect will emerge as the Litentry Network is used more and more: identity in the decentralized web will become smarter.

Litentry x MathChain: DID in DeFi

Litentry partners with MathChain to explore possibilities for adopting Decentralized Identity (DID) in DeFi and cross-chain markets. Litentry is able to aggregate identities across various blockchains and provide a better DID service for dApps in the Clover ecosystem. At the same time, we will explore developing a brand-new Litentry interface that supports identity aggregation, display and management, such as showing a user’s NFT collection, medals rewarded by other DApps, etc. Litentry will also integrate the MathChain account system, allowing MathChain’s identity data to be indexed by the Litentry Network.

About Litentry

Litentry is a Decentralized Identity Aggregator that enables linking user identities across multiple networks. Featuring a DID indexing protocol and a Substrate-built distributed DID validation blockchain, Litentry provides a decentralized, interoperable identity aggregation service that mitigates the difficulty of resolving agnostic DID mechanisms. Litentry provides a secure vehicle through which users manage their identities and dApps obtain real-time DID data of an identity owner across different blockchains.

Stay in touch with us through