consensys / defi-score

DeFi Score: An open framework for evaluating DeFi protocols

Home Page: https://defiscore.io

License: Other

Python 99.93% JavaScript 0.05% Shell 0.02%
ethereum blockchain consensys defi

defi-score's Introduction


The DeFi Score is a framework for assessing risk in permissionless lending platforms. It's a single, consistently comparable value for measuring protocol risk, based on factors including smart contract risk, collateralization, and liquidity.

We encourage the Ethereum community to evolve the methodology, making it more effective and easier to use.


Example Scores

We've provided a few example scores with a breakdown of each component. Although the underlying methodology is complex, it should be simple for a user to understand.

DeFi Score Examples

Implementation

Want to run the numbers yourself? Check out the implementation instructions.

Components

The DeFi Score methodology is organized into three categories: Smart Contract Risk, Financial Risk, and Centralization Risk.

(image: DeFi Score components breakdown)

I. Smart Contract Risk

  • Smart Contract Security (45%)

    Errors, bugs and unexpected outcomes in smart contracts can cause real financial harm. These risks can be minimized by proactive code audits and formal verification from reputable security firms.

    Our model assesses code security by looking at six pieces of off-chain but public data:

    1. Time on Mainnet: Normalized time since the protocol first launched on mainnet
    2. No Critical Vulnerabilities: No critical vulnerabilities have been exploited
    3. Four Engineer Weeks: Four or more engineer weeks have been dedicated to auditing the protocol
    4. Public Audit: Has the audit report been made public?
    5. Recent Audit: Has there been an audit in the last 12 months, or have no code changes been made since the last audit?
    6. Bounty Program: Does the development team offer a public bug bounty program?
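As an illustration only (the 50/50 split and four-year normalization ceiling below are hypothetical, not the official weights), the six signals could be combined into a sub-score like this:

```python
def security_score(days_on_mainnet, no_critical_vulns, four_engineer_weeks,
                   public_audit, recent_audit, bounty_program,
                   max_days=1460):
    """Combine the six signals into a 0-10 sub-score. The equal weighting
    of the boolean checks and the four-year ceiling are assumptions."""
    time_component = min(days_on_mainnet / max_days, 1.0)   # signal 1
    checks = [no_critical_vulns, four_engineer_weeks, public_audit,
              recent_audit, bounty_program]                  # signals 2-6
    checklist_component = sum(checks) / len(checks)
    return round(10 * (0.5 * time_component + 0.5 * checklist_component), 2)
```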

II. Financial Risk

  • Collateral (20%)

    While all of the current platforms use very conservative collateral factors, the highly volatile nature of crypto assets means that these high collateral factors may still be insufficient. Collateral Risk is assessed by looking at two pieces of data, both derivable from on-chain data. The first data point is the utilization rate. The second data point is an analysis of the collateral portfolio using the CVaR (Conditional Value at Risk) model, also known as the Expected Shortfall model.
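A minimal sketch of the Expected Shortfall calculation, using made-up daily returns for a hypothetical collateral portfolio:

```python
def cvar(returns, alpha=0.95):
    """Conditional Value at Risk (Expected Shortfall): the average loss over
    the worst (1 - alpha) share of outcomes."""
    losses = sorted((-r for r in returns), reverse=True)  # largest losses first
    tail_count = max(1, int(len(losses) * (1 - alpha)))
    tail = losses[:tail_count]
    return sum(tail) / len(tail)

# Made-up daily returns for a hypothetical collateral portfolio.
daily_returns = [0.02, -0.01, 0.005, -0.08, 0.01, -0.03, 0.015, -0.12,
                 0.0, 0.04, -0.02, 0.01, -0.06, 0.03, -0.005, 0.02,
                 -0.01, 0.0, -0.09, 0.01]
es_95 = cvar(daily_returns)  # average loss on the worst 5% of days
```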

  • Liquidity (10%)

    The currently scoped platforms all attempt to incentivize liquidity by using dynamic interest rate models which produce varying rates depending on the level of liquidity in each asset pool. However, incentivized liquidity does not mean guaranteed liquidity. The absolute level of liquidity is used.

III. Centralization Risk

  • Protocol Administration (12.5%)

One of the biggest contributors to centralization risk in DeFi protocols is the use of admin keys. Admin keys allow protocol developers to change parameters of their smart contract systems, such as oracles, interest rates and potentially more. Protocol developers' ability to alter these contract parameters gives them the power to cause financial loss to users. Measures like timelocks and multi-signature wallets help mitigate the risk of financial loss due to centralized elements. Multi-signature wallets distribute control across a larger number of developers, meaning that the loss or compromise of a single private key cannot compromise the entire system. Timelocks mitigate risk by allowing protocol users to exit their positions before a change can take effect.
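A toy model (illustrative only, not any protocol's actual implementation) of how these two mitigations compose: an admin action becomes executable only once a k-of-n approval threshold is met and a mandatory delay has elapsed.

```python
class TimelockedMultisig:
    """Toy model of admin-key mitigations: a k-of-n approval threshold
    combined with a mandatory execution delay (timelock)."""

    def __init__(self, owners, threshold, delay_seconds):
        self.owners = set(owners)
        self.threshold = threshold
        self.delay = delay_seconds
        self.pending = {}  # action -> (earliest execution time, approvers)

    def propose(self, owner, action, now):
        assert owner in self.owners
        self.pending[action] = (now + self.delay, {owner})

    def approve(self, owner, action):
        assert owner in self.owners
        self.pending[action][1].add(owner)

    def can_execute(self, action, now):
        eta, approvers = self.pending[action]
        # Both conditions must hold: enough distinct signers AND the delay
        # has elapsed, giving users a window to exit their positions first.
        return len(approvers) >= self.threshold and now >= eta
```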

  • Oracles (12.5%)

Another large element of centralization risk in these protocols is oracle centralization. There are many different flavors of oracle systems being used to power these protocols. Some protocols use a fully self-operated oracle system, while others use externally operated oracles like Uniswap and Kyber. Samczsun's writeup on oracles and their ability to cause financial loss provides good background information. The oracle centralization score is not focused on whether these price feeds are manipulable (they all are), but on whether a single entity can manipulate them with ease. In the self-operated model, the oracle owner alone can manipulate the data. Decentralized oracles can't be manipulated in the same way, but may not always represent the fair market value of an asset, which is why developers building on top of decentralized oracles opt to use price volatility bounds to defend against these types of attacks.

Disclaimer

The current DeFi Score algorithm uses min max normalization for certain metrics (Utilization Index and Liquidity Index). Anyone can fork the code and add support for new pools. However, if you add a pool that introduces a new lower or upper bound of utilization or liquidity, this will have a material effect on the scores for all other pools. The DeFi score team regularly adds support for new pools once they meet our requirements which you can read more about here.
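A small sketch of why min-max normalization behaves this way; the liquidity figures are made up:

```python
def min_max(values):
    """Scale metric values to [0, 1] relative to the current pool set."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Liquidity (in $M, made up) for three supported pools:
scores_before = min_max([10, 50, 100])        # [0.0, ~0.44, 1.0]
# Fork the repo and add one much larger pool, and every existing score drops:
scores_after = min_max([10, 50, 100, 1000])   # [0.0, ~0.04, ~0.09, 1.0]
```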

Further Reading:

DeFi Score: Assessing Risk in Permissionless Lending Protocols

Contributors


Jack Clancy

💻 📖 📢

Jordan Lyall

📆 📖 🎨

tlip

🎨 🖋

ispytodd

🖋 📝

Anthony H.

🌍

Antonina Norair

📖

Tom French

📖

Kevin Arbi

💻

Community

Join the DeFi Score community on Telegram.

License

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 2.0 Generic License.


defi-score's Issues

Insurance/Regulatory Risk

What's the point of evaluating Insurance/Regulatory Risk in the framework if none of the existing DeFi products can meet those criteria?

dYdX liquidity metric

The way dYdX works, it does not matter what the utilization of DAI is; the token balance in the contract is what determines whether you can withdraw your funds at any time.

The protocol allows for >100% utilization because it is a dollar-denominated protocol that allows users to sell DAI they don't have and others to borrow DAI.

Say there is 0 dai in dydx:
User 1: Has $400 worth of USDC, takes a 4x long USDC/DAI, borrowing 1200 DAI
User 2: Has $400 worth of ETH, posts a limit sell of 1000 DAI, borrowing 1000 DAI

Everyone remains solvent because it is dollar-denominated and there is no DAI in the system. If there were DAI in the system, even at utilization > 100%, you could withdraw up to the DAI balance of dYdX. This is likely the mechanism they will use when they add BTC trading, which I hear is in the works. Also, this is generally how ETH derivatives on centralized exchanges like BitMEX work as well.
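The arithmetic in the example above can be sketched as follows (the helper names are illustrative, not dYdX's actual accounting):

```python
def dai_borrowed_for_long(collateral_usd, leverage):
    """DAI borrowed to open a `leverage`x long against USD-valued collateral."""
    return collateral_usd * (leverage - 1)

user1 = dai_borrowed_for_long(400, 4)  # $400 USDC, 4x long -> borrows 1200 DAI
user2 = 1000                           # limit sell of 1000 DAI, borrowing 1000 DAI
total_borrowed = user1 + user2

def utilization(borrowed, supplied):
    """outstandingDebt / totalAssets; with zero DAI supplied this exceeds
    100% (here it is unbounded), which the protocol tolerates because
    accounting is dollar-denominated."""
    return float("inf") if supplied == 0 else borrowed / supplied
```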

Random unedited thoughts on risk

Preface: I love what you guys are working on. Def much needed. But, realistically, if this very necessary thing is going to keep a diverse range of user types informed of the risks (or even just maybe prevent a few people from throwing fat piles of crypto at random defi platforms that have been overhyped on twitter and underaudited in reality), I think we are going to need more. I'm going to dump it on you. Please don't take it personally. This says more about the space and I am all too pleased to see you take this hard issue head on and move so fast. Your product is needed. Also, please forgive how hot-messy this is. 🚒

What is the goal? Who is the target audience?

Right now DeFi is a fucking dumpster fire 🔥 and any market movement will be like throwing gasoline, old christmas trees, and a pile of overly-hairsprayed hair on top of it 🔥🔥🔥🔥🔥. Also, it's hard to build a product for everyone. Yay!

Questions: What's the end goal or use-case that DeFi Score imagines having the greatest impact? Is a one-size-fits-all number really giving the user what they need to make a somewhat—anywhat—informed decision? Can you use same input for a tailored output for different user-types?

  1. If your goal is to keep idiots from investing in DeFi then you may just want to perfect a very beautiful icon that is universally recognized. Like the poison icon. The radioactive icon. Or the Beware of falling rocks! road sign. Or all of the above. 😅

  2. If the goal is to make users relatively aware that investments into DeFi platforms may... carry risk + carry more risk than just holding crypto + carry more risk than the same action/position in the traditional world (lending, leveraging, shorting) + inform them that not all defi products are created equal.......it may be worth considering how you capture not just the risk, but the relativity of said risk to things a user may be more familiar with.

  3. If the goal is to give product creators access to a score that they can display in end-products then you can de-prioritize a beautiful site for end-users with the scores, dashboard, etc. and instead focus on selling to products that touch these end users. Expend more energy on giving these products access to a diverse range of information + good documentation + good examples + good case studies + empower these products to make their own specific choices as it relates to their product and their user demographics. For example, a mobile wallet targeting noobs may display a simple red-orange-green color system. Multis (a multisig interface for treasury management) may show as much information as possible. The advantage of taking this route is that the end-product should (hopefully) know their users best. This then frees you up to create a product that serves a very wide range of user types without actually having to literally serve all those users directly. (Because, take it from me, it is NOT fun to build a product for all the user types!)

Smart Contract Risk

Misc notes that I couldn't fit elsewhere:

  • the number of audits I've seen where the project didn't implement all, or even most, of the recommended fixes is fucking terrifying. Just because something has been audited, doesn't mean anyone read it. This increases risk as not only is there an issue, but that issue is known and literally spelled out in the security audit for anyone to find!

  • Similarly, a very dangerous time for smart contracts/protocols is after an audit has been complete, but before fixes have been implemented / funds have been moved to a fixed contract.

  • Similarly still, even speculation about the security of a smart contract increases the risk of a bug being found in that contract. For example, the DAO bug was almost found by Gun shortly before the attack. He detailed the issue but hadn't connected one little piece, and so concluded it wasn't actually an issue. Days later the attack started, and lo and behold, if Gun or others had spent a bit more time, or their brains had been working a bit differently, they might have found it first.

Smart contract risk decreases as AMOUNT OF TIME AND MONEY increases

I think this is really, really a huge hole in your current analysis. As someone pointed out, since most contracts can be updated in a myriad of ways, time does NOT specifically reduce that risk. However, it does reduce the risk of attacking the contract directly, economic stuff going haywire in the contract, etc. You already separate security of smart contracts from admin/updatability of contracts, so I'll just be explicit that I'm focused on the contract attack vectors themselves, not the admin ones.

Background:

The absence of security audits and formal verification increases our certainty about the risk of a product far more than their presence does. For clarity: if flipping a coin is 0 and not having an audit is -100, then having an audit would be a 10. 10 is way better than -100, but not much better than 0.

Therefore, any smart contract without a security audit is far more likely to be a scam or have creators who have such low appreciation for security that it is almost certainly insecure. Regardless of all else, this should be weighed very heavily.

A smart contract that has an audit...could totally still be hacked / broken / manipulated. Therefore, this should be weighed less heavily.

(Grain of salt disclosure: I'm obviously still suffering PTSDAO and PTSParity#1 and PTSParity#2.)

Things that do increase certainty around / decrease probability of Bad Things™ happening

How battle-tested and hacker-tested is a contract or system? This is the reason we trust the Gnosis Multisig more than the Gnosis Safe even though they were both created by one of the most diligent teams in the space. In the same vein, if you deployed a multisig and put $1m in it on 1/1/2017 and it wasn't hacked by 1/1/2018, I would be more confident in that contract than the same one being deployed on 1/1/2019 and surviving until 1/1/2020. This is because the amount of hacker eyes and the sophistication of said hackers was greater in 2017 than in 2019.

I am more certain that a contract won't be hacked/broken when...

  • There are more funds held by said contract. This is relative to other crypto contracts and also relative to other things a hacker could do to make a quick buck. Also, having a big honeypot in a single contract is more likely to attract nefarious eyes than a pass-thru contract.

  • It's been in production use for longer. Duh.

  • It's survived periods of high volatility in short amount of time; it's survived large drops or long-term bear market conditions; it's survived large gains or long-term bull market conditions. (note: only applicable in some cases, e.g. DAI)

  • It is more popular on a social / PR level. Does everyone in the ecosystem talk about it (Compound)? Do people outside the ecosystem know about it (the DAO)? Or does no one even know about it (all the other random contracts out there)?

  • It is more used. Have there been a lot of transactions through the contract? Are those transactions by a relatively diverse set of people or market conditions? How big are the transactions? (This one specifically helps more with contracts that fail due to game theory / bad economics / etc.)

  • It hasn't changed, nor have admin contracts changed, nor have addresses changed, etc. Example: DAI: Team, brand, naming, everything looks the same to a layperson. But hey, you know, it's actually been in production for like....70 days. 🤷 The Parity Multisig #1 & #2: had held hundreds of millions for a long time, had been audited, was a slightly different version of the original foundation multisig which still holds the EF's ether (safely) today. But then they modularized it to save gas. And that was enough to make a vulnerability exploitable.

  • It's been hacked before. There is a ton of literature on this sort of subject if you want to dive into this so I'll try to be brief. Essentially there are two viewpoints and I don't know which I agree with more:

    • If something has been hacked + fixed, it's more likely to be secure. Example: Monero vs Zcash. Rationale 1: you learn from experience and mistakes won't be repeated. Rationale 2: code always has bugs, and if you haven't found any it's only because you haven't found them, not because they don't exist.

    • If something has been hacked, it's less likely to be secure. Also, other contracts by same teams are less likely to be secure. Rationale: the company/people don't know how to be secure, there is bad company culture, immaturity, bad QA, contract is too complex, etc. Examples: Parity (obviously). The flipside is Gnosis. Though there still is time before we know if the Gnosis Safe will hold up to the security of the original Gnosis Multisig, their track record has been good so far.

Somehow, these must be captured. I believe this is the #1 factor that will move smart contract security risk around. I would even say that an unaudited, unverified contract by a non-name team that has held billions for a long period of time is more secure than an audited, formally verified, blah blah blah contract by a known team that's held $500k for all of 2019. (Assuming both are non-upgradable, of the same nature, etc.)

PS: I am not alone. Ameen phrases it thus regarding Compound:

The contracts have also held $20M+ for over 6 months, $50M+ for over 2 months, and currently hold $100M+. For me personally, the most important metric of contract security is total funds held in contract * time held in contract, and Compound has been secure with quite a large public bounty thus far.

https://medium.com/@ameensol/what-you-should-know-before-putting-half-a-million-dai-in-compound-fafdb2645f77
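Ameen's "total funds held × time held" metric can be sketched as a simple dollar-day integral, using the figures from the quote above:

```python
def dollar_days(tvl_history):
    """Integral of funds held over time: sum of (days held * USD balance).
    tvl_history is a list of (days_in_period, usd_balance) pairs."""
    return sum(days * balance for days, balance in tvl_history)

# The Compound figures from the quote: $20M+ for ~6 months, $50M+ for ~2 months.
compound_metric = dollar_days([(180, 20_000_000), (60, 50_000_000)])
```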

Everything is relative!!

Do I know what it means if UnknownDeFi#1 has a higher number than UnknownDeFi#2? Probably not, because I don't know what either really are or what the numbers really mean. However, I probably have some sense of the risk of holding crypto vs risk of holding USD vs investing in stocks vs investing in gov't bonds.

If you label a gov't bond as a 1, stocks as a 2, UnknownDeFi#2 as a 90, and UnknownDeFi#1 as 95, and GivingAStrangerAllMyCashToHold as a 100, that's far different than just UnknownDeFi#2 as a 90, and UnknownDeFi#1 as 95.

Consider using or providing icons, pictographs or words

Numbers are meaningless without a lot of context. However, things like these capture relativity and are digestible at a glance:

(image: the color-coded Homeland Security Advisory System threat-level chart)

or...

(image: an app-store-style rating UI)

"Centralization Risk" is so crypto nerdy it hurts

Yes, how contracts are controlled, managed, updated, fed data is potentially highly risky. These are necessary categories!! BUT! Don't classify them as "centralization."

When a crypto-native looks at this type of risk and assigns a category of "centralization," it makes sense. But, when you start with "centralization risk," this is NOT what comes to mind. When I think of centralization I think of The DAO vs Compound vs Blockfi. I think about whether I trust smart contracts more than a custodian. Etc.

May be worth renaming protocol administration and oracles to something else. 🤷

And, since we are on the subject...

Admin/access/upgradability risk is very diverse

  • The Taylor Hack: "Somehow the hacker got access to one of our devices and took control of one of our 1Password files."

  • The Bancor Hack: Hackers accessed a multisig wallet used to upgrade smart contracts and withdrew the money mostly in Ether.

  • Platforms that have a big red button to halt, update, etc. While this introduces the risk of a nefarious third-party gaining access to an admin key and changing shit, it also reduces the risk of total catastrophic loss if a hack does occur. So that's a thing.

  • Platforms that don't have a big red button. There is no way to stop, pause, recover, save, anything in any condition, including bad code, hackers, something breaking, bad economics, etc. So THAT'S a thing.

  • What is the risk of the multisignature contract that protects the upgradability of the platform? Kidding....but not really at all. Imagine if Bancor's admin contract was a Parity multisig and now Bancor's ability to update their shit is locked. Oopsies! Compounding risk is fun!

Compounding Risk

How would one start to be able to calculate the risk of things combined?

  • Multisig example detailed above

  • Risk of holding DAI vs risk of lending DAI via Compound.

  • If I take my hard-earned ETH, exchange for USDC on Uniswap, put the USDC into Compound for cUSDC, then mint the ETHRSIAPY Set on TokenSets...

  • dontsayitdontsayitdontsayitdontsayit ARE YOU SERIOUS WHAT ARE WE GOING TO DO ABOUT SYNTHETIX?! OMG, or pooling sETH via Uniswap. Ahhhhhhhhhhhhhh.....🏃no, run AWAY my emoji dude RUN AWAY!

  • And while some tokens are purely speculative and their risk is very akin to traditional market risk, some (DAI, sETH) do heavily rely on the security of the overall platform. A good test would be to ensure that the risk of standard-speculative-erc20-token is not the same as sETH and neither are the same as ETH. Luckily the market risk (e.g. price to 0 bc entire team and community died suddenly) cancel each other out as they exist for all the tokens so you can focus on the other risks except oh yeah fucking stablecoins. 😉

People / Team / Culture

I don't know how this fits in exactly, but it does. If you talk to two teams in the space you will see differences in priorities, specifically UX vs security. Good example is Gnosis vs Argent. Gnosis is willing to go to market slower, be more diligent, perfect. They are scared. They have crazy internal processes in place for security things. Argent is...just not that. They prioritize getting users, having best UX possible. Which you prefer is subjective, but as the provider of an objective-ish DeFi Score, the emphasis will have to be put strongly on security over UX.

I have a lot of ideas around this and I'm sure others have more, but I'm not sure there's a way to capture this via an algorithm as there is a lot of subjectivity. Ideas:

  • Look at how the team responds to security audits or a person reporting a security issue. The best example of this is Coinomi. 2017: i ignore you on github, i ignore you on reddit, i yell at you and call you fud weeks later, i distract by saying "well at least we're better than another insecure wallet!" 2019: ignores, deflects, hides old tweets, denies, blames the victim, yells, distracts. (source: https://bitsonline.com/coinomi-vulnerability-respond/, https://www.reddit.com/r/CryptoCurrency/comments/av7gfi/warning_coinomi_wallet_critical_vulnerability/). I don't know if Coinomi is secure or insecure, but I sure as hell know that they don't know either and will yell at anyone who tells them they have a problem. I can also tell you white hats and gray hats start looking mighty dark once you treat reporters like shit.

  • How obsessed and paranoid about security are they? Talk to Robert (Compound) for 15 minutes and he will readily advertise that there will always be a non-zero risk for code. This is good. People pointing at their audits if people ask about risk....not so much...that's just deflection. Check out some recent podcasts with Ryan Selkis on what they are doing at Messari. There may be a way to capture universal info by classifying the responses to certain questions. It may just throw a red flag. I don't know.

  • Culture - how they update, when they update, what their github looks like, what any bounties / audits have found, how quickly they respond to a vuln report, how they respond, etc. Is there a correlation between a bunch of critical bugs in initial audit and bugs in the future? Is there a correlation between a bunch of little bugs in the audit and a critical bug being found in the future? https://github.com/solidified-platform/audits may be a good place to start with this data.

Bonus Points: Normal Usage/Market Risk

When I consider my users, one of the biggest risks in integrating defi platforms is whether or not the user actually understands the very-well-studied market risks that occur in any market. In the traditional financial sector, the people investing do know that an asset could go down and could go up. In crypto even this most basic fact isn't necessarily known. More worrisome, some think they know but they really don't.

I categorize these risks separately from the more extreme risks of a contract going to zero or nearly zero. The events that cause these risks happen regardless—it's just a matter of luck whether you lose or win on any given day. This includes things like...

  • The market going up or down, your position changing for the worse, that you may have been better off just holding ETH, or that you lose money because of your position. These will not destroy the contract or system, but they may destroy an individual.

  • Risk of having your individual position wiped out. Again, this will destroy you but not the system. Things like liquidation come to mind.

Example: compare a normal lottery to Pool Together. Playing the lottery has a very high risk of losing your entire "investment". However, with Pool Together, that risk is ~0 (assuming everything else works as intended). If your risk score compares a traditional lottery (but on-chain/with crypto) to a flawless implementation of Pool Together, the scores should be different.

There are like tens of thousands of people who do nothing but research and analyze these types of risks in order to make the right calls in the real world, so it's unlikely you're going to figure this out on a huge level.

BUT perhaps a score could be given to just show whether there's a strong likelihood of retaining initial investment, gaining, losing, or 'your guess is as good as mine, crypto is volatile, shrug.' This would at least help differentiate between gambling vs pool together vs lending vs 100x longing. Right now, I worry that theoretically, a gambling defi thing could have the same risk score as lending which doesn't feel right.

Liquidity Is Not A Measure of Risk

Liquidity
The currently scoped platforms all attempt to incentivize liquidity by using dynamic interest rate models which produce varying rates depending on the level of liquidity in each asset pool. However, incentivized liquidity does not mean guaranteed liquidity. A user takes on the risk that they will not be able to withdraw their lent-out assets on demand because all the assets are currently lent out.

Liquidity risk is assessed by a single data point that is derivable from on-chain data, which is the level of liquidity. This data point is the 30 day EMA of liquidity, normalized using logarithmic min-max normalization of the amount of liquidity in USD across all of the available lending pools. The absolute level of liquidity is used instead of the percentage utilization (outstandingDebt/totalAssets) because it has a side effect of also scoring larger pools higher.
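The methodology described above can be sketched roughly like this (the smoothing constant and the normalization bounds are assumptions, not the project's exact code):

```python
import math

def ema(series, window=30):
    """Exponential moving average with the common 2 / (window + 1) smoothing
    factor (an assumption; the project may weight differently)."""
    alpha = 2 / (window + 1)
    value = series[0]
    for x in series[1:]:
        value = alpha * x + (1 - alpha) * value
    return value

def log_min_max(liquidity_usd, lowest_pool, highest_pool):
    """Logarithmic min-max normalization of USD liquidity across all pools."""
    lo, hi = math.log(lowest_pool), math.log(highest_pool)
    return (math.log(liquidity_usd) - lo) / (hi - lo)

# A pool's liquidity index: smooth 30 days of liquidity, then normalize.
history = [40_000_000 + 500_000 * d for d in range(30)]  # made-up USD figures
index = log_min_max(ema(history), lowest_pool=1_000_000,
                    highest_pool=500_000_000)
```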

This has no bearing on the safety of the protocols from the perspective of lenders, and simply puts a thumb on the scale for bigger protocols, which is the opposite of the way risk actually flows. If the intention is to create information in the marketplace that can help lenders determine the risk-adjusted rate, then this is not only not useful but actively distortionary. Only factors that can cause the underlying lending pool to shrink in size due to losses should be considered. To this end, collateral quality is an excellent metric while size of the lending pool is an irrelevant metric.

This is a UX issue, not an issue of risk. For all of the protocols, eventually borrowers will be liquidated or forced to pull out as borrowing rates skyrocket. This means that, while possibly annoying, it will take a few weeks for lenders to get back their full funds. It also means they'll be earning extremely high interest rates while they wait. Again, as this does not reflect a potential loss to lenders, it doesn't have bearing on the effective risk-adjusted rate.

I would suggest that a better and more relevant criterion would be to quantify the throughput of the liquidation mechanism and to weigh it against the quantity of liquidation that would be needed given the level of borrowing activity on the platform. It should be noted that smaller protocols with a lower absolute quantity of funds being borrowed are actually safer because they require less throughput in their liquidation mechanism, and are therefore less likely to result in borrowers defaulting, ceteris paribus.
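One way the suggested throughput criterion might be expressed (the 25% stress fraction and the function name are arbitrary illustrations, not a proposed standard):

```python
def liquidation_headroom(total_borrowed_usd, capacity_usd_per_day,
                         stress_fraction=0.25):
    """Daily liquidation capacity divided by the volume needing liquidation
    if `stress_fraction` of open borrows went underwater at once.
    >= 1.0 suggests the mechanism keeps up; < 1.0 suggests default risk."""
    required = total_borrowed_usd * stress_fraction
    return capacity_usd_per_day / required

# Same liquidation capacity, different book sizes: the smaller book is
# safer, ceteris paribus.
big = liquidation_headroom(100_000_000, 10_000_000)   # 0.4
small = liquidation_headroom(4_000_000, 10_000_000)   # 10.0
```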

Supporting Oracle Manipulation Metric

Given the recent exploitation of the Compound money markets, community members have raised concerns about Compound receiving a higher score than Aave, despite Compound having more easily manipulable oracles.

To address this, we want to add a new component to the financial score, which is the cost of oracle manipulation. There are a couple of different ways to measure this oracle risk, depending on the platform:

  1. Compound (order-book-based oracle): what is the cost to move the price by +/-10%?
  2. If a protocol uses Chainlink: what are the underlying sources, and what are the incentives to report prices honestly?
  3. If using an on-chain TWAP oracle like Uniswap: what is the cost to move the price by +/-10%?

An acceptable solution to this bounty would be a general design for how to measure oracle price manipulation across all of these providers in a fairly consistent way. Additionally, some scratch code would be nice but is not absolutely required.
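For the constant-product case, a lower-bound back-of-envelope estimate of the single-block cost follows directly from x · y = k; this sketch ignores fees, the arbitrage response, and the fact that moving a TWAP requires sustaining the move across the averaging window (the pool figures are illustrative):

```python
import math

def cost_to_move_price(x_reserve, y_reserve, factor):
    """Token-Y input needed to multiply a constant-product pool's spot price
    (y/x) by `factor`. From x*y = k: x' = x/sqrt(factor), y' = y*sqrt(factor),
    so the attacker pays y * (sqrt(factor) - 1)."""
    return y_reserve * (math.sqrt(factor) - 1)

# Illustrative pool: 10,000 ETH against 20,000,000 DAI (spot price $2,000).
cost_up_10_pct = cost_to_move_price(10_000, 20_000_000, 1.10)
```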

Composability must be included in risk assessment

Tokenisation and smart contract interaction provide opportunities to programmatically create new interactions on top of existing smart contract protocols. This makes it possible to build financial protocols or derivatives that utilise existing DeFi protocols, such as lending, exchanges, derivatives and insurance-like products.

As an example, lending protocols such as Aave and Compound utilise the MakerDAO stablecoin protocol, and their liquidation networks use Kyber Network or Uniswap as a means to liquidate collateralised positions, securing the network from insolvency.

In such a case, these lending protocols are exposed to the systemic risk inherent in the underlying protocols they are de facto relying upon. In the above case, Aave and Compound are exposed to MakerDAO, Kyber and Uniswap (directly or indirectly).

The rabbit hole goes deeper when derivative protocols are built as third (and beyond) layers on top of the DeFi infrastructure: these protocols are exposed to risk in multiple layers, and similarly the underlying protocols (level 1, etc.) are exposed to the risk of these higher-level protocols as they provide and consume liquidity.

On the other hand, composability could work as risk mitigation as well. For example, instead of running an oracle network of one's own, one could join a network that is well functioning and more decentralized. Similarly, third-layer protocols could rely on multiple layer-1 and layer-2 protocols for diversification purposes and mitigate risk through composability.

In traditional finance such risk assessment is handled by the amount of trust each counter-party can sustain relative to the risk/reward captured. However, as trust-based systems are based on credit between all counter-parties, a black swan event usually causes a large domino effect, as everyone is exposed to everyone in the market.

To improve the risk assessment, I recommend taking composability into account within the scoring system, as a protocol might inherit risk from other protocols. This would encourage protocol builders to rely on protocols that mitigate risk.

Margin Maintenance Is Critical To Counterparty Risk

Right now the scores don't consider differences in margin maintenance between platforms. The higher the margin maintenance requirement, the safer it is for lenders to trust that borrowers won't default in the case of a sudden market movement. Compound's margin maintenance is the highest of all the platforms, with 150% margin maintenance for simply borrowing DAI against ETH, meanwhile Fulcrum and dYdX have a margin maintenance of 115% for the same asset. I believe Nuo's margin maintenance is even lower, and has been around 105-110% (I can't find documentation on Nuo's site about this now). I believe Nuo's low margin maintenance was a critical factor in the losses the platform caused lenders to sustain earlier in the year.

I believe this is a fairly critical metric to consider when assessing the risk to lenders, perhaps even more so than the throughput of the liquidation mechanism itself.
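To make the trade-off concrete, here is a minimal sketch of a simplified margin model. All positions and ratios are illustrative, not taken from any platform's documentation, and it assumes liquidation triggers exactly when collateral falls below debt times the maintenance ratio:

```python
# Simplified margin model: the maintenance ratio determines both how far
# the collateral price can drop before liquidation triggers, and how much
# cushion remains for liquidators to unwind the position at that point.

def liquidation_stats(collateral_value, debt_value, maintenance_ratio):
    """Return (price drop to liquidation trigger, cushion at trigger).

    A position is liquidatable once
    collateral_value < debt_value * maintenance_ratio.
    """
    trigger_value = debt_value * maintenance_ratio
    drop_to_trigger = max(0.0, 1.0 - trigger_value / collateral_value)
    # At the trigger, collateral still exceeds debt by this fraction --
    # the margin of error before lenders actually take a loss.
    cushion_at_trigger = maintenance_ratio - 1.0
    return drop_to_trigger, cushion_at_trigger

# 100 DAI borrowed against $200 of ETH:
print(liquidation_stats(200, 100, 1.50))  # 150%: triggers at -25%, 50% cushion
print(liquidation_stats(200, 100, 1.15))  # 115%: triggers at -42.5%, 15% cushion
```

The 115% platform lets the position ride a much larger drop before intervening, but leaves liquidators only a 15% cushion to complete the liquidation during a crash — which is exactly the lender-safety gap the post describes.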

Privacy audit or project compliance disclosure?

Hi All:

As we know, most DeFi projects face regulatory risk because there are few government guidelines to follow.

But another part of regulatory risk comes from privacy law, not just finance-related regulation; the EU GDPR and the CCPA (California Consumer Privacy Act) already exist to follow. Do the projects comply with those privacy acts? Should they have completed a privacy compliance audit? Or should their website or GitHub have a section where people can follow up on the project's compliance?

My suggestion is to consider a privacy audit report or regulatory compliance disclosure as part of the DeFi Score.

Or are all these projects fully open, with no privacy regulations they need to comply with?

Please advise, Many thanks.

Anthony

Asset-based LTVs and adjusting LTV levels based on volatility mitigate risk

The current DeFi Score takes collateral and loan-to-value (LTV) ratios into account as a risk factor within overcollateralized lending protocols. These protocols either apply asset-specific LTVs, as Aave and Compound do, or apply the same LTV across all listed assets (the so-called MakerDAO model).

Firstly, applying the same LTV across all assets increases the protocol's risk, since the peculiarities of different assets are not taken into account. More importantly, different assets carry different liquidity risk on the secondary market, which affects the ability to resell the collateral there during a liquidation event. The LTV level directly determines exposure to an asset's liquidity risk, since a higher LTV triggers liquidation more quickly under volatility. Protocols that apply one LTV across all assets are subject to higher risk and should therefore receive a lower overall score.

Secondly, the volatility of an asset affects the liquidation event, as stated in the DeFi Score white paper. The ability to react to an asset's volatility over time should be included in the risk assessment. Lending protocols that monitor assets and adjust their LTV ratios as volatility changes bear less risk than protocols that do not adjust LTV ratios when an asset's volatility changes radically.

For example, an asset might gain 40-60% in value in a single day (the REP example on 16 January 2020). A borrower pledges the asset as collateral at 66% LTV (which might be the maximum for that particular asset). If the price rise is momentary and bounces back down 40-60%, there will be more liquidation events for new borrowers (who pledged collateral at the inflated market price) as the collateral value falls back to normal, increasing sell pressure on the asset through a chain of liquidations. Therefore, lending protocols that do not react to increased volatility by adjusting LTV ratios are subject to higher risk, and their overall score should decrease.
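The REP-style scenario can be sketched with a few lines of arithmetic. All prices, LTVs, and the $16 spike figure below are illustrative assumptions, and the model simplifies by treating the max-borrow LTV as the liquidation threshold (real protocols typically use a separate, higher liquidation threshold):

```python
# Toy liquidation-price calculation for the price-spike scenario:
# borrowing the maximum at a spiked price leaves no cushion, while a
# volatility-adjusted (lower) LTV leaves room for the price to fall back.

def liquidation_price(borrowed_value, collateral_units, max_ltv):
    """Collateral price below which the position breaches the LTV limit."""
    return borrowed_value / (collateral_units * max_ltv)

spike_price = 16.0  # assumed momentary spike price per unit

# Borrower pledges 100 units at the spike and borrows the max at 66% LTV:
borrowed = 100 * spike_price * 0.66
liq_max = liquidation_price(borrowed, 100, 0.66)   # equals the spike price
# ...so any bounce back below $16 immediately breaches the limit.

# A protocol that tightened LTV to 40% during the spike lends less,
# leaving a cushion down to roughly $9.70 before the limit is breached:
borrowed_adj = 100 * spike_price * 0.40
liq_adj = liquidation_price(borrowed_adj, 100, 0.66)
print(round(liq_max, 2), round(liq_adj, 2))  # 16.0 9.7
```

A 40-60% bounce-back from $16 lands around $6.40-$9.60, so even the tightened LTV only barely survives — which is why the post argues volatility-reactive LTVs should raise a protocol's score rather than merely avoiding the worst case.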

An analogy from property price shocks and boom-bust scenarios, and how adjusting LTVs relates to risk: https://www.ey.com/Publication/vwLUAssets/ey-effectiveness-of-loan-to-value-ratio-policy-and-its-transmission-mechanism-empirical-evidence-from-hong-kong/$FILE/ey-effectiveness-of-loan-to-value-ratio-policy-and-its-transmission-mechanism-empirical-evidence-from-hong-kong.pdf
