
Default parameters for newly deployed collateral types are too low and create liquidation risk #8960

Open · otoole-brendan opened this issue Feb 21, 2024 · 13 comments
Labels: enhancement (New feature or request)


otoole-brendan commented Feb 21, 2024

What is the Problem Being Solved?

Our default parameters for new collateral additions are too low. They recently forced us to put out comms to two ambitious users (who squeaked in with small vaults before we smoke-tested and increased the parameters) so that they could avoid liquidation.

Current default parameters are:
  • Mint Limit: 1,000 IST
  • Collateralization Ratio: 170%
  • Liquidation Ratio: 150%

Description of the Design

For all new mainnet core-eval collateral adds

  • Mint Limit: unchanged (1,000 IST)
  • Collateralization Ratio: increase to 400%
  • Liquidation Ratio: increase to 380%

Ensure automated/CI testing is updated to account for this change and does not break
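As a quick sanity check on the proposed numbers in plain JavaScript (assuming, per the discussion later in this thread, that the minimum collateralization ratio is the liquidation margin plus a padding):

```javascript
// Plain-JS sketch (not Agoric code): ratios expressed as whole-percent BigInts.
const proposed = {
  mintLimitIST: 1_000n,            // unchanged
  collateralizationRatioPct: 400n, // minimum collateralization ratio
  liquidationRatioPct: 380n,       // liquidation margin
};

// padding = minimum collateralization ratio - liquidation margin
const paddingPct =
  proposed.collateralizationRatioPct - proposed.liquidationRatioPct;

console.log(`implied liquidationPadding: ${paddingPct}%`); // prints 20%
```

So under the current margin/padding split, these defaults would imply a padding of 20 percentage points instead of today's 25.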

Test Plan

Ensure CI/automated testing is updated and doesn't break.

Further context/discussion in Slack:

https://agoricopco.slack.com/archives/C038TM0EUKG/p1708531858850789?thread_ts=1708531023.979719&cid=C038TM0EUKG

@anilhelvaci

Could you elaborate on how the Collateralization Ratio is calculated, @otoole-brendan? I see LiquidationMargin, which corresponds to "Liquidation Ratio" here; however, I can't see a Collateralization Ratio:

makeParamManagerSync(publisherKit, {
  [DEBT_LIMIT_KEY]: [ParamTypes.AMOUNT, debtLimit],
  [INTEREST_RATE_KEY]: [ParamTypes.RATIO, interestRate],
  [LIQUIDATION_PADDING_KEY]: [ParamTypes.RATIO, liquidationPadding],
  [LIQUIDATION_MARGIN_KEY]: [ParamTypes.RATIO, liquidationMargin],
  [LIQUIDATION_PENALTY_KEY]: [ParamTypes.RATIO, liquidationPenalty],
  [MINT_FEE_KEY]: [ParamTypes.RATIO, mintFee],
  // ...
});

My guess is that the Collateralization Ratio is the result of adding LiquidationMargin and LiquidationPadding in some way, but I don't have a calculation for how a LiquidationPadding of 25% and a LiquidationMargin of 150% add up to a Collateralization Ratio of 170%. I am trying to understand whether we need to change anything other than the LiquidationMargin when updating the default params for new collateral managers.

For reference, here is how LiquidationPadding and LiquidationMargin are currently set:

liquidationPadding: makeRatio(25n, stable),
liquidationMargin: makeRatio(150n, stable),
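The way these two values combine can be sketched in plain JavaScript (a simplification, not the actual @agoric/zoe ratio library): same-denominator ratios add numerator-wise, which is how 150% and 25% yield a 175% minimum collateralization ratio.

```javascript
// Minimal ratio representation: numerator/denominator as BigInts over percent.
const makeRatio = (numerator, denominator = 100n) => ({ numerator, denominator });

// Adding two ratios that share a denominator: (a/d) + (b/d) = (a + b)/d.
const addRatios = (x, y) => {
  if (x.denominator !== y.denominator) throw Error('denominators must match');
  return makeRatio(x.numerator + y.numerator, x.denominator);
};

const liquidationMargin = makeRatio(150n);  // 150%
const liquidationPadding = makeRatio(25n);  // 25%
const mcr = addRatios(liquidationMargin, liquidationPadding);
console.log(`${(mcr.numerator * 100n) / mcr.denominator}%`); // prints 175%
```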

@otoole-brendan

@anilhelvaci #6910 discusses the approach we went with here:

Liquidation Margin = Liquidation Ratio
Minimum Collateralization Ratio = Liquidation Margin + Padding

I might have been wrong about the default MCR being 170%; you're probably right that it's currently 175%.

@anilhelvaci

Yeah, the snippet below suggests that Minimum Collateralization Ratio = 175%, where LiquidationMargin = 150% and LiquidationPadding = 25%, @otoole-brendan:

// Simplifying the expression:
// (and + bnd) / y d**2
// (a + b) nd / y d**2
// ((a + b) n / y d) * (d / d)
// (a + b) n / yd
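The algebraic simplification quoted above can be checked numerically; with the thread's values a = 150 and b = 25 over a shared denominator d = 100 (n and y are arbitrary, chosen here just for the check), both forms agree:

```javascript
// Numeric check of the simplification (a·n·d + b·n·d) / (y·d²) = (a + b)·n / (y·d).
const a = 150, b = 25; // liquidationMargin and liquidationPadding numerators
const d = 100;         // shared denominator
const n = 7, y = 3;    // arbitrary second ratio, for the check only

const longForm = (a * n * d + b * n * d) / (y * d * d);
const shortForm = ((a + b) * n) / (y * d);
console.log(longForm === shortForm); // prints true
```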


dckc commented Jun 27, 2024

Evaluating a great big bundle of code every time we add a vault collateral is messy, at best. Since we need a new bundle for addAssetToVault and such anyway, I hope we'll consider making it a new contract (or just a vat?), so that in the future, we don't need to evaluate a big bundle but rather just make a call to the contract/vat.

I did some prototyping:

cc @Jorge-Lopes


dckc commented Jun 27, 2024

The fact that changing the Liquidation Ratio isn't a one-line tweak in the core-eval that adds a new vault is a consequence of the incomplete and ad hoc connections between governance parameter values (and contract terms) and config file options. I hope we'll consider solving the more general problem:

cc @Jorge-Lopes

@anilhelvaci

Evaluating a great big bundle of code every time we add a vault collateral is messy, at best. Since we need a new bundle for addAssetToVault and such anyway, I hope we'll consider making it a new contract (or just a vat?), so that in the future, we don't need to evaluate a big bundle but rather just make a call to the contract/vat.

I did some prototyping:

cc @Jorge-Lopes

I like this idea 👍 However, I thought the point of making a core-eval for addAssetToVault was to limit the ability to add new collateral managers to the BLDer DAO's authority. Are you suggesting we change this?


dckc commented Jun 28, 2024

...

I like this idea 👍 However, I thought the point of making a core-eval for addAssetToVault was to limit the ability to add new collateral managers to the BLDer DAO's authority. Are you suggesting we change this?

No; the relevant method(s) would be on the creatorFacet of this contract-to-be, which would only be available in the bootstrap context and hence require BLD staker governance to access. So the core eval code to add a new vault collateral type would be something like:

const details = { issuerName: 'Tok123', denom: 'ibc/...', decimalPlaces: 18, liquidationRatio: 3.25, ... };

async ({ consume: { interAssetKit } }) => E(E.get(interAssetKit).creatorFacet).addCollateralType(details);

That's it. Short and sweet.
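To make the shape of this flow concrete, here is a mocked-up sketch in plain JavaScript (interAssetKit and addCollateralType are the hypothetical names from the comment above, not actual Agoric SDK APIs; plain await stands in for E(...) calls):

```javascript
// Stub of the hypothetical interAssetKit contract kit; illustrative only.
const makeInterAssetKitStub = () => {
  const collateralTypes = [];
  const creatorFacet = {
    addCollateralType: async details => {
      collateralTypes.push(details);
      return `added ${details.issuerName}`;
    },
  };
  return { creatorFacet, collateralTypes };
};

// The short core-eval, written against a BootstrapPowers-style promise space.
const addTok123 = async ({ consume: { interAssetKit } }) => {
  const { creatorFacet } = await interAssetKit;
  return creatorFacet.addCollateralType({
    issuerName: 'Tok123',
    decimalPlaces: 18,
    liquidationRatio: 3.25,
  });
};

// Simulated promise-space entry, as bootstrap would provide it:
const kit = makeInterAssetKitStub();
addTok123({ consume: { interAssetKit: Promise.resolve(kit) } })
  .then(msg => console.log(msg)); // logs 'added Tok123'
```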


Jorge-Lopes commented Jun 28, 2024

Thank you very much for your feedback, @dckc.

Your approach appears to be a more suitable solution for a process we anticipate will become increasingly frequent in the future.
However, it currently falls outside the scope of the specifications defined for this assignment, which is pending a decision as described in this issue.

We will notify @otoole-brendan about this potential path, and if he agrees, we will proceed accordingly.

@otoole-brendan

@Jorge-Lopes @anilhelvaci what Dan is proposing seems like an overall improvement, but I'm failing to see whether there are any major trade-offs to consider with this change in approach. Could you break it down in simpler terms so I can understand?

we don't need to evaluate a big bundle but rather just make a call to the contract/vat


Jorge-Lopes commented Jul 12, 2024

@Jorge-Lopes @anilhelvaci what Dan is proposing seems like an overall improvement, but I'm failing to see whether there are any major trade-offs to consider with this change in approach. Could you break it down in simpler terms so I can understand?

Both designs require a core-eval to be proposed and accepted by the BLDer DAO; the difference lies in the load the core-eval puts on the system.

In order to break it down in more detail we would like some help from @dckc to clarify a few things:

  • Are you suggesting that we move away from the core-eval generated using agoric run?

As far as we know, in order to import tooling in the core-evals, agoric run needs to bundle code that handles the logic.

Thank you in advance.


dckc commented Jul 17, 2024

  • Are you suggesting that we move away from the core-eval generated using agoric run?

Yes.

As far as we know, in order to import tooling in the core-evals, agoric run needs to bundle code that handles the logic.

We're grabbing the tooling from the BootstrapPowers promise space: async ({ consume: { interAssetKit } }) => { ... }
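A minimal sketch of what such a promise space looks like (a plain-JavaScript simplification of the bootstrap pattern, not the real @agoric/vats implementation): each name on consume is a promise that settles when the matching produce side resolves it, so a core-eval can grab tooling by name without bundling it.

```javascript
// Simplified promise space: consume.<name> is a promise;
// produce.<name>.resolve(value) fulfills it.
const makePromiseSpace = () => {
  const entries = new Map();
  const provide = name => {
    if (!entries.has(name)) {
      let resolve;
      const promise = new Promise(r => { resolve = r; });
      entries.set(name, { promise, resolve });
    }
    return entries.get(name);
  };
  return {
    consume: new Proxy({}, { get: (_t, name) => provide(name).promise }),
    produce: new Proxy({}, {
      get: (_t, name) => ({ resolve: value => provide(name).resolve(value) }),
    }),
  };
};

// A core-eval can consume interAssetKit even before bootstrap produces it.
const space = makePromiseSpace();
const done = (async ({ consume: { interAssetKit } }) => {
  const kit = await interAssetKit; // settles once produced
  return kit.label;
})(space);
space.produce.interAssetKit.resolve({ label: 'interAssetKit (hypothetical)' });
done.then(label => console.log(label)); // logs 'interAssetKit (hypothetical)'
```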

@Jorge-Lopes

Greetings @otoole-brendan
Here, we will try to describe both approaches identified above and highlight their differences.

Approach 1: Core-eval Generated Using agoric run

When a core-eval is generated using agoric run, it exports a function called behaviour. On-chain, this function needs to evaluate the manifestBundle to get the manifest and its metadata. This means that every time a new proposal is submitted to add a new vault type, the large bundle will be evaluated again.

Approach 2: Core-eval Manually Generated

The second approach involves deploying a new contract that exposes the required methods on its creatorFacet to add a new collateral type and to change the initialParamValues if necessary.

Then, a core-eval is manually created, fetching all the necessary powers from the BootstrapPowers space. This allows the core-eval to carry out the logic, such as calling addCollateralType, without needing to evaluate the manifest bundle.

Conclusion

We understand and are able to implement what Dan is suggesting. However, in order to identify which option is more suitable, we believe the OpCo engineering team should discuss and decide on the correct way forward, since both approaches have their merits and the differences between them are purely technical.

The fundamental question is:

Should we keep using the core-eval generated by agoric run when adding a new collateral type? Or should we write a new core-eval as Dan suggested?


dckc commented Jul 18, 2024

one detail...

Approach 2: Core-eval Manually Generated

In practice, that short core-eval will likely be generated using the Cosmos Proposal Builder web form.
