
2023-11-convergence-judging's Introduction

Issue H-1: Lowering the gauge weight can disrupt accounting, potentially leading to both excessive fund distribution and a loss of funds.

Source: #94

Found by

0xDetermination, ZanyBonzy, bitsurfer, bughuntoor, hash

Summary

Similar issues were found by users 0xDetermination and bart1e in the Canto veRWA audit, which uses a similar gauge controller type.

Vulnerability Detail

  • When the _change_gauge_weight function is called, the points_weight[addr][next_time].bias and time_weight[addr] are updated, but the slope is not.
def _change_gauge_weight(addr: address, weight: uint256):
    # Change gauge weight
    # Only needed when testing in reality
    gauge_type: int128 = self.gauge_types_[addr] - 1
    old_gauge_weight: uint256 = self._get_weight(addr)
    type_weight: uint256 = self._get_type_weight(gauge_type)
    old_sum: uint256 = self._get_sum(gauge_type)
    _total_weight: uint256 = self._get_total()
    next_time: uint256 = (block.timestamp + WEEK) / WEEK * WEEK

    self.points_weight[addr][next_time].bias = weight
    self.time_weight[addr] = next_time

    new_sum: uint256 = old_sum + weight - old_gauge_weight
    self.points_sum[gauge_type][next_time].bias = new_sum
    self.time_sum[gauge_type] = next_time

    _total_weight = _total_weight + new_sum * type_weight - old_sum * type_weight
    self.points_total[next_time] = _total_weight
    self.time_total = next_time

    log NewGaugeWeight(addr, block.timestamp, weight, _total_weight)
  • The equation f(t) = c - mt represents the gauge's decay before the weight is reduced; m is the slope. After the weight is reduced by an amount k using the change_gauge_weight function, the equation becomes f(t) = c - k - mt. The slope m remains unchanged, but the t-axis intercept moves from t1 = c/m to t2 = (c - k)/m.
  • Slope adjustments that should be applied to the global slope when a gauge's decay reaches 0 are stored in the changes_sum hashmap, which is not affected by changes in gauge weight. Consequently, there is a time window of length t1 - t2 during which the slope changes applied to the global state when users called vote_for_gauge_weights remain applied even though they should have been subtracted. This in turn creates a situation in which the global weight is less than the sum of the individual gauge weights, resulting in an accounting error.
  • So, in the CvgRewards contract, when the writeStakingRewards function invokes _checkpoint, which subsequently triggers gauge_relative_weight_writes for the relevant time period, the calculated relative weight is inflated, leading to an increase in the distributed rewards. If all available rewards are distributed before the entire array is processed, the remaining users will receive no rewards.
  • The issue mainly arises when a gauge's weight has completely diminished to zero. This is certain to happen if a gauge with a non-zero bias, non-zero slope, and a t-intercept exceeding the current time is killed using the kill_gauge function.
  • Additionally, decreasing a gauge's weight introduces inaccuracies in its decay equation, as is evident in the t-intercept.
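The intercept shift described above can be checked with a small numeric sketch (c, m, and k are arbitrary illustrative values):

```typescript
// Gauge decay: f(t) = c - m*t, which reaches zero at t1 = c/m.
// After _change_gauge_weight lowers the bias by k (slope untouched),
// the decay becomes f(t) = (c - k) - m*t, which reaches zero at t2 = (c - k)/m.
const c = 1000; // gauge bias before the weight change (arbitrary)
const m = 10;   // gauge slope (arbitrary)
const k = 300;  // amount the weight is lowered by (arbitrary)

const t1 = c / m;        // original zero-crossing: 100
const t2 = (c - k) / m;  // zero-crossing after the cut: 70

// Slope changes stored in changes_sum are still keyed to t1, so during
// (t2, t1] the global slope keeps including m even though this gauge's
// weight has already reached zero: the global weight under-counts.
const windowWhereGlobalIsWrong = t1 - t2; // 30 time units

console.log({ t1, t2, windowWhereGlobalIsWrong });
```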

Impact

The way rewards are calculated is broken, leading to an uneven distribution of rewards, with some users receiving too much and others receiving nothing.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/e894be3e36614a385cf409dc7e278d5b8f16d6f2/sherlock-cvg/contracts/Locking/GaugeController.vy#L568C1-L590C1

https://github.com/sherlock-audit/2023-11-convergence/blob/e894be3e36614a385cf409dc7e278d5b8f16d6f2/sherlock-cvg/contracts/Rewards/CvgRewards.sol#L189

https://github.com/sherlock-audit/2023-11-convergence/blob/e894be3e36614a385cf409dc7e278d5b8f16d6f2/sherlock-cvg/contracts/Rewards/CvgRewards.sol#L235C1-L235C91

https://github.com/sherlock-audit/2023-11-convergence/blob/e894be3e36614a385cf409dc7e278d5b8f16d6f2/sherlock-cvg/contracts/Locking/GaugeController.vy#L493

https://github.com/sherlock-audit/2023-11-convergence/blob/e894be3e36614a385cf409dc7e278d5b8f16d6f2/sherlock-cvg/contracts/Locking/GaugeController.vy#L456C1-L475C17

https://github.com/sherlock-audit/2023-11-convergence/blob/e894be3e36614a385cf409dc7e278d5b8f16d6f2/sherlock-cvg/contracts/Locking/GaugeController.vy#L603C1-L611C54

Tool used

Manual Review

Recommendation

Disable weight reduction, or only allow reset to 0.

Discussion

0xR3vert

Hello,

Thanks a lot for your attention.

Thank you for your insightful observation. Upon thorough examination, we acknowledge that such an occurrence could indeed jeopardize the protocol. We are currently exploring multiple solutions to address this issue. We are considering removing the function change_gauge_weight entirely and not distributing CVG inflation on killed gauges, similar to how Curve Protocol handles their gauges.

Therefore, in conclusion, we must consider your issue as valid.

Regards, Convergence Team

CergyK

Escalate

The severity of this issue is low for two reasons:

  • It is an admin endpoint; the admin can be trusted not to use this feature lightly, and to take preventative measures to ensure that accounting is not disrupted, such as ensuring that there are no current votes for a gauge and locking voting before setting a weight.

  • _change_gauge_weight has this exact implementation in the battle-tested curve dao contract (in usage for more than 3 years now without notable issue): https://github.com/curvefi/curve-dao-contracts/blob/master/contracts/GaugeController.vy

sherlock-admin2


You've created a valid escalation!

To remove the escalation from consideration: Delete your comment.

You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.

0xDetermination

Addressing the escalation points:

  1. If there are no current votes for a gauge, the weight can't be lowered. Also, locking voting is not really relevant for this issue. I don't think there are any preventative measures that can be taken other than never lowering the gauge weight.
  2. This function is in the live Curve contract, but it has never been called (see https://etherscan.io/advanced-filter?fadd=0x2f50d538606fa9edd2b11e2446beb18c9d5846bb&tadd=0x2f50d538606fa9edd2b11e2446beb18c9d5846bb&mtd=0xd4d2646e%7eChange_gauge_weight)

nevillehuang

I think for all issues regarding killing gauges and changing gauge weights (#18, #94, #122, #192), the arguments assume the following:

  1. The admin would perform appropriate measures before executing admin-gated functions - To me, this is not as clear cut as admin input would be. From the sponsor's confirmation, you can see that they did not understand the severity of performing such actions, so it would not be unreasonable to say they were not aware enough to take such preventive measures before these functions are called. If not, I think this should have been explicitly mentioned in the known design considerations in the contest details and/or the contract details here, where the purpose of locking and pausing votes is stated.

  2. Afaik, when @CergyK mentions the admin can take preventive measures such as ensuring there are no current votes and locking votes: what would happen in a scenario where the gauge an admin wants to change the weight of or kill (e.g. a malicious token) has votes assigned? Wouldn't that essentially mean the admin can never do so, and therefore break core functionality?

  3. The admin would never call change_gauge_weight because it has never been called in a live Curve contract - This is pure speculation; just because Curve doesn't call it does not mean Convergence never would.

Czar102

See my comment here.

Planning to reject the escalation.

nevillehuang

@0xDetermination @deadrosesxyz @10xhash

Do you think this is a duplicate of #192 because they seem similar. I am unsure if fixing one issue will fix another, given at the point of contest, it is intended to invoke both functions.

0xDetermination

@nevillehuang I think any issue with a root cause of 'lowering gauge weight' should be considered the same if I understand the duplication rules correctly. So it seems like these are all dupes.

Czar102

As long as an issue is valid and the root cause is the same, they are currently considered duplicates. Both #192 and #94 have a root cause of not handling the slope in _change_gauge_weight, despite having different impacts.

Czar102

Result: High Has duplicates

sherlock-admin2

Escalations have been resolved successfully!

Escalation status:

walk-on-me

Hello, we fixed this issue on this PR.

You can see on these comments, description of the fix :

IAm0x52

Fix looks good. _change_gauge_weight has been removed completely

Issue H-2: LockingPositionDelegate::manageOwnedAndDelegated unchecked duplicate tokenId allow metaGovernance manipulation

Source: #126

Found by

bughuntoor, cawfree, cergyk, hash, lemonmon, r0ck3tz

Summary

A malicious user can multiply his share of metaGovernance delegation for a tokenId by adding that token multiple times when calling manageOwnedAndDelegated.

Vulnerability Detail

Without checks to prevent the addition of duplicate token IDs, a user can artificially inflate their voting power and their metaGovernance delegations.

A malicious user can add the same tokenId multiple times, and thus multiply his own share of meta governance delegation with regards to that tokenId.

Scenario:

  1. Bob delegates a part of metaGovernance to Mallory - he allocates 10% to her and 90% to Alice.
  2. Mallory calls manageOwnedAndDelegated and adds the same tokenId 10 times, each time allocating 10% of the voting power to herself.
  3. Mallory now has 100% of the voting power for the tokenId, fetched by calling mgCvgVotingPowerPerAddress, harming Bob's and Alice's metaGovernance voting power.
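The inflation in step 3 is plain summation: the voting-power loop adds one share per entry in the delegated-token list, so n duplicate entries multiply the share by n. A minimal sketch of that summation (hypothetical helper names; percentages as integers out of 100):

```typescript
// Hypothetical model of the summation loop in mgCvgVotingPowerPerAddress:
// each entry in the delegated list contributes balance * pct / 100.
function votingPower(
  delegatedList: number[],
  balanceOf: (id: number) => number,
  pct: number
): number {
  let total = 0;
  for (const tokenId of delegatedList) {
    total += (balanceOf(tokenId) * pct) / 100;
  }
  return total;
}

const balanceOf = (_id: number) => 1000; // token 1 holds 1000 mgCVG (arbitrary)

// Honest list: token 1 appears once, Mallory was delegated 10%.
console.log(votingPower([1], balanceOf, 10)); // 100

// Malicious list: token 1 appears 10 times -> 10x the legitimate power.
console.log(votingPower([1, 1, 1, 1, 1, 1, 1, 1, 1, 1], balanceOf, 10)); // 1000
```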

Impact

The lack of duplicate checks can be exploited by a malicious user to manipulate the metaGovernance system, allowing her to gain illegitimate voting power (up to 100%) on a delegated tokenId, harming the delegator and the other delegations of the same tokenId.

Code Snippet

    function manageOwnedAndDelegated(OwnedAndDelegated calldata _ownedAndDelegatedTokens) external {
        ...

❌      for (uint256 i; i < _ownedAndDelegatedTokens.owneds.length;) { //@audit no duplicate check
        ...
        }

❌      for (uint256 i; i < _ownedAndDelegatedTokens.mgDelegateds.length;) { //@audit no duplicate check
        ...
        }

❌      for (uint256 i; i < _ownedAndDelegatedTokens.veDelegateds.length;) { //@audit no duplicate check
        ...
        }
    }
function mgCvgVotingPowerPerAddress(address _user) public view returns (uint256) {
        uint256 _totalMetaGovernance;
        ...
        /** @dev Sum voting power from delegated (allowed) tokenIds to _user. */
        for (uint256 i; i < tokenIdsDelegateds.length; ) {
            uint256 _tokenId = tokenIdsDelegateds[i];
            (uint256 _toPercentage, , uint256 _toIndex) = _lockingPositionDelegate.getMgDelegateeInfoPerTokenAndAddress(
                _tokenId,
                _user
            );
            /** @dev Check if is really delegated, if not mg voting power for this tokenId is 0. */
            if (_toIndex < 999) {
                uint256 _tokenBalance = balanceOfMgCvg(_tokenId);
                _totalMetaGovernance += (_tokenBalance * _toPercentage) / MAX_PERCENTAGE;
            }

            unchecked {
                ++i;
            }
        }
        ...
❌    return _totalMetaGovernance; //@audit total voting power for the tokenID which will be inflated by adding the same tokenID multiple times
    }

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionDelegate.sol#L330

PoC

Add in balance-delegation.spec.ts:

    it("Manage tokenIds for user10 with dupes", async () => {
        let tokenIds = {owneds: [], mgDelegateds: [1, 1], veDelegateds: []};
        await lockingPositionDelegate.connect(user10).manageOwnedAndDelegated(tokenIds);
    });

    it("Checks mgCVG balances of user10 (delegatee)", async () => {
        const tokenBalance = await lockingPositionService.balanceOfMgCvg(1);

        // USER 10
        const delegatedPercentage = 70n;

        //@audit: The voting power is multiplied by 2 due to the duplicate
        const exploit_multiplier = 2n;
        const expectedVotingPower = (exploit_multiplier * tokenBalance * delegatedPercentage) / 100n;
        const votingPower = await lockingPositionService.mgCvgVotingPowerPerAddress(user10);

        // take solidity rounding down into account
        expect(votingPower).to.be.approximately(expectedVotingPower, 1);
    });

Tool used

Recommendation

Ensure the array of token IDs is sorted and contains no duplicates. This can be achieved by verifying that each tokenId in the array is strictly greater than the previous one, which ensures uniqueness without additional data structures.
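A minimal sketch of that strictly-ascending check (hypothetical function and error names; the actual fix would live in Solidity inside manageOwnedAndDelegated):

```typescript
// Throws unless the array is strictly increasing, which implies it is
// sorted and duplicate-free -- O(n), no extra data structures.
function requireSortedNoDuplicates(tokenIds: number[]): void {
  for (let i = 1; i < tokenIds.length; i++) {
    if (tokenIds[i] <= tokenIds[i - 1]) {
      throw new Error("TOKEN_IDS_NOT_SORTED_OR_DUPLICATED");
    }
  }
}

requireSortedNoDuplicates([1, 2, 5]); // ok
try {
  requireSortedNoDuplicates([1, 1]);  // duplicate -> throws
} catch (e) {
  console.log((e as Error).message);
}
```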

Discussion

shalbe-cvg

Hello,

Thanks a lot for your attention.

After an in-depth review, we have to consider your issue as Confirmed. We will add a check on the values contained in the 3 arrays to ensure duplicates are taken away before starting the process.

Regards, Convergence Team

walk-on-me

Hello dear auditor,

We performed the correction of this issue.

We used the trick you give us to check the duplicates in the arrays of token ID.

You can find the correction here :

https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457545377 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457546051 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457546527

IAm0x52

Fix looks good. Arrays that have duplicates or that aren't ordered will cause the function to revert

Issue H-3: Tokens that are both bribes and StakeDao gauge rewards will cause loss of funds

Source: #182

Found by

0x52, GimelSec, detectiveking, lemonmon

Summary

When SdtStakingPositionService pulls rewards and bribes from the buffer, the buffer returns a list of tokens and amounts owed. This list is used to set the rewards eligible for distribution. Since this list is never checked for duplicate tokens, a shared bribe and reward token will show up twice in the list. The issue is that _sdtRewardsByCycle is set rather than incremented, so the second occurrence of the token overwrites the first and breaks accounting. The amount of the gauge reward token that is overwritten is lost forever.

Vulnerability Detail

At L559 of SdtStakingPositionService, it receives a list of tokens and amounts from the buffer.

SdtBuffer.sol#L90-L168

    ICommonStruct.TokenAmount[] memory bribeTokens = _sdtBlackHole.pullSdStakingBribes(
        processor,
        _processorRewardsPercentage
    );

    uint256 rewardAmount = _gaugeAsset.reward_count();

    ICommonStruct.TokenAmount[] memory tokenAmounts = new ICommonStruct.TokenAmount[](
        rewardAmount + bribeTokens.length
    );

    uint256 counter;
    address _processor = processor;
    for (uint256 j; j < rewardAmount; ) {
        IERC20 token = _gaugeAsset.reward_tokens(j);
        uint256 balance = token.balanceOf(address(this));
        if (balance != 0) {
            uint256 fullBalance = balance;

            ...

            token.transfer(sdtRewardsReceiver, balance);

          **@audit token and amount added from reward_tokens pulled directly from gauge**

            tokenAmounts[counter++] = ICommonStruct.TokenAmount({token: token, amount: balance});
        }

        ...

    }

    for (uint256 j; j < bribeTokens.length; ) {
        IERC20 token = bribeTokens[j].token;
        uint256 amount = bribeTokens[j].amount;

      **@audit token and amount added directly with no check for duplicate token**

        if (amount != 0) {
            tokenAmounts[counter++] = ICommonStruct.TokenAmount({token: token, amount: amount});

        ...

    }

SdtBuffer#pullRewards returns a concatenated array of all bribe and reward tokens. There are no controls in place to remove duplicates from this list, so tokens that are both bribes and rewards will be duplicated.

SdtStakingPositionService.sol#L561-L577

    for (uint256 i; i < _rewardAssets.length; ) {
        IERC20 _token = _rewardAssets[i].token;
        uint256 erc20Id = _tokenToId[_token];
        if (erc20Id == 0) {
            uint256 _numberOfSdtRewards = ++numberOfSdtRewards;
            _tokenToId[_token] = _numberOfSdtRewards;
            erc20Id = _numberOfSdtRewards;
        }

      **@audit overwrites and doesn't increment causing duplicates to be lost**            

        _sdtRewardsByCycle[_cvgStakingCycle][erc20Id] = ICommonStruct.TokenAmount({
            token: _token,
            amount: _rewardAssets[i].amount
        });
        unchecked {
            ++i;
        }
    }

When storing this list of rewards, the values from the returned array overwrite _sdtRewardsByCycle. This is where the problem arises: duplicates cause the second entry to overwrite the first. Since the first instance is overwritten, all funds in the first occurrence are lost permanently.

Impact

Tokens that are both bribes and gauge rewards will cause funds to be lost forever.

Code Snippet

SdtStakingPositionService.sol#L550-L582

Tool used

Manual Review

Recommendation

Either SdtBuffer or SdtStakingPositionService should be updated to combine duplicate token entries and prevent overwriting.
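The fix later confirmed in the discussion ("uses a sum instead of a set") amounts to accumulating per-token amounts before writing them. A minimal sketch of that aggregation (hypothetical names; the real code writes to _sdtRewardsByCycle in Solidity):

```typescript
interface TokenAmount { token: string; amount: number; }

// Merge reward and bribe entries, summing amounts for duplicate tokens
// instead of letting a later entry overwrite an earlier one.
function mergeRewards(entries: TokenAmount[]): Map<string, number> {
  const byToken = new Map<string, number>();
  for (const { token, amount } of entries) {
    byToken.set(token, (byToken.get(token) ?? 0) + amount);
  }
  return byToken;
}

// CRV appears both as a gauge reward (100) and as a bribe (40):
const merged = mergeRewards([
  { token: "CRV", amount: 100 }, // gauge reward
  { token: "SDT", amount: 50 },  // gauge reward
  { token: "CRV", amount: 40 },  // bribe
]);
console.log(merged.get("CRV")); // 140 -- overwrite semantics would report 40
```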

Discussion

0xR3vert

Hello,

Thanks a lot for your attention.

Absolutely, if we kill a gauge or change a type weight during the distribution, it would distribute wrong amounts, even though we're not planning to do that. We can make sure it doesn't happen by doing what you said: locking those functions to avoid any problems.

Therefore, in conclusion, we must consider your issue as a valid.

Regards, Convergence Team

walk-on-me

This issue has been solved here :

https://github.com/Cvg-Finance/sherlock-cvg/pull/4

Follow the comment : https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457470558

IAm0x52

Fix looks good. Now uses a sum instead of a set

Issue M-1: Division by Zero in CvgRewards::_distributeCvgRewards leads to locked funds

Source: #131

Found by

cergyk

Summary

The bug occurs when CvgRewards::_setTotalWeight sets totalWeightLocked to zero, leading to a division-by-zero error in CvgRewards::_distributeCvgRewards and blocking cycle increments. The blocking results in all locked CVG becoming permanently irretrievable.

Vulnerability Detail

The function _distributeCvgRewards of CvgRewards.sol is designed to calculate and distribute CVG rewards among staking contracts. It calculates the cvgDistributed for each gauge based on its weight and the total staking inflation. However, if the totalWeightLocked remains at zero (due to some gauges that are available but no user has voted for any gauge), the code attempts to divide by zero.

The DoS of _distributeCvgRewards prevents the cycle from advancing to the next state, State.CONTROL_TOWER_SYNC, thus forever locking users' CVG tokens.

Impact

Loss of users’ CVG tokens due to DoS of _distributeCvgRewards blocking the state.

Code Snippet

    function _setTotalWeight() internal {
        ...
❌      totalWeightLocked += _gaugeController.get_gauge_weight_sum(_getGaugeChunk(_cursor, _endChunk)); //@audit `totalWeightLocked` can be set to 0 if no gauge has received any vote
        ...
    }
    function _distributeCvgRewards() internal {
        ...
        uint256 _totalWeight = totalWeightLocked;
        ...
        for (uint256 i; i < gaugeWeights.length; ) {
            /// @dev compute the amount of CVG to distribute in the gauge
❌          cvgDistributed = (stakingInflation * gaugeWeights[i]) / _totalWeight; //@audit will revert if `_totalWeight` is zero
        ...
    /**
     * @notice Unlock CVG tokens under the NFT Locking Position: burn the NFT and transfer the CVG back to the user. Rewards from YsDistributor must be claimed beforehand or they will be lost.
     * @dev The locking time must be over
     * @param tokenId to burn
     */
    function burnPosition(uint256 tokenId) external {
        ...
    ❌  require(_cvgControlTower.cvgCycle() > lastEndCycle, "LOCKED"); //@audit funds are locked if current `cycle <= lastEndCycle`
        ...
    }

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/CvgRewards.sol#L321

Tool used

Recommendation

If the _totalWeight is zero, just consider the cvg rewards to be zero for that cycle, and continue with other logic:

-cvgDistributed = (stakingInflation * gaugeWeights[i]) / _totalWeight; 
+cvgDistributed = _totalWeight == 0 ? 0 : (stakingInflation * gaugeWeights[i]) / _totalWeight;
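The patched computation can be sanity-checked off-chain; this is an illustrative model only (BigInt mirrors Solidity's truncating integer division):

```typescript
// Mirrors the recommended line: zero total weight yields zero rewards
// instead of a division-by-zero revert, so the cycle can still advance.
function cvgDistributed(
  stakingInflation: bigint,
  gaugeWeight: bigint,
  totalWeight: bigint
): bigint {
  return totalWeight === 0n ? 0n : (stakingInflation * gaugeWeight) / totalWeight;
}

console.log(cvgDistributed(1000n, 30n, 100n)); // 300n: normal distribution
console.log(cvgDistributed(1000n, 0n, 0n));    // 0n: no votes, no revert
```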

Discussion

0xR3vert

Hello,

Thanks a lot for your attention.

We are aware of the potential for a division by zero if there are no votes at all in one of our gauges. However, this scenario is unlikely to occur in reality because there will always be votes deployed (by us and/or others) in the gauges. Nevertheless, your point is valid, and we will address it to be prepared for this case.

Therefore, in conclusion, we must acknowledge your issue as correct, even though we are already aware of it.

Regards, Convergence Team

nevillehuang

Since the DoS is not permanent as long as the protocol/users themselves vote for the gauge, I think this is a low severity issue.

CergyK

Escalate

Escalating based on latest comment:

Since the DoS is not permanent as long as the protocol/users themselves vote for the gauge, I think this is a low severity issue.

If we reach this case (totalWeights == 0), the DoS is permanent. There would be no other way to reset this variable, and all user funds would be locked permanently.

It is acknowledged that there is a low chance of this happening, but due to the severe impact and acknowledged validity this should be a medium

sherlock-admin2


You've created a valid escalation!

To remove the escalation from consideration: Delete your comment.

You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.

nevillehuang

@CergyK As mentioned by the sponsor, they will always ensure there are votes present to prevent this scenario, so I can see this as an "admin error" if the scenario is allowed to happen; but I also see your point, given this was not made known to Watsons. If totalWeight goes to zero, it will indeed be irrecoverable.

Unlikely but possible, so can be valid based on this sherlock rule

Causes a loss of funds but requires certain external conditions or specific states.

Czar102

I believe this impact warrants medium severity. Planning to accept the escalation.

Czar102

Result: Medium Unique

sherlock-admin2

Escalations have been resolved successfully!

Escalation status:

walk-on-me

Hello dear auditor,

we fixed this issue.

You can find how on the following link :

https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457528119

IAm0x52

Fix looks good. Now utilizes a ternary operator to prevent division by zero

Issue M-2: LockPositionService::increaseLockTime Incorrect Calculation Extends Lock Duration Beyond Intended Period

Source: #136

Found by

bughuntoor, cergyk, jah, rvierdiiev

Summary

LockPositionService::increaseLockTime uses block.timestamp for locking tokens, resulting in potential over-extension of the lock period. Specifically, if a user locks tokens near the end of a cycle, the lock duration might extend an additional week more than intended. For instance, locking for one cycle at the end of cycle N could result in an unlock time at the end of cycle N+2, instead of at the start of cycle N+2.

This means that all the while specifying that their $CVG should be locked for the next cycle, the $CVG stays locked for two cycles.

Vulnerability Detail

The function increaseLockTime inaccurately calculates the lock duration by using block.timestamp, which is not aligned to the start of cycles. This discrepancy leads to a longer-than-expected lock period, especially when a lock is initiated near the end of a cycle. This misalignment means users unintentionally extend their lock period, affecting their asset management strategies.

Scenario:

  • Alice decides to lock her tokens for one cycle near the end of cycle N.
  • The lock duration calculation extends the lock to the end of cycle N+2, rather than starting the unlock process at the start of cycle N+2.
  • Alice's tokens are locked for an additional week beyond her expectation.

Impact

Users may have their $CVG locked for a week more than expected.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L421

Tool used

Recommendation

Align the locking mechanism to multiples of a week and use block.timestamp - (block.timestamp % WEEK) + lockDuration for the lock time calculation, so the lock start is rounded down to the current week boundary. This adjustment ensures that the lock duration is consistent with user expectations and cycle durations.
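The alignment can be sanity-checked with a small numeric sketch (WEEK = 604800 seconds; alignedUnlockTime is a hypothetical helper, timestamps are illustrative):

```typescript
const WEEK = 604_800; // seconds in a week

// Round the current timestamp down to the start of its week before
// adding the lock duration, so the unlock lands on a week boundary.
function alignedUnlockTime(timestamp: number, lockDurationWeeks: number): number {
  const weekStart = timestamp - (timestamp % WEEK);
  return weekStart + lockDurationWeeks * WEEK;
}

// Locking for 1 cycle at the very end of a week unlocks at the next
// week boundary rather than a full week later:
const endOfWeek = 10 * WEEK - 1;
console.log(alignedUnlockTime(endOfWeek, 1)); // 10 * WEEK = 6048000
```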

Discussion

walk-on-me

Hello dear judge,

this issue has been solved.

In order to solve it, we decided that when a user comes for locking or increase it's lock, we'll not take anymore the one in the CvgControlTower. If an user decide to performs a lock action between the timestamp where the cycle is updated and the CVG distribution ( by the CvgRewards ). He'll just lock for the next week, having no incidence on protocol.

Remarks: We are keeping the cvgCycle of the CvgControlTower for:

  • Burning the NFT
  • Claiming the rewards of the Ys

You can check how by following the comments done here:

https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457499501 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457504596 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457504988 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457505267 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457508720 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457510511

IAm0x52

Fix looks good. Cycles are now aligned based on computed week rather than cvgControlTower

Issue M-3: Delegation Limitation in Voting Power Management

Source: #142

Found by

lil.eth, pontifex, ydlee

Summary

The mgCVG voting power delegation system is constrained by two hard limits: the number of tokens delegated to one user (maxTokenIdsDelegated = 25) and the number of delegatees for one token (maxMgDelegatees = 5). Once a limit is reached for a token, the token owner cannot modify the delegation percentage of an existing delegated user. This inflexibility can prevent efficient and dynamic management of delegated voting power.

Vulnerability Detail

Observe these lines :

function delegateMgCvg(uint256 _tokenId, address _to, uint96 _percentage) external onlyTokenOwner(_tokenId) {
    require(_percentage <= 100, "INVALID_PERCENTAGE");

    uint256 _delegateesLength = delegatedMgCvg[_tokenId].length;
    require(_delegateesLength < maxMgDelegatees, "TOO_MUCH_DELEGATEES");

    uint256 tokenIdsDelegated = mgCvgDelegatees[_to].length;
    require(tokenIdsDelegated < maxTokenIdsDelegated, "TOO_MUCH_MG_TOKEN_ID_DELEGATED");

If either maxMgDelegatees or maxTokenIdsDelegated is reached, delegation is no longer possible. The problem is that this function is used both to delegate and to update the percentage of a delegation or remove a delegation; in cases where we have already delegated to the maximum number of users (maxMgDelegatees), OR the user we delegated to has reached the maximum number of tokens that can be delegated to them (maxTokenIdsDelegated), an update or removal of the delegation is no longer possible.

6 scenarios are possible :

  1. maxTokenIdsDelegated is set to 5, Alice is the third to delegate her voting power to Bob and choose to delegate 10% to him. Bob gets 2 other people delegating their tokens to him, Alice wants to increase the power delegated to Bob to 50% but she cannot due to Bob reaching maxTokenIdsDelegated
  2. maxTokenIdsDelegated is set to 25, Alice is the 10th to delegate her voting power to Bob and chooses to delegate 10%. The DAO decreases maxTokenIdsDelegated to 3; Alice wants to increase the power delegated to Bob to 50%, but she cannot due to this
  3. maxTokenIdsDelegated is set to 5, Alice is the third to delegate her voting power to Bob and choose to delegate 90%. Bob gets 2 other people delegating their tokens to him, Alice wants to only remove the power delegated to Bob using this function, but she cannot due to this
  4. maxMgDelegatees is set to 3, Alice delegates her voting power to Bob,Charly and Donald by 20% each, Alice reaches maxMgDelegatees and she cannot update her voting power for any of Bob,Charly or Donald
  5. maxMgDelegatees is set to 5, Alice delegates her voting power to Bob, Charly and Donald by 20% each, and the DAO decreases maxMgDelegatees to 3. Alice cannot update or remove her voting power delegated to any of Bob, Charly and Donald
  6. maxMgDelegatees is set to 3, Alice delegates her voting power to Bob,Charly and Donald by 20% each, Alice wants to only remove her delegation to Bob but she reached maxMgDelegatees so she cannot only remove her delegation to Bob

A function is provided to remove all users we delegated to, but it cannot be used as a solution to this problem for 2 reasons:

  • It is clearly not intended that updating a voting power percentage requires first removing all our delegations, because delegateMgCvg() is explicitly defined to delegate OR remove one delegation OR update a delegation percentage, yet in some cases this is impossible, which is not acceptable
  • If Alice wants to update her percentage delegated to Bob, she would have to remove all her delegatees and risk that someone faster than her delegates to Bob first, making Bob reach maxTokenIdsDelegated and rendering it impossible for Alice to re-delegate to Bob
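One possible direction is to apply the two caps only when a call creates a new delegation, letting updates and removals through. A hypothetical in-memory model (names and logic are illustrative, not the protocol's Solidity code):

```typescript
// Hypothetical model of the delegation limit checks.
const maxMgDelegatees = 5;
const maxTokenIdsDelegated = 25;

const delegateesOf = new Map<number, Set<string>>();      // tokenId -> delegatees
const tokensDelegatedTo = new Map<string, Set<number>>(); // delegatee -> tokenIds

function delegate(tokenId: number, to: string, percentage: number): void {
  const delegatees = delegateesOf.get(tokenId) ?? new Set<string>();
  const tokens = tokensDelegatedTo.get(to) ?? new Set<number>();
  const isUpdate = delegatees.has(to);
  // Only a *new* delegation can hit the caps; updates and removals pass.
  if (!isUpdate) {
    if (delegatees.size >= maxMgDelegatees) throw new Error("TOO_MUCH_DELEGATEES");
    if (tokens.size >= maxTokenIdsDelegated) throw new Error("TOO_MUCH_MG_TOKEN_ID_DELEGATED");
  }
  if (percentage === 0) {
    // A removal frees the slot on both sides.
    delegatees.delete(to);
    tokens.delete(tokenId);
  } else {
    delegatees.add(to);
    tokens.add(tokenId);
  }
  delegateesOf.set(tokenId, delegatees);
  tokensDelegatedTo.set(to, tokens);
}

delegate(1, "bob", 10);
delegate(1, "bob", 50); // update succeeds even if bob were at the cap
console.log(delegateesOf.get(1)?.size); // 1
```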

POC

You can add it to test/ut/delegation/balance-delegation.spec.ts :

it("maxTokenIdsDelegated is reached => Cannot update percentage of delegate", async function () {
        (await lockingPositionDelegate.maxTokenIdsDelegated()).should.be.equal(25);
        await lockingPositionDelegate.connect(treasuryDao).setMaxTokenIdsDelegated(3);
        (await lockingPositionDelegate.maxTokenIdsDelegated()).should.be.equal(3);

        await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 20);
        await lockingPositionDelegate.connect(user2).delegateMgCvg(2, user10, 30);
        await lockingPositionDelegate.connect(user3).delegateMgCvg(3, user10, 30);
        
        const txFail = lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 40);
        await expect(txFail).to.be.revertedWith("TOO_MUCH_MG_TOKEN_ID_DELEGATED");
    });
    it("maxTokenIdsDelegated IS DECREASED => PERCENTAGE UPDATE IS NO LONGER POSSIBLE", async function () {
        await lockingPositionDelegate.connect(treasuryDao).setMaxTokenIdsDelegated(25);
        (await lockingPositionDelegate.maxTokenIdsDelegated()).should.be.equal(25);

        await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 20);
        await lockingPositionDelegate.connect(user2).delegateMgCvg(2, user10, 30);
        await lockingPositionDelegate.connect(user3).delegateMgCvg(3, user10, 30);

        await lockingPositionDelegate.connect(treasuryDao).setMaxTokenIdsDelegated(3);
        (await lockingPositionDelegate.maxTokenIdsDelegated()).should.be.equal(3);        

        const txFail = lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 40);
        await expect(txFail).to.be.revertedWith("TOO_MUCH_MG_TOKEN_ID_DELEGATED");
        await lockingPositionDelegate.connect(treasuryDao).setMaxTokenIdsDelegated(25);
        (await lockingPositionDelegate.maxTokenIdsDelegated()).should.be.equal(25);
    });
    it("maxMgDelegatees : TRY TO UPDATE PERCENTAGE DELEGATED TO A USER IF WE ALREADY REACH maxMgDelegatees", async function () {
        await lockingPositionDelegate.connect(treasuryDao).setMaxMgDelegatees(3);
        (await lockingPositionDelegate.maxMgDelegatees()).should.be.equal(3);

        await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 20);
        await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user2, 30);
        await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user3, 30);

        const txFail = lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 40);
        await expect(txFail).to.be.revertedWith("TOO_MUCH_DELEGATEES");
    });
    it("maxMgDelegatees : maxMgDelegatees IS DECREASED => PERCENTAGE UPDATE IS NO LONGER POSSIBLE", async function () {
        await lockingPositionDelegate.connect(treasuryDao).setMaxMgDelegatees(5);
        (await lockingPositionDelegate.maxMgDelegatees()).should.be.equal(5);

        await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user10, 20);
        await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user2, 30);
        await lockingPositionDelegate.connect(user1).delegateMgCvg(1, user3, 10);

        await lockingPositionDelegate.connect(treasuryDao).setMaxMgDelegatees(2);
        (await lockingPositionDelegate.maxMgDelegatees()).should.be.equal(2);

        const txFail2 = lockingPositionDelegate.connect(user1).delegateMgCvg(1, user2, 50);
        await expect(txFail2).to.be.revertedWith("TOO_MUCH_DELEGATEES");
    });

Impact

In some cases it is impossible to update a delegated percentage or to remove a single delegation, forcing users to remove all of their delegations at once. They then run the risk that someone front-runs them and delegates to their former delegatees first, pushing those delegatees to the delegation cap and making it impossible for them to re-delegate.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionDelegate.sol#L278

Tool used

Manual Review

Recommendation

Separate the functions for new delegations and updates: implement logic that differentiates between adding a new delegatee and updating an existing delegation, so that existing delegations can still be updated even when the maximum number of delegatees has been reached.

Discussion

shalbe-cvg

Hello,

Thanks a lot for your attention.

After an in-depth review, we have to consider your issue as invalid. We have developed a function allowing users to clean their mgDelegatees and veDelegatees, so there is no need to split this delegation function into two (add / update).

Regards, Convergence Team

nevillehuang

Hi @0xR3vert @shalbe-cvg @walk-on-me,

Could you point me to the function you are referring to that was present at the time of the audit?

shalbe-cvg

Hi @0xR3vert @shalbe-cvg @walk-on-me,

Could you point me to the function you are referring to that was present at the time of the audit?

Hello, this is the function cleanDelegatees inside LockingPositionDelegate contract

nevillehuang

Agree with sponsor, since cleanDelegatees() and removeTokenIdDelegated() here allow removal of delegatees one-by-one, this seems to be a non-issue.

ChechetkinVV

Escalate

Agree with sponsor, since cleanDelegatees() and removeTokenIdDelegated() here allow removal of delegatees one-by-one, this seems to be a non-issue.

The cleanDelegatees function referred to by the sponsor allows the owner of the token to completely remove the delegation from ALL mgDelegatees, but it cannot be used to remove the delegation from just ONE delegatee. This is obvious from the _cleanMgDelegatees function, which is called from cleanDelegatees.

The only way for the owner to remove the delegation from ONE delegatee is the delegateMgCvg function. But this becomes impossible if the delegatee from whom the owner is trying to remove the delegation has reached the maximum number of delegations.

Perhaps recommendations from #202 and #206 reports will help to better understand this problem.

This problem is described in this report and in #202, #206 reports. Other reports describe different problems. They are hardly duplicates of this issue.

sherlock-admin2


You've created a valid escalation!

To remove the escalation from consideration: Delete your comment.

You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.

nevillehuang

@shalbe-cvg @walk-on-me @0xR3vert @ChechetkinVV

I think I agree with the watsons in the sense that it does not seem intended for users to remove all delegations, once the max delegation number is reached, just to adjust voting power or remove a single delegatee, for both mgCVG and veCVG.

I think what the watsons are pointing out is that this opens up a potential front-running scenario in which delegatees themselves can permanently keep the maximum delegated mgCVG or veCVG, permanently DoSing the original delegator from updating or removing the delegation.

Czar102

From my understanding, this issue presents an issue of breaking core functionality (despite the contracts working according to the design, core intended functionality is impacted).

I believe this is sufficient to warrant medium severity for the issue.

@nevillehuang which issues should and which shouldn't be duplicated with this one? Do you agree with the escalation?

nevillehuang

@Czar102, As I understand, there are two current impacts

  1. Users cannot remove delegatees one by one, but are forced to remove all delegatees once maxMgDelegatees or maxTokenIdsDelegated is reached
  2. Front-running to force a DoS on a delegator removing a delegation, by pushing the delegatee to the max delegation check

This is what I think are duplicates:

  1. 142 (this issue mentions both impacts), 202, 206
  2. 40, 51, 142 (again this issue mentions both impacts), 169

Both impacts arise from the maxMgDelegatees/maxTokenIdsDelegated check, which is why I originally grouped them all together. While I initially disagreed, on further review I agree with the escalation, since this is not the contract behaviour intended by the protocol. For impact 2, based on your previous comments here, it seems to be low severity.

Czar102

Thank you for the summary @nevillehuang. I'm planning to consider all other issues (apart from #142, #202, #206) low, hence they will not be considered duplicates anymore (see docs). The three issues describing a Medium severity impact will be considered duplicates and be rewarded.

Czar102

Result: Medium Has duplicates

sherlock-admin2

Escalations have been resolved successfully!

Escalation status:

Issue M-4: cvgControlTower and veCVG lock timing will be different and lead to yield loss scenarios

Source: #178

Found by

0x52, 0xAlix2, bughuntoor, cergyk, hash

Summary

When creating a locked CVG position, two more or less independent locks are created: one in lockingPositionService and one in veCVG. LockingPositionService operates on cycles (which are not of fixed length) while veCVG always rounds down to the nearest absolute week. The disparity between these two accounting mechanisms leads to conflicting scenarios in which the lock on LockingPositionService has expired while the lock on veCVG hasn't (and vice versa). Additionally, tokens with expired locks on LockingPositionService cannot be extended. The result is a token that is expired but can't be withdrawn: it must wait to be unstaked and then restaked, causing loss of user yield and voting power while the token is DoS'd.

Vulnerability Detail

Cycles use block.timestamp when setting lastUpdateTime on the new cycle in L345, and rolling the cycle forward requires that at least 7 days have passed since this update (L205). The result is that a cycle can never be exactly 7 days long, and the start/end of each cycle constantly drifts.

Meanwhile when veCVG is calculating the unlock time it uses the week rounded down as shown in L328.

We can demonstrate with an example:

Assume the first CVG cycle is started at block.timestamp == 1,000,000. This means our first cycle ends at 1,604,800. A user deposits for a single cycle at 1,400,000. A lock is created for cycle 2 which will unlock at 2,209,600.

The lock on veCVG does not match this. Instead, its calculation yields:

(1,400,000 + 2 * 604,800) / 604,800 = 4

4 * 604,800 = 2,419,200

As seen, these are mismatched, and the token won't be withdrawable until long after it should be, due to the check in veCVG L404.

This DOS will prevent the expired lock from being unstaked and restaked which causes loss of yield.

The opposite issue can also occur. For each cycle that runs slightly longer than expected, the veCVG lock falls further and further behind the cycle lock on lockingPositionService. This can also cause a DoS and yield loss, because it can prevent users from extending still-valid locks due to the checks in L367 of veCVG.

An example of this:

Assume a user locks for 96 weeks (58,060,800 seconds). Over that period, an average of 2 hours passes between the end of each cycle and the moment the cycle is rolled over. This effectively extends the cycle time from 604,800 to 612,000 seconds (+7,200). Now, after 95 cycles, the user attempts to increase their lock duration. veCVG and lockingPositionService are completely out of sync:

After 95 cycles the current time would be:

612,000 * 95 = 58,140,000

Whereas the veCVG lock ended at:

604,800 * 96 = 58,060,800

According to veCVG the position was unlocked at 58,060,800 and therefore increasing the lock time will revert due to L367

The result is another DoS causing loss of yield to the user. During this time the user is also excluded from taking part in any votes, since their veCVG lock is expired.
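The arithmetic in the first example can be checked with plain integer math (a sketch assuming WEEK = 604,800 seconds and the rounding behaviour described above; function names are illustrative):

```python
WEEK = 604_800

# LockingPositionService: the lock ends when the target cycle ends,
# measured from whenever the first cycle actually started.
def service_unlock(first_cycle_start: int, lock_cycles: int) -> int:
    return first_cycle_start + lock_cycles * WEEK

# veCVG: the unlock time is the lock end rounded DOWN to an absolute
# week boundary (unix_time // WEEK * WEEK).
def ve_cvg_unlock(now: int, lock_cycles: int) -> int:
    return (now + lock_cycles * WEEK) // WEEK * WEEK

service_end = service_unlock(1_000_000, 2)  # 2,209,600
ve_end = ve_cvg_unlock(1_400_000, 2)        # 2,419,200

# veCVG keeps the position locked for ~2.4 extra days after the
# service-side lock has already expired.
assert ve_end - service_end == 209_600
```

The mismatch exists because the two schedules share a period (one week) but not a phase; only anchoring both to the same absolute week boundaries removes it.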

Impact

An unlock DoS that causes loss of yield to the user

Code Snippet

CvgRewards.sol#L341-L349

Tool used

Manual Review

Recommendation

I would recommend against using block.timestamp for CVG cycles, instead using an absolute measurement like veCVG uses.

Discussion

walk-on-me

Hello dear judge,

this issue has been solved.

To solve it, we decided that when a user locks or increases their lock, we no longer use the cycle from the CvgControlTower. If a user performs a lock action between the timestamp at which the cycle is updated and the CVG distribution (by CvgRewards), they will simply lock for the next week, with no impact on the protocol.

Remarks : We are keeping the cvgCycle of the CvgControlTower for :

  • Burning the NFT
  • Claim the rewards of the Ys

You can check how by following the comments done here :

https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457499501 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457504596 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457504988 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457505267 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457508720 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457510511

IAm0x52

Fix looks good. All cycles are now aligned by a computed weekly timestamp instead of using cvgControlTower

Issue M-5: SdtRewardReceiver#_withdrawRewards has incorrect slippage protection and withdraws can be sandwiched

Source: #180

Found by

0x52, 0xkaden, Bauer, CL001, FarmerRick, caventa, cducrest-brainbot, detectiveking, hash, lemonmon, r0ck3tz

Summary

The _min_dy parameter of poolCvgSDT.exchange is set via the poolCvgSDT.get_dy method. The problem is that get_dy quotes the output from the pool's state at execution time. This means that no matter the state of the pool, this slippage check can never fail.

Vulnerability Detail

SdtRewardReceiver.sol#L229-L236

        if (isMint) {
            /// @dev Mint cvgSdt 1:1 via CvgToke contract
            cvgSdt.mint(receiver, rewardAmount);
        } else {
            ICrvPoolPlain _poolCvgSDT = poolCvgSDT;
            /// @dev Only swap if the returned amount in CvgSdt is gretear than the amount rewarded in SDT
            _poolCvgSDT.exchange(0, 1, rewardAmount, _poolCvgSDT.get_dy(0, 1, rewardAmount), receiver);
        }

When swapping from SDT to cvgSDT, get_dy is used to set _min_dy inside exchange. The issue is that get_dy returns the CURRENT amount that would be received from the swap, as shown below:

@view
@external
def get_dy(i: int128, j: int128, dx: uint256) -> uint256:
    """
    @notice Calculate the current output dy given input dx
    @dev Index values can be found via the `coins` public getter method
    @param i Index value for the coin to send
    @param j Index valie of the coin to recieve
    @param dx Amount of `i` being exchanged
    @return Amount of `j` predicted
    """
    rates: uint256[N_COINS] = self.rate_multipliers
    xp: uint256[N_COINS] = self._xp_mem(rates, self.balances)

    x: uint256 = xp[i] + (dx * rates[i] / PRECISION)
    y: uint256 = self.get_y(i, j, x, xp, 0, 0)
    dy: uint256 = xp[j] - y - 1
    fee: uint256 = self.fee * dy / FEE_DENOMINATOR
    return (dy - fee) * PRECISION / rates[j]

The return value is EXACTLY the result of a regular swap, which is the problem: the exchange call can never revert. Assume the user is swapping because the current exchange ratio is 1:1.5, and that their withdrawal is sandwich-attacked, changing the ratio to 1:0.5, much lower than expected. When get_dy is called it simulates the swap against the manipulated state and returns a quote at the 1:0.5 ratio. This doesn't protect the user at all, and their swap executes at the poor price.
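The pattern can be illustrated with a toy pool (a Python sketch using a simple constant-product quote rather than the real StableSwap math; the point is the call pattern, not the curve shape):

```python
# Toy two-coin pool quote: output for dx computed from CURRENT reserves.
def get_dy(x: int, y: int, dx: int) -> int:
    return y - (x * y) // (x + dx)

def exchange(x: int, y: int, dx: int, min_dy: int) -> int:
    dy = get_dy(x, y, dx)
    if dy < min_dy:
        raise ValueError("slippage")
    return dy

dx = 1_000
fair_out = get_dy(1_000_000, 1_500_000, dx)  # quote at the fair 1:1.5 price

# Sandwich: the attacker skews the reserves BEFORE the victim's tx executes.
x, y = 3_000_000, 500_000
# min_dy is computed from the same already-manipulated state, so the
# check is a tautology and the swap settles at the bad price.
got = exchange(x, y, dx, get_dy(x, y, dx))
assert got < fair_out  # executed, not reverted, despite the worse price
```

A min_dy supplied by the caller (computed off-chain before the transaction) is what actually bounds the damage, which is what the recommendation proposes.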

Impact

SDT rewards can be sandwiched, and the entire balance can be lost

Code Snippet

SdtRewardReceiver.sol#L213-L245

Tool used

Manual Review

Recommendation

Allow the user to set _min_dy directly so they can guarantee they get the amount they want

Discussion

shalbe-cvg

Hello,

Thanks a lot for your attention.

After an in-depth review, we have to consider your issue as confirmed. Not only can users get sandwiched, but in most cases this exchange at the pool level would rarely succeed, since get_dy returns the exact amount the user could get. We will add a slippage parameter that users will set.

Regards, Convergence Team

walk-on-me

This issue has been solved here :

https://github.com/Cvg-Finance/sherlock-cvg/pull/4

Follow the comment : https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457486906 https://github.com/Cvg-Finance/sherlock-cvg/pull/4#discussion_r1457489632

IAm0x52

Fix looks good. User can now specify a min out parameter

Issue M-6: Division difference can result in a revert when claiming treasury yield and excess rewards to some users

Source: #190

Found by

cergyk, hash, vvv

Summary

Different orderings of operations are used to compute ysTotal in different places. This causes the tracked totalShares to be less than the claimable amount of shares.

Vulnerability Detail

ysTotal is calculated differently when adding to totalSuppliesTracking and when computing balanceOfYsCvgAt. When adding to totalSuppliesTracking, ysTotal is calculated as follows:

        uint256 cvgLockAmount = (amount * ysPercentage) / MAX_PERCENTAGE;
        uint256 ysTotal = (lockDuration * cvgLockAmount) / MAX_LOCK;

In balanceOfYsCvgAt, ysTotal is calculated as follows

        uint256 ysTotal = (((endCycle - startCycle) * amount * ysPercentage) / MAX_PERCENTAGE) / MAX_LOCK;

This difference allows the balanceOfYsCvgAt to be greater than what is added to totalSuppliesTracking

POC

  startCycle 357
  endCycle 420
  lockDuration 63
  amount 2
  ysPercentage 80

Calculation in totalSuppliesTracking gives:

        uint256 cvgLockAmount = (2 * 80) / 100; == 1
        uint256 ysTotal = (63 * 1) / 96; == 0

Calculation in balanceOfYsCvgAt gives:

        uint256 ysTotal = ((63 * 2 * 80) / 100) / 96; == 10080 / 100 / 96 == 1
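The divergence comes purely from where the integer divisions happen, which a few lines of Python reproduce (a sketch with illustrative function names; the constants match the contract):

```python
MAX_PERCENTAGE = 100
MAX_LOCK = 96

# Ordering used when crediting totalSuppliesTracking: divide by
# MAX_PERCENTAGE first, then multiply, then divide by MAX_LOCK.
def ys_total_tracked(amount: int, ys_pct: int, lock_duration: int) -> int:
    cvg_lock_amount = amount * ys_pct // MAX_PERCENTAGE
    return lock_duration * cvg_lock_amount // MAX_LOCK

# Ordering used in balanceOfYsCvgAt: multiply everything first,
# then divide, which loses less to truncation.
def ys_total_claimable(amount: int, ys_pct: int, lock_duration: int) -> int:
    return (lock_duration * amount * ys_pct // MAX_PERCENTAGE) // MAX_LOCK

# The POC values: claimable shares exceed the tracked supply by 1.
assert ys_total_tracked(2, 80, 63) == 0
assert ys_total_claimable(2, 80, 63) == 1
```

Because truncation only discards value, multiplying before dividing never yields less than dividing first; making both paths use one ordering removes the gap.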

Example Scenario

Alice, Bob and Jake lock CVG for 1 TDE and obtain rounded-up balanceOfYsCvgAt values. A user aware of this issue can exploit it further by calling increaseLockAmount with small amounts, widening the total difference between the user's calculated balanceOfYsCvgAt and the amount accounted in totalSuppliesTracking. Bob and Jake claim their rewards at the end of the reward cycle. When Alice attempts to claim hers, the call reverts, since there are not enough rewards left to send.

Impact

This breaks the share accounting of the treasury rewards. Some users will get more than the intended rewards, while the last withdrawals will revert.

Code Snippet

totalSuppliesTracking calculation

In mintPosition https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L261-L263

In increaseLockAmount https://github.com/sherlock-audit/2023-11-convergence/blob/e894be3e36614a385cf409dc7e278d5b8f16d6f2/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L339-L345

In increaseLockTimeAndAmount https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L465-L470

_ysCvgCheckpoint https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L577-L584

balanceOfYsCvgAt calculation https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L673-L675

Tool used

Manual Review

Recommendation

Perform the same calculation in both places

+++                     uint256 _ysTotal = (_extension.endCycle - _extension.cycleId)* ((_extension.cvgLocked * _lockingPosition.ysPercentage) / MAX_PERCENTAGE) / MAX_LOCK;
---     uint256 ysTotal = (((endCycle - startCycle) * amount * ysPercentage) / MAX_PERCENTAGE) / MAX_LOCK;

Discussion

walk-on-me

Hello

Indeed, this is a real problem: it breaks the invariant by making the sum of all balanceOfYsCvg greater than totalSupply.

And so some positions will become not claimable on the YsDistributor.

We'll correct this by computing ysTotal & ysPartial the same way in balanceYs & ysCheckpoint.

Very nice finding, it'd break the claim for the last users to claim.

deadrosesxyz

Escalate The amounts are scaled up by 1e18. The rounding-down problem comes from dividing by MAX_PERCENTAGE, which equals 100. In the worst-case scenario (which only happens if a user deposits an amount not divisible by 100), up to 99 wei is rounded down. Not only is this insignificant, it is unlikely to happen, as it requires a deposit amount not divisible by 1e2. I believe the issue should be marked as low.

sherlock-admin2


You've created a valid escalation!

To remove the escalation from consideration: Delete your comment.

You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.

10xhash

  1. The rounding down is significant because it prevents the last claimers of a TDE cycle from obtaining their reward.
  2. An attacker can perform the attack with no assumptions about other users' behaviour.
  3. Classifying non-1e2 amounts as unlikely only holds for UI users entering round numbers; the CvgUtilities contract itself provides functionality to lock Cvg using swap and bond tokens, which shouldn't be limited to multiples of 1e2.

deadrosesxyz

In the worst-case scenario, only the last user will be unable to claim their rewards (even though I described above why this is highly unlikely). In the rare situation where it happens, it can be fixed by simply sending a few wei to the contract.

nevillehuang

Imo, just off point 1 alone, this warrants medium severity at the least. The fact that a donation is required to fix this means there is a bug, and is not intended functionality of the function.

Czar102

I agree that this is a borderline low/med. I don't see a reason to discriminate nonzero deposits mod 100. That said, I am siding with the escalation – the loss here is insufficient to consider it a material loss of funds (at no point in the lifetime of the codebase will it surpass $1), and a loss of protocol functionality isn't serious enough if simply sending some dust to the contract will resolve the issue.

Planning to accept the escalation and consider this a low severity issue.

CergyK

@Czar102 please consider report #132 which I submitted which allows to steal an arbitrary amount from the rewards under some conditions, which is a higher impact.

My issue shares the same root cause as this one, so I did not escalate for deduplication. However if you think that this issue should be low, maybe it would be more fair to make my issue unique since the impact is sufficient.

nevillehuang

#132 and this issue (#190) share the same impact; if this is invalid, #132 should be invalid as well. Namely, the following two impacts:

  1. Last user withdrawals can revert
  2. Some users will gain more rewards at the expense of others.

Both examples presented involve relatively low amounts, so I'm unsure of the exact impact.

Comparing this issue's attack path:

by using increaseLockAmount with small amount values by which the total difference b/w the user's calculated balanceOfYsCvgAt and the accounted amount in totalSuppliesTracking can be increased

and #132

-> Alice locks some small amount for lockDuration = 64 so that it increases totalSupply by exactly 1 -> Alice proceeds to lock X times using the values:

Comparing this issue impact

This breaks the share accounting of the treasury rewards. Some users will get more than the intended rewards, while the last withdrawals will revert.

and #132

Under specific circumstances (if the attacker is the only one to have allocated to YS during a TDE), an attacker is able to claim arbitrarily more rewards than is due to him, stealing rewards from other participants in the protocol

My opinion is both issues should remain valid medium severity issue based on impact highlighted in both issues.

CergyK

After some discussion with @nevillehuang, I agree that these issues should stay duplicated and valid high/medium for the following reasons:

  • The highest impact is the loss of an arbitrary amount of present/future rewards (see #132 for an explanation)
  • The necessary condition of a very low YS allocation is unlikely but not impossible, since YS is not central in Convergence (the YS allocation could be empty and the system would still work as expected)

deadrosesxyz

To summarize:

  • Almost certainly, under regular conditions, there will be no issue for any user whatsoever.
  • In some rare cases (see the original escalation) there could be a small over-distribution of rewards (a matter of a few wei). In the even rarer case where all users claim their rewards, the last unstaker will be unable to do so due to lack of funds. This is all the more unlikely because every time a user claims rewards the amount is rounded down (Solidity's built-in truncation), making the over-distribution smaller or non-existent. But even if all the conditions are met, the issue can be fixed by simply sending a few wei to the contract.
  • The highest impact described in #132 requires the total balance to be not merely very low but just a few wei, and to stay at a few wei for at least 12 weeks (until the TDE payout). It is absolutely unrealistic to have a few-wei total balance for 12 weeks.

Issue should remain Low severity

10xhash

I agree that the fix of sending minor amounts of every reward token won't cost the team any considerable financial loss. But fix aside, the impact of users being unable to withdraw their rewards under reasonable conditions is certainly a major one. If issues were judged by how easy they are to fix or prevent, many more issues would fall into this category. Wouldn't that make almost all functionality breaks in upgradable contracts low severity, since the fix is an upgrade?

Czar102

Due to the additional impact noted (thank you @CergyK) I think the loss can be sufficient to warrant a medium severity for this issue (loss of funds, but improbable assumptions are made).

Czar102

Result: Medium Has duplicates

sherlock-admin2

Escalations have been resolved successfully!

Escalation status:

walk-on-me

This issue has been solved here :

https://github.com/Cvg-Finance/sherlock-cvg/pull/4

Follow the comments :

IAm0x52

Fix looks good. Order of operations has been updated to consistently reflect the proper value

2023-11-convergence-judging's People

Contributors

sherlock-admin, sherlock-admin2



2023-11-convergence-judging's Issues

0xbrett8571 - Duplicate Bond Asset Withdrawals in BondDepository

high

Summary

The withdraw function in BondDepository allows withdrawing bonded assets to a user, but does not burn or invalidate the user's bond NFT. This could allow duplicate withdrawals using the same NFT.

It does not consider the associated bond NFT or update its state to reflect the withdrawal.

This is because its only responsibility is withdrawing from a linked SdtStakingPositionService.

So it overlooks the need to invalidate the bond NFT as well.

Vulnerability Detail

The withdraw function transfers bonded assets to the user but does not burn or update the bond NFT. This allows users to withdraw multiple times with the same NFT.

  • This withdraw function is used to withdraw staked assets from an associated SdtStakingPositionService contract: Line 86
ISdtStakingPositionService(msg.sender).stakingAsset().transfer(receiver, amount); 
  • It takes the amount to withdraw and sends it to the receiver address.

  • However, it does not interact with or burn the user's bond NFT in any way.

  • The BondDepository contract is responsible for managing bond NFTs via functions like deposit, redeem, claim.

  • But this withdraw function is narrowly focused on just transferring staked assets.

The issue is:

  • Because the bond NFT is not invalidated, a user could withdraw their bonded assets using this function.

  • Then later call redeem or claim again with the same NFT and withdraw a second time.

  • Essentially they can withdraw double the assets while still holding their original bond NFT.

Impact

  • Users can withdraw multiples of their original bonded assets.
  • Loss of protocol reserves due to duplicate withdrawals.
  • Impact scales with the number of duplicate withdrawals.

Let's look at an example:

  1. Alice deposits 100 TOKEN into a Bond and gets Bond NFT 1

  2. Later she calls withdraw to get her 100 TOKEN back

  3. The 100 TOKEN is transferred out via withdraw

  4. But Alice still holds Bond NFT 1 which represents a claim on those 100 TOKEN

  5. Alice can now call redeem or claim again with NFT 1 to withdraw another 100 TOKEN

  6. She has withdrawn 200 TOKEN total while only depositing 100 TOKEN originally

The risk is that Alice repeatedly calls withdraw and other functions using the same Bond NFT and withdraws many multiples of her original deposit.

For example, calling withdraw 10 times would allow her to withdraw 10x her original bonded assets.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/SdtBlackHole.sol#L83-L87

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/SdtBlackHole.sol#L86

The NFT is not invalidated after withdraw.

function withdraw(address receiver, uint256 amount) external {

  // Transfer assets 
  ISdtStakingPositionService(msg.sender).stakingAsset().transfer(receiver, amount);

  // NFT not burned or updated

}

Tool used

Manual Review

Recommendation

  • Consider burning or transferring bond NFT on withdraw.

  • Or separate narrow asset withdrawal function from broader NFT-aware withdraw.

bughuntoor - Transferring a CvgERC721 does not clear delegations

high

Summary

After transferring a CvgERC721, the new owner may not be aware that the token is delegated and that other people can still vote with it

Vulnerability Detail

Users can choose to delegate their voting power, allowing other users to use the power associated with the token however they like. Upon transfer, delegations are not cleared, and old delegatees can still make use of the token. If the new owner is not aware of this, it could result in bad behaviour and unexpected voting results.

    function delegateVeCvg(uint256 _tokenId, address _to) external onlyTokenOwner(_tokenId) {
        require(veCvgDelegatees[_to].length < maxTokenIdsDelegated, "TOO_MUCH_VE_TOKEN_ID_DELEGATED");
        /** @dev Find if this tokenId is already delegated to an address. */
        address previousOwner = delegatedVeCvg[_tokenId];
        if (previousOwner != address(0)) {
            /** @dev If it is  we remove the previous delegation.*/
            uint256 _toIndex = getIndexForVeDelegatee(previousOwner, _tokenId);
            uint256 _delegateesLength = veCvgDelegatees[previousOwner].length;
            /** @dev Removing delegation.*/
            veCvgDelegatees[previousOwner][_toIndex] = veCvgDelegatees[previousOwner][_delegateesLength - 1];
            veCvgDelegatees[previousOwner].pop();
        }

        /** @dev Associate tokenId to a new delegated address.*/
        delegatedVeCvg[_tokenId] = _to;

        if (_to != address(0)) {
            /** @dev Add delegation to the new address.*/
            veCvgDelegatees[_to].push(_tokenId);
        }
        emit DelegateVeCvg(_tokenId, _to);
    }
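The missing cleanup can be sketched with a minimal Python model of the delegation bookkeeping (names and semantics simplified; this is not the contract's actual code):

```python
# Minimal model: transfer() never touches the delegation mapping,
# so the old delegatee keeps voting rights under the new owner.
class LockingPositionDelegate:
    def __init__(self):
        self.owner_of = {}          # tokenId -> owner
        self.delegated_ve_cvg = {}  # tokenId -> delegatee

    def delegate(self, token_id, owner, to):
        assert self.owner_of[token_id] == owner, "NOT_OWNER"
        self.delegated_ve_cvg[token_id] = to

    def transfer(self, token_id, frm, to):
        assert self.owner_of[token_id] == frm, "NOT_OWNER"
        self.owner_of[token_id] = to
        # Bug modelled here: the delegation is not cleared on transfer.

    def can_vote_with(self, token_id, who):
        return self.delegated_ve_cvg.get(token_id) == who

d = LockingPositionDelegate()
d.owner_of[1] = "alice"
d.delegate(1, "alice", "carol")   # alice delegates token 1 to carol
d.transfer(1, "alice", "bob")     # bob acquires the token
print(d.can_vote_with(1, "carol"))  # True: carol still controls bob's token
```

Overriding `_transfer` to delete the `delegatedVeCvg` entry (as recommended below) removes this stale state.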

Impact

Unexpected voting behaviour

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionDelegate.sol#L248C1-L270C1

Tool used

Manual Review

Recommendation

Override the _transfer method to clear delegations

Duplicate of #175

bughuntoor - Removing a gauge while rewards are being distributed will result in incorrect distribution.

bughuntoor

high

Removing a gauge while rewards are being distributed will result in incorrect distribution.

Summary

Removing a gauge while rewards are being distributed will result in incorrect distribution.

Vulnerability Detail

If a gauge is removed after its weight has been accounted for in totalWeightLocked, an incorrect amount of rewards will be distributed.
There are two possible scenarios for the discrepancy:

  1. If all gauges have already been accounted for in totalWeightLocked, calling _distributeCvgRewards will distribute a smaller amount of rewards than expected.
  2. If the last gauge has not yet been accounted for, it will not be accounted for at all (as it takes the id of the removed gauge). Depending on the weight the last gauge holds, this may result in either serious over-distribution or under-distribution of rewards.
    function _setTotalWeight() internal {
        ICvgControlTower _cvgControlTower = cvgControlTower;
        IGaugeController _gaugeController = _cvgControlTower.gaugeController();
        uint128 _cursor = cursor;
        uint128 _totalGaugeNumber = uint128(gauges.length);

        /// @dev compute the theoric end of the chunk
        uint128 _maxEnd = _cursor + cvgRewardsConfig.maxLoopSetTotalWeight;
        /// @dev compute the real end of the chunk regarding the length of staking contracts
        uint128 _endChunk = _maxEnd < _totalGaugeNumber ? _maxEnd : _totalGaugeNumber;

        /// @dev if last chunk of the total weighted locked processs
        if (_endChunk == _totalGaugeNumber) {
            /// @dev reset the cursor to 0 for _distributeRewards
            cursor = 0;
            /// @dev set the step as DISTRIBUTE for reward distribution
            state = State.DISTRIBUTE;
        } else {
            /// @dev setup the cursor at the index start for the next chunk
            cursor = _endChunk;
        }

        totalWeightLocked += _gaugeController.get_gauge_weight_sum(_getGaugeChunk(_cursor, _endChunk));

        /// @dev emit the event only at the last chunk
        if (_endChunk == _totalGaugeNumber) {
            emit SetTotalWeight(_cvgControlTower.cvgCycle(), totalWeightLocked);
        }
    }
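Scenario 2 can be reproduced with a minimal Python model of the chunked accumulation (a hypothetical simplification of the swap-remove pattern, not the project's code):

```python
# Gauges are summed in fixed-size chunks. Removing an already-processed
# gauge via swap-remove moves the last (unprocessed) gauge into a slot
# behind the cursor, so its weight is never added to the total.
def sum_in_chunks(gauges, weights, chunk, remove_after_first_chunk=None):
    total, cursor = 0, 0
    while cursor < len(gauges):
        end = min(cursor + chunk, len(gauges))
        total += sum(weights[g] for g in gauges[cursor:end])
        cursor = end
        if remove_after_first_chunk is not None:
            idx = gauges.index(remove_after_first_chunk)
            gauges[idx] = gauges[-1]  # swap-remove, as gauge lists typically do
            gauges.pop()
            remove_after_first_chunk = None
    return total

weights = {"A": 10, "B": 20, "C": 30, "D": 40}
honest = sum_in_chunks(["A", "B", "C", "D"], weights, chunk=2)       # 100
broken = sum_in_chunks(["A", "B", "C", "D"], weights, chunk=2,
                       remove_after_first_chunk="A")
print(honest, broken)  # broken = 60: A was already counted, D is skipped
```

Here gauge D's 40 weight is silently dropped because it was swapped into index 0, which the cursor has already passed.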

Impact

Incorrect distribution of rewards

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/CvgRewards.sol#L244C1-L272C6

Tool used

Manual Review

Recommendation

Do not allow for gauges to be removed if _state != State.CHECKPOINT

ksksks - Potential loss of SDT token inside CvgSDT

ksksks

high

Potential loss of SDT token inside CvgSDT

Summary

Potential loss of SDT token inside CvgSDT

Vulnerability Detail

Upon mint, CvgSDT transfers SDT from msg.sender to veSdtMultisig

If veSdtMultisig is the zero address, this will lead to SDT being sent to the zero address.

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Token/CvgSDT.sol#L40

Impact

Loss of SDT

Code Snippet

Tool used

Manual Review

Recommendation

Check veSdtMultisig is not zero address

    function mint(address account, uint256 amount) external {
        address veSdtMultisig = cvgControlTower.veSdtMultisig();
        require(veSdtMultisig != address(0));
        sdt.transferFrom(msg.sender, veSdtMultisig, amount);
        _mint(account, amount);
    }

bughuntoor - Users can front-run calls to `change_gauge_weight` in order to acquire more weight for their gauge

bughuntoor

medium

Users can front-run calls to change_gauge_weight in order to acquire more weight for their gauge

Summary

Users can gain extra weight for their gauge by front-running change_gauge_weight

Vulnerability Detail

It can be expected that in some cases calls will be made to change_gauge_weight to increase or decrease a gauge's weight. The problem is that users can monitor the mempool for such calls. Upon seeing one, anyone who has voted for said gauge can remove their vote just before change_gauge_weight executes. Once it executes, they can vote again for their gauge, increasing its weight beyond what was intended:
Example:

  1. Gauge has 1 user who has voted and contributed for 10_000 weight
  2. They see an admin calling change_gauge_weight with value 15_000.
  3. User front-runs it and removes all their weight. Gauge weight is now 0.
  4. Admin function executes. Gauge weight is now 15_000
  5. User votes once again for the gauge for the same initial 10_000 weight. Gauge weight is now 25_000.

Gauge weight was supposed to be changed from 10_000 to 15_000, but due to the user front-running, gauge weight is now 25_000
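The arithmetic of the front-run can be checked with a tiny Python model (a hypothetical simplification: the set semantics of change_gauge_weight versus additive votes):

```python
# change_gauge_weight *sets* the weight, so a voter who withdraws first
# and re-votes afterwards stacks their weight on top of the new value.
def set_weight(current, new_value):   # change_gauge_weight semantics
    return new_value

gauge = 0
gauge += 10_000                       # 1. user votes
gauge -= 10_000                       # 3. user front-runs, removes vote
gauge = set_weight(gauge, 15_000)     # 4. admin sets weight to 15_000
gauge += 10_000                       # 5. user votes again
print(gauge)                          # 25_000, not the intended 15_000
```

With increase/decrease methods instead of a setter, the admin's delta would apply regardless of the voter's ordering games.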

Impact

Accruing extra voting power

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/GaugeController.vy#L569

Tool used

Manual Review

Recommendation

Instead of having a set function, use increase/decrease methods.

Duplicate of #122

0xbrett8571 - `LockingPositionService` doesn't update `ysCVG` supply correctly on duration increase, leading to potential rewards inflation.

0xbrett8571

high

LockingPositionService doesn't update ysCVG supply correctly on duration increase, leading to potential rewards inflation.

Summary

The increaseLockTimeAndAmount function in LockingPositionService does not properly update ysCVG supply accounting via _ysCvgCheckpoint when increasing lock duration. This can allow users to claim inflated ysCVG rewards.

Vulnerability Detail

The _ysCvgCheckpoint function updates the totalSuppliesTracking mapping that tracks ysCVG supply changes each cycle. It is called on minting new locks and increasing lock amount, but NOT when only increasing lock duration.

So if a user increases their lock duration but not amount, totalSuppliesTracking will not be updated to account for the increased duration. This leads to an incorrect ysCVG total supply.

Root cause

The increaseLockTimeAndAmount function allows increasing both the lock duration and the amount of CVG locked for a locking position NFT. It properly updates the voting power and total CVG locked, but misses updating the ysCVG supply accounting via _ysCvgCheckpoint when increasing duration. The root cause is that _ysCvgCheckpoint is only called when increasing the lock amount (lines 465-469).

_ysCvgCheckpoint(
                newEndCycle - actualCycle - 1, 
                (amount * lockingPosition.ysPercentage) / MAX_PERCENTAGE,
                actualCycle,
                newEndCycle - 1
);

However, it is not called when only increasing lock duration. This means the ysCVG supply accounting will be incorrect when increasing duration.

As shown, the _ysCvgCheckpoint function is responsible for updating the totalSuppliesTracking mapping which tracks the total ysCVG supply changes each cycle.

It gets called in two places:

  1. On minting a new locking position - This properly sets the initial ysCVG supply for the lock duration.

  2. When increasing lock amount - This updates totalSuppliesTracking for the increased amount.

The cause is that _ysCvgCheckpoint does NOT get called when only increasing lock duration.

So if a user increases duration but not amount, totalSuppliesTracking will not be updated to account for the longer lock period.

For example

  1. Alice mints a 12 week lock with 10% ysCVG.

    • _ysCvgCheckpoint called properly, totalSuppliesTracking updated for 12 weeks of ysCVG.
  2. 2 weeks later, Alice increases lock to 24 weeks (duration +12 weeks).

    • _ysCvgCheckpoint NOT called here because no amount increase.
  3. totalSuppliesTracking still only reflects original 12 weeks of ysCVG.

    • But Alice's lock is now 24 weeks, so she will be able to claim more ysCVG than originally accounted for.
  4. When Alice claims ysCVG rewards, they will be inflated because totalSuppliesTracking was never updated to 24 weeks.

So by not calling _ysCvgCheckpoint when increasing duration, the ysCVG supply can become inflated.
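The supply discrepancy can be modelled with a minimal Python sketch (hypothetical names; totalSuppliesTracking reduced to a per-cycle delta map):

```python
# totalSuppliesTracking is only written on mint / amount increase, so
# extending the duration leaves the supply schedule ending at the old
# cycle while the position keeps claiming past it.
from collections import defaultdict

tracking = defaultdict(int)  # cycle -> ysCVG supply delta

def checkpoint(amount, start, end):   # stand-in for _ysCvgCheckpoint
    tracking[start] += amount
    tracking[end] -= amount

def supply_at(cycle):
    return sum(d for c, d in tracking.items() if c <= cycle)

checkpoint(100, start=1, end=13)      # mint: 12-cycle lock, 100 ysCVG
position_end = 25                     # duration later extended to cycle 25,
                                      # but checkpoint is never called again
print(supply_at(20))                  # 0: the supply schedule says lock over
print(position_end > 20)              # True: the position still claims
```

Any rewards Alice claims at cycle 20 are divided against a total supply that no longer includes her balance, inflating her share.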

Impact

The total supply of ysCVG tokens will be incorrectly calculated when users increase their lock duration but not amount. Specifically, the totalSuppliesTracking mapping which accumulates ysCVG supply changes each cycle will not be updated.

This could allow users to claim more ysCVG rewards than they are entitled to if the actual ysCVG supply is lower than expected.

It could also lead to confusion around the circulating supply of ysCVG when dashboard totals do not match the on-chain data.

The risk is that a user could dramatically increase their lock duration without increasing amount, and claim excess ysCVG due to the supply discrepancy.

For example, if they extended a 12 week lock to a 2 year lock, they could claim far more ysCVG than initially minted for those first 12 weeks.

This could throw off the total ysCVG supply and circulating supply, and lead to loss of fees or rewards for other ysCVG holders.

The impact scales with the difference between the original and extended lock durations.

  • Users can claim more ysCVG rewards than they should be entitled to based on the actual supply.

  • Throws off the total ysCVG circulating supply tracked on dashboards.

  • Loss of fees or rewards for other ysCVG holders due to supply discrepancy.

  • The impact scales with the difference between the original and extended lock duration.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L439-L505

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L465-L470

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L94

The _ysCvgCheckpoint call is missing when increasing duration:

function increaseLockTimeAndAmount(
  uint256 tokenId,
  uint256 durationAdd, // increasing duration
  uint256 amount, 
  address operator
) external {

  ...

  if (lockingPosition.ysPercentage != 0) {

    // Update ysCVG supply for increased amount
    _ysCvgCheckpoint(
      newEndCycle - actualCycle - 1,
      (amount * lockingPosition.ysPercentage) / MAX_PERCENTAGE,
      actualCycle,
      newEndCycle - 1
    );

  }

  ...

}

Tool used

Manual Review

Recommendation

Call _ysCvgCheckpoint when increasing duration to properly update totalSuppliesTracking.

_ysCvgCheckpoint(
  durationAdd, 
  0,
  actualCycle,
  newEndCycle - 1  
);

This will correct the ysCVG supply accounting when increasing lock duration.

pipidu83 - The ```pullRewards``` function insufficiently checks ERC20 transfers, leading to potential loss of funds / illegitimate rewards.

pipidu83

high

The pullRewards function insufficiently checks ERC20 transfers, leading to potential loss of funds / illegitimate rewards.

Summary

Success of transfers of rewards is not checked in the pullRewards function of the CvgSdtBuffer contract.
That means these transfers can silently fail, leading to _cvgSdtStaking potentially not being able to pull rewards.

Vulnerability Detail

The transfer and transferFrom functions of ERC20 tokens return a boolean value (true if the transfer is successful, false if it is not), and some implementations return false instead of reverting on unsuccessful transfers.

By definition,

IERC20 _sdt = sdt;
IERC20 _cvgSdt = cvgSdt;
IERC20 _sdFrax3Crv = sdFrax3Crv;

meaning these 3 tokens are ERC20s, so the above statement applies to them.

This means that these statements will not revert in case of unsuccessful transfers

_sdt.transfer(_processor, processorRewards);
_sdt.transfer(sdtRewardReceiver, sdtAmount);
_sdFrax3Crv.transferFrom(veSdtMultisig, _processor, processorRewards);
_sdFrax3Crv.transferFrom(veSdtMultisig, sdtRewardReceiver, sdFrax3CrvAmount);
_cvgSdt.transfer(_processor, processorRewards);

and

_cvgSdt.transfer(sdtRewardReceiver, cvgSdtAmount);

This means that any (or all) of them can silently fail without the function reverting, meaning the user who initiated the function could receive a percentage of rewards without the rewards actually being pulled.

The sdtRewardAssets return value in that case will also be incorrect.

Impact

We mark the impact of this vulnerability as HIGH because it could lead to rewards not being received as they should be and to potential errors in contract accounting.

It could also lead to rewards being successfully transferred to the _processor who initiated it without the reward tokens being successfully transferred to the Staking contract.

Code Snippet

Below is the pullRewards function definition

function pullRewards(address processor) external returns (ICommonStruct.TokenAmount[] memory) {
        ICvgControlTower _cvgControlTower = cvgControlTower;
        ISdtStakingPositionService _cvgSdtStaking = _cvgControlTower.cvgSdtStaking();
        address sdtRewardReceiver = cvgControlTower.sdtRewardReceiver();

        address veSdtMultisig = _cvgControlTower.veSdtMultisig();
        IERC20 _sdt = sdt;
        IERC20 _cvgSdt = cvgSdt;
        IERC20 _sdFrax3Crv = sdFrax3Crv;

        require(msg.sender == address(_cvgSdtStaking), "NOT_CVG_SDT_STAKING");

        /// @dev disperse sdt fees
        _cvgControlTower.sdtFeeCollector().withdrawSdt();

        /// @dev claim sdFrax3CrvReward from feedistributor on behalf of the multisig
        feeDistributor.claim(veSdtMultisig);

        /// @dev Fetches balance of itself in SDT
        uint256 sdtAmount = _sdt.balanceOf(address(this));

        /// @dev Fetches balance of itself in CvgSdt
        uint256 cvgSdtAmount = _cvgSdt.balanceOf(address(this));

        /// @dev Fetches balance of veSdtMultisig in sdFrax3Crv
        uint256 sdFrax3CrvAmount = _sdFrax3Crv.balanceOf(veSdtMultisig);

        /// @dev TokenAmount array struct returned
        ICommonStruct.TokenAmount[] memory sdtRewardAssets = new ICommonStruct.TokenAmount[](3);
        uint256 counter;

        uint256 _processorRewardsPercentage = processorRewardsPercentage;
        address _processor = processor;

        /// @dev distributes if the balance is different from 0
        if (sdtAmount != 0) {
            /// @dev send rewards to claimer
            uint256 processorRewards = sdtAmount * _processorRewardsPercentage / DENOMINATOR;
            if (processorRewards != 0) {
                _sdt.transfer(_processor, processorRewards);
                sdtAmount -= processorRewards;
            }

            sdtRewardAssets[counter++] = ICommonStruct.TokenAmount({token: _sdt, amount: sdtAmount});
            ///@dev transfers all Sdt to the CvgSdtStaking
            _sdt.transfer(sdtRewardReceiver, sdtAmount);
        }
        /// @dev else reduces the length of the array to not return some useless 0 TokenAmount structs
        else {
            // solhint-disable-next-line no-inline-assembly
            assembly {
                mstore(sdtRewardAssets, sub(mload(sdtRewardAssets), 1))
            }
        }

        /// @dev distributes if the balance is different from 0
        if (sdFrax3CrvAmount != 0) {
            /// @dev send rewards to claimer
            uint256 processorRewards = sdFrax3CrvAmount * _processorRewardsPercentage / DENOMINATOR;
            if (processorRewards != 0) {
                _sdFrax3Crv.transferFrom(veSdtMultisig, _processor, processorRewards);
                sdFrax3CrvAmount -= processorRewards;
            }

            sdtRewardAssets[counter++] = ICommonStruct.TokenAmount({token: _sdFrax3Crv, amount: sdFrax3CrvAmount});
            ///@dev transfers from all tokens detained by veSdtMultisig
            _sdFrax3Crv.transferFrom(veSdtMultisig, sdtRewardReceiver, sdFrax3CrvAmount);
        }
        /// @dev else reduces the length of the array to not return some useless 0 TokenAmount structs
        else {
            // solhint-disable-next-line no-inline-assembly
            assembly {
                mstore(sdtRewardAssets, sub(mload(sdtRewardAssets), 1))
            }
        }

        /// @dev distributes if the balance is different from 0
        if (cvgSdtAmount != 0) {
            /// @dev send rewards to claimer
            uint256 processorRewards = cvgSdtAmount * _processorRewardsPercentage / DENOMINATOR;
            if (processorRewards != 0) {
                _cvgSdt.transfer(_processor, processorRewards);
                cvgSdtAmount -= processorRewards;
            }

            sdtRewardAssets[counter++] = ICommonStruct.TokenAmount({token: _cvgSdt, amount: cvgSdtAmount});
            ///@dev transfers all CvgSdt to the CvgSdtStaking
            _cvgSdt.transfer(sdtRewardReceiver, cvgSdtAmount);
        }
        /// @dev else reduces the length of the array to not return some useless 0 TokenAmount structs
        else {
            // solhint-disable-next-line no-inline-assembly
            assembly {
                mstore(sdtRewardAssets, sub(mload(sdtRewardAssets), 1))
            }
        }

        return sdtRewardAssets;
    }

Tool used

Manual Review

Recommendation

The fix for this vulnerability is relatively straightforward: require all of these transfers to return true before moving forward with the rest of the logic.

Fixed version of the pullRewards function would then look like

function pullRewards(address processor) external returns (ICommonStruct.TokenAmount[] memory) {
        ICvgControlTower _cvgControlTower = cvgControlTower;
        ISdtStakingPositionService _cvgSdtStaking = _cvgControlTower.cvgSdtStaking();
        address sdtRewardReceiver = cvgControlTower.sdtRewardReceiver();

        address veSdtMultisig = _cvgControlTower.veSdtMultisig();
        IERC20 _sdt = sdt;
        IERC20 _cvgSdt = cvgSdt;
        IERC20 _sdFrax3Crv = sdFrax3Crv;

        require(msg.sender == address(_cvgSdtStaking), "NOT_CVG_SDT_STAKING");

        /// @dev disperse sdt fees
        _cvgControlTower.sdtFeeCollector().withdrawSdt();

        /// @dev claim sdFrax3CrvReward from feedistributor on behalf of the multisig
        feeDistributor.claim(veSdtMultisig);

        /// @dev Fetches balance of itself in SDT
        uint256 sdtAmount = _sdt.balanceOf(address(this));

        /// @dev Fetches balance of itself in CvgSdt
        uint256 cvgSdtAmount = _cvgSdt.balanceOf(address(this));

        /// @dev Fetches balance of veSdtMultisig in sdFrax3Crv
        uint256 sdFrax3CrvAmount = _sdFrax3Crv.balanceOf(veSdtMultisig);

        /// @dev TokenAmount array struct returned
        ICommonStruct.TokenAmount[] memory sdtRewardAssets = new ICommonStruct.TokenAmount[](3);
        uint256 counter;

        uint256 _processorRewardsPercentage = processorRewardsPercentage;
        address _processor = processor;

        /// @dev distributes if the balance is different from 0
        if (sdtAmount != 0) {
            /// @dev send rewards to claimer
            uint256 processorRewards = sdtAmount * _processorRewardsPercentage / DENOMINATOR;
            if (processorRewards != 0) {
                bool success = _sdt.transfer(_processor, processorRewards);
                require(success, "transfer failed");
                sdtAmount -= processorRewards;
            }

            sdtRewardAssets[counter++] = ICommonStruct.TokenAmount({token: _sdt, amount: sdtAmount});
            ///@dev transfers all Sdt to the CvgSdtStaking
            bool success = _sdt.transfer(sdtRewardReceiver, sdtAmount);
            require(success, "transfer failed");
        }
        /// @dev else reduces the length of the array to not return some useless 0 TokenAmount structs
        else {
            // solhint-disable-next-line no-inline-assembly
            assembly {
                mstore(sdtRewardAssets, sub(mload(sdtRewardAssets), 1))
            }
        }

        /// @dev distributes if the balance is different from 0
        if (sdFrax3CrvAmount != 0) {
            /// @dev send rewards to claimer
            uint256 processorRewards = sdFrax3CrvAmount * _processorRewardsPercentage / DENOMINATOR;
            if (processorRewards != 0) {
                bool success = _sdFrax3Crv.transferFrom(veSdtMultisig, _processor, processorRewards);
                require(success, "transfer failed");

                sdFrax3CrvAmount -= processorRewards;
            }

            sdtRewardAssets[counter++] = ICommonStruct.TokenAmount({token: _sdFrax3Crv, amount: sdFrax3CrvAmount});
            ///@dev transfers from all tokens detained by veSdtMultisig
            bool success = _sdFrax3Crv.transferFrom(veSdtMultisig, sdtRewardReceiver, sdFrax3CrvAmount);
            require(success, "transfer failed");

        }
        /// @dev else reduces the length of the array to not return some useless 0 TokenAmount structs
        else {
            // solhint-disable-next-line no-inline-assembly
            assembly {
                mstore(sdtRewardAssets, sub(mload(sdtRewardAssets), 1))
            }
        }

        /// @dev distributes if the balance is different from 0
        if (cvgSdtAmount != 0) {
            /// @dev send rewards to claimer
            uint256 processorRewards = cvgSdtAmount * _processorRewardsPercentage / DENOMINATOR;
            if (processorRewards != 0) {
                bool success = _cvgSdt.transfer(_processor, processorRewards);
                require(success, "transfer failed");

                cvgSdtAmount -= processorRewards;
            }

            sdtRewardAssets[counter++] = ICommonStruct.TokenAmount({token: _cvgSdt, amount: cvgSdtAmount});
            ///@dev transfers all CvgSdt to the CvgSdtStaking
            bool success = _cvgSdt.transfer(sdtRewardReceiver, cvgSdtAmount);
            require(success, "transfer failed");

        }
        /// @dev else reduces the length of the array to not return some useless 0 TokenAmount structs
        else {
            // solhint-disable-next-line no-inline-assembly
            assembly {
                mstore(sdtRewardAssets, sub(mload(sdtRewardAssets), 1))
            }
        }

        return sdtRewardAssets;
}

Oxd1z - unchecked-transfer

Oxd1z

high

unchecked-transfer

Summary

The return value of an external transfer/transferFrom call is not checked

Vulnerability Detail

CvgSDT.mint(address,uint256) ignores return value by sdt.transferFrom(msg.sender,cvgControlTower.veSdtMultisig(),amount)

Impact

If the transferFrom call returns false instead of reverting and the mint function continues without checking the return value, CvgSDT is minted even though no SDT was actually received.
If transferFrom fails (e.g., due to insufficient allowance or balance) and the return value is not checked, the mint function proceeds as if the transfer was successful. This results in unbacked CvgSDT and a potential loss of funds for other holders.
An unchecked transfer could therefore enable effectively unauthorized minting.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Token/CvgSDT.sol#L39-L42

Tool used

Manual Review

Recommendation

Ensure that the transfer/transferFrom return value is checked.

Duplicate of #38

ksksks - SdtBlackHole.pullSdStakingBrides unchecked processor address can result in transfer bribes to 0 address

ksksks

high

SdtBlackHole.pullSdStakingBrides unchecked processor address can result in transfer bribes to 0 address

Summary

SdtBlackHole.pullSdStakingBrides unchecked processor address can result in transfer bribes to 0 address

Vulnerability Detail

SdtBlackHole.pullSdStakingBrides does not check that _processor is not address 0.

This can lead to transfer of bribes to 0 address

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/SdtBlackHole.sol#L119

Impact

Loss of bribes

Code Snippet

Tool used

Manual Review

Recommendation

Check _processor is not address 0.

        require(_processor != address(0));

Duplicate of #22

bughuntoor - increaseLockTime does not calculate new `mgCvg` voting power

bughuntoor

high

increaseLockTime does not calculate new mgCvg voting power

Summary

increaseLockTime does not calculate new mgCvg voting power

Vulnerability Detail

When users create a voting escrow, they receive mgCvg balance which is calculated by the amount they've escrowed and the duration they've escrowed it for:
_mgCvgCreated = (amountVote * lockDuration) / (MAX_LOCK * MAX_PERCENTAGE);

The users can then call increaseLockTime to increase their lock duration. As their lock time increases, their mgCvg voting power should also increase, but this does not happen anywhere within increaseLockTime.

    function increaseLockTime(
        uint256 tokenId,
        uint256 durationAdd
    ) external checkCompliance(tokenId, address(0)) onlyWalletOrWhiteListedContract {
        ICvgControlTower _cvgControlTower = cvgControlTower;
        /** @dev Retrieve actual staking cycle. */
        uint128 actualCycle = _cvgControlTower.cvgCycle();

        LockingPosition storage lockingPosition = lockingPositions[tokenId];
        uint256 oldEndCycle = lockingPosition.lastEndCycle + 1;
        uint256 newEndCycle = oldEndCycle + durationAdd;

        /** @dev Not possible extend a lock in duration after it's expiration. */
        require(oldEndCycle > actualCycle, "LOCK_TIME_OVER");

        /** @dev Not possible to have an active lock longer than the MAX_LOCK. */
        require(newEndCycle - actualCycle - 1 <= MAX_LOCK, "MAX_LOCK_96_CYCLES");

        /** @dev As the oldEnd cycle is a xTDE_DURATION. */
        /** @dev We just need to verify that the time we add is a xTDE_DURATION to ensure new lock is ending on a xTDE_DURATION. */
        require(durationAdd % TDE_DURATION == 0, "NEW_END_MUST_BE_TDE_MULTIPLE");

        /** @dev YsCvg TotalSupply Part, access only if some % has been given to ys on the NFT. */
        if (lockingPosition.ysPercentage != 0) {
            /** @dev Retrieve the balance registered at the cycle where the ysBalance is supposed to drop. */
            uint256 _ysToReport = balanceOfYsCvgAt(tokenId, oldEndCycle - 1);
            /** @dev Add this value to the tracking on the oldEndCycle. */
            totalSuppliesTracking[oldEndCycle].ysToAdd += _ysToReport;
            /** @dev Report this value in the newEndCycle in the Sub part. */
            totalSuppliesTracking[newEndCycle].ysToSub += _ysToReport;
        }

        /** @dev Vote part, access here only if some % has been given to ve/mg on the NFT. */
        if (lockingPosition.ysPercentage != MAX_PERCENTAGE) {
            /** @dev Increase Locking time to a new timestamp, computed with the cycle. */
            _cvgControlTower.votingPowerEscrow().increase_unlock_time(
                tokenId,
                block.timestamp + ((newEndCycle - actualCycle) * 7 days)
            );
        }

        /** @dev Update the new end cycle on the locking position. */
        lockingPosition.lastEndCycle = uint96(newEndCycle - 1);

        emit IncreaseLockTime(tokenId, lockingPosition, oldEndCycle - 1);
    }

In a scenario where two users escrow the same amount of tokens for the same total duration, the one who first locked for a shorter period and then increased their lock time will have significantly less voting power, despite both users locking the same amount of tokens for the same amount of time.
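Using the mgCvg formula quoted above, the discrepancy can be computed directly (a hypothetical simplification with illustrative constants; actual values depend on the contract's configuration):

```python
# mgCvg is minted once, from the duration at lock time; increaseLockTime
# mints no extra mgCvg, so an extended lock keeps the shorter-duration value.
MAX_LOCK, MAX_PERCENTAGE = 96, 100

def mg_cvg(amount_vote, lock_duration):
    # _mgCvgCreated = (amountVote * lockDuration) / (MAX_LOCK * MAX_PERCENTAGE)
    return (amount_vote * lock_duration) // (MAX_LOCK * MAX_PERCENTAGE)

alice = mg_cvg(96_000, 96)   # locks for 96 cycles up front
bob = mg_cvg(96_000, 48)     # locks for 48 cycles, later extends to 96:
                             # increaseLockTime leaves bob's mgCvg unchanged
print(alice, bob)            # alice ends with twice bob's voting power
```

Both positions are identical after the extension, yet bob permanently holds half of alice's mgCvg.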

Impact

Loss of voting power

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L384C4-L429C6

Tool used

Manual Review

Recommendation

Increase the user's mgCvg upon calling increaseLockTime

Duplicate of #3

bughuntoor - Reducing a gauge's weight may result in a full DoS within GaugeController

bughuntoor

high

Reducing a gauge's weight may result in a full DoS within GaugeController

Summary

Calling change_gauge_weight to reduce a gauge's weight will result in a DoS within GaugeController.

Vulnerability Detail

Let's look at the _get_weight function, which is responsible for returning a gauge's weight.

@internal
def _get_weight(gauge_addr: address) -> uint256:
    t: uint256 = self.time_weight[gauge_addr]
    if t > 0:
        pt: Point = self.points_weight[gauge_addr][t]
        for i in range(500):
            if t > block.timestamp:
                break
            t += WEEK
            d_bias: uint256 = pt.slope * WEEK
            if pt.bias > d_bias:
                pt.bias -= d_bias
                d_slope: uint256 = self.changes_weight[gauge_addr][t]
                pt.slope -= d_slope
            else:
                pt.bias = 0
                pt.slope = 0
            self.points_weight[gauge_addr][t] = pt
            if t > block.timestamp:
                self.time_weight[gauge_addr] = t
        return pt.bias
    else:
        return 0

The bias is the current voting power allocated and the slope is the amount it decreases by every week. Based on when users' voting escrows expire, changes_weight[gauge_addr][t] tracks the amount by which the slope must be reduced at each week t.
When changing a gauge's weight within the _change_gauge_weight function, the only thing changed is the gauge's bias (the slope cannot be and is not changed).
Because of that, if change_gauge_weight is used to reduce a gauge's weight, at some point a call to _get_weight will enter the else branch and set both pt.bias and pt.slope to 0. This happens earlier than it should, since the weight has been reduced by change_gauge_weight (the gauge's bias is now less than the sum of all biases allocated by the users). This means that even though pt.bias and pt.slope are 0, there is still some time t in the future for which self.changes_weight[gauge_addr][t] has a non-zero value.
Now if the gauge receives at least one new vote with a slope change scheduled after timestamp t, calls to _get_weight will once again enter the if branch. Then there are 2 scenarios:

  1. The new vote's slope is < self.changes_weight[gauge_addr][t]. As soon as time t is reached when calling _get_weight, the if branch is entered (as pt.bias > d_bias), but pt.slope will be < d_slope, so the line pt.slope -= d_slope will revert due to underflow.
  2. The new vote's slope is > self.changes_weight[gauge_addr][t]. The slope will be reduced once by self.changes_weight[gauge_addr][t]. Then, when the lock expires and the slope is reduced again, the call will revert due to underflow.

Note: for simplicity, the example uses only one vote after pt.slope and pt.bias have been zeroed out, but it works with any number of votes as long as at least one of them has a slope change after time t.

The same issue also applies to _get_sum (which sums the weights of all gauges of the same type).

All functions relying on _get_sum/_get_weight will revert, and the state is irreversible: once they are DoS'd, a call to _change_gauge_weight cannot fix it.
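To make the failure mode concrete, the decay loop can be sketched in a simplified model (one time unit per week; all numbers are hypothetical, and the helper below only mirrors the shape of _get_weight, not the real Vyper code):

```python
def decay_one_week(pt, slope_changes, t):
    """One iteration of the (simplified) _get_weight loop."""
    d_bias = pt["slope"]  # decay over one week
    if pt["bias"] > d_bias:
        pt["bias"] -= d_bias
        d_slope = slope_changes.get(t, 0)
        if pt["slope"] < d_slope:
            # pt.slope -= d_slope would underflow and revert in Vyper
            raise ArithmeticError(f"underflow at week {t}")
        pt["slope"] -= d_slope
    else:
        pt["bias"] = 0
        pt["slope"] = 0  # slope zeroed early, but slope_changes entries survive

# One voter locks until week 10: bias 20, slope 2, change of 2 scheduled at week 10.
pt = {"bias": 20, "slope": 2}
slope_changes = {10: 2}

pt["bias"] -= 10  # _change_gauge_weight halves the weight: bias only, slope untouched

for t in range(1, 6):
    decay_one_week(pt, slope_changes, t)  # bias hits 0 at week 5, earlier than week 10

# A new voter arrives at week 5: slope 1, lock until week 15 -> bias 10.
pt["bias"] += 10
pt["slope"] += 1
slope_changes[15] = slope_changes.get(15, 0) + 1

underflow_week = None
try:
    for t in range(6, 16):
        decay_one_week(pt, slope_changes, t)
except ArithmeticError:
    underflow_week = t  # the stale change of 2 at week 10 exceeds the live slope of 1
```

The stale entry in slope_changes, scheduled before the admin reduced the weight, is what triggers the underflow once any later vote re-enters the if branch.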

Impact

All functions within the GaugeController contract will be DoS'd

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/GaugeController.vy#L380C1-L386C29

Tool used

Manual Review

Recommendation

If pt.slope < d_slope, overwrite pt.slope with 0 instead of subtracting.

Duplicate of #94

ksksks - Potential loss of tokens in SdtBuffer.pullRewards

ksksks

high

Potential loss of tokens in SdtBuffer.pullRewards

Summary

Potential loss of tokens in SdtBuffer.pullRewards

Vulnerability Detail

sdtRewardReceiver can be the zero address inside CvgControlTower.sol.

In such a case, SdtBuffer.pullRewards may transfer tokens to address(0).

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/SdtBuffer.sol#L132

Impact

Loss of tokens

Code Snippet

Tool used

Manual Review

Recommendation

Check sdtRewardReceiver is not zero address

        address sdtRewardsReceiver = cvgControlTower.sdtRewardReceiver();
        require(sdtRewardsReceiver != address(0));

Duplicate of #22

ksksks - Potential loss of bribe in SdtBlackHole

ksksks

high

Potential loss of bribe in SdtBlackHole

Summary

Vulnerability Detail

SdtBlackHole transfers bribe to sdtRewardReceiver.

Inside CvgControlTower.sol there is no guarantee that sdtRewardReceiver will always have a non-zero address.

Hence when transfer is called inside SdtBlackHole and if sdtRewardReceiver is the zero address, this will result in loss of bribe tokens.

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/SdtBlackHole.sol#L124

Impact

Loss of bribe tokens

Code Snippet

Tool used

Manual Review

Recommendation

Check sdtRewardReceiver is non zero address.

        address sdtRewardReceiver = cvgControlTower.sdtRewardReceiver();
        require(sdtRewardReceiver != address(0));

Duplicate of #22

ZanyBonzy - Tokens with approval race protection or not returning a `bool` on `approve` might break token approvals.

ZanyBonzy

medium

Tokens with approval race protection or not returning a bool on approve might break token approvals.

Summary

Certain tokens might revert on approval and cause unexpected behaviour.

Vulnerability Detail

Certain tokens, including USDT and KNC, have an approval race protection mechanism in place, requiring the allowance to be set to zero before it can be updated to another non-zero value.
When the owner calls the approveTokens function with these kinds of tokens in the array, the transaction reverts and the owner will not be able to approve the tokens.
Also, some tokens (USDT for instance) do not return a bool on an approve call. Those tokens are incompatible with the protocol because Solidity checks the return data size, which will be zero and will lead to a revert.
Finally, USDT approve will always revert due to the IERC20 interface mismatch.
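The race-protection behaviour can be modelled with a toy token (illustrative only; real USDT enforces this check in its contract code):

```python
class RaceProtectedToken:
    """Toy model of a USDT/KNC-style token: non-zero -> non-zero approve reverts."""
    def __init__(self):
        self.allowance = {}

    def approve(self, owner, spender, amount):
        current = self.allowance.get((owner, spender), 0)
        if current != 0 and amount != 0:
            raise RuntimeError("approve from non-zero to non-zero allowance")
        self.allowance[(owner, spender)] = amount

usdt = RaceProtectedToken()
usdt.approve("owner", "spender", 100)      # first approval succeeds
try:
    usdt.approve("owner", "spender", 200)  # direct update of a live allowance: reverts
    updated = True
except RuntimeError:
    updated = False

# The approve-to-zero pattern (what forceApprove does internally) works:
usdt.approve("owner", "spender", 0)
usdt.approve("owner", "spender", 200)
```

This is why approveTokens, which calls approve directly with a new amount, can be bricked for such tokens.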

Impact

Token approvals will be blocked, along with a host of other unexpected behaviours.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/e894be3e36614a385cf409dc7e278d5b8f16d6f2/sherlock-cvg/contracts/utils/SdtUtilities.sol#L216

    function approveTokens(TokenSpender[] calldata _tokenSpenders) external onlyOwner {
        for (uint256 i; i < _tokenSpenders.length; ) {
            IERC20(_tokenSpenders[i].token).approve(_tokenSpenders[i].spender, _tokenSpenders[i].amount);
            unchecked {
                ++i;
            }
        }
    }

Tool used

Manual Review

Recommendation

Approve to zero first, use forceApprove from SafeERC20, or safeIncreaseAllowance.

bughuntoor - Ys rewards should not be claimable when `cvgCycle() == cycleClaimed`

bughuntoor

high

Ys rewards should not be claimable when cvgCycle() == cycleClaimed

Summary

Users may lose out on rewards if they call claimRewards when cvgCycle() == cycleClaimed

Vulnerability Detail

Based on the amount of cvg tokens the users have locked and the duration they've locked them for, the users are allocated ys balance. Based on that balance they can claim rewards via the claimRewards function within YsDistributor. The current requirement to claim the rewards for a cycle is the following:

        uint256 cycleClaimed = tdeId * TDE_DURATION;

        /// @dev Cannot claim a TDE not available yet.
        require(_cvgControlTower.cvgCycle() >= cycleClaimed, "NOT_AVAILABLE");

However, this logic is flawed: it allows claiming rewards when cvgCycle == cycleClaimed, at which point rewards may not yet be finalized.
If a user calls it, they will claim the rewards for that cycle. However, looking at the depositMultipleToken function below, the next deposit happening within the same cycle will distribute rewards towards this same cycle. Any users who have already called claimRewards will be locked out of these rewards, and the rewards will be stuck in the contract forever.

    function depositMultipleToken(TokenAmount[] calldata deposits) external onlyTreasuryBonds {
        uint256 _actualCycle = cvgControlTower.cvgCycle();
        uint256 _actualTDE = _actualCycle % TDE_DURATION == 0 // @audit - if cvgCycle == cycleClaimed, then _actualCycle % TDE_DURATION == 0
            ? _actualCycle / TDE_DURATION  // @audit - this value will be used 
            : (_actualCycle / TDE_DURATION) + 1;

        address[] memory _tokens = depositedTokenAddressForTde[_actualTDE];
        uint256 tokensLength = _tokens.length;

        for (uint256 i; i < deposits.length; ) {
            IERC20 _token = deposits[i].token;
            uint256 _amount = deposits[i].amount;

            depositedTokenAmountForTde[_actualTDE][_token] += _amount;

Impact

Loss of rewards

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/YsDistributor.sol#L101C1-L105C49
https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/YsDistributor.sol#L176

Tool used

Manual Review

Recommendation

Change the >= in the require statement to >

-        require(_cvgControlTower.cvgCycle() >= cycleClaimed, "NOT_AVAILABLE");

+        require(_cvgControlTower.cvgCycle() > cycleClaimed, "NOT_AVAILABLE");
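The boundary arithmetic can be checked with a short sketch (a TDE_DURATION of 12 cycles is assumed here purely for illustration):

```python
TDE_DURATION = 12  # hypothetical: one TDE every 12 cycles

def claimable(cvg_cycle, tde_id):
    """Mirrors the require in claimRewards (>= lets the boundary cycle through)."""
    return cvg_cycle >= tde_id * TDE_DURATION

def deposit_target_tde(actual_cycle):
    """Mirrors the TDE selection in depositMultipleToken."""
    if actual_cycle % TDE_DURATION == 0:
        return actual_cycle // TDE_DURATION
    return actual_cycle // TDE_DURATION + 1

# At cycle 12 (the TDE 1 boundary) a user can already claim TDE 1 ...
early_claim = claimable(12, 1)             # True
# ... yet a deposit made during that same cycle still funds TDE 1:
late_deposit_tde = deposit_target_tde(12)  # 1

# With the suggested strict comparison, the claim only opens once cycle 12 has passed:
fixed_claim = 12 > 1 * TDE_DURATION        # False
```

Any tokens deposited after the early claim are unreachable for that user, matching the scenario above.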

Duplicate of #171

ksksks - CvgSdtBuffer.pullRewards potential loss of SDT and sdFrax3Crv tokens

ksksks

high

CvgSdtBuffer.pullRewards potential loss of SDT and sdFrax3Crv tokens

Summary

CvgSdtBuffer.pullRewards transfers SDT and sdFrax3Crv tokens to sdtRewardReceiver without checking that it is not the zero address.

Vulnerability Detail

CvgControlTower.sdtRewardReceiver may return 0 address.

CvgSdtBuffer.pullRewards transfers SDT and sdFrax3Crv to sdtRewardReceiver which may be 0 address.

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/CvgSdtBuffer.sol#L121

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/CvgSdtBuffer.sol#L142

Impact

Loss of SDT and sdFrax3Crv tokens

Code Snippet

Tool used

Manual Review

Recommendation

Check sdtRewardReceiver is not 0 address

        address sdtRewardReceiver = cvgControlTower.sdtRewardReceiver();
        require(sdtRewardReceiver != address(0));

Duplicate of #22

bughuntoor - `balanceOfYsCvgAt` returns wrong results when `extension[i].cycleId % TDE_DURATION == 0`

bughuntoor

medium

balanceOfYsCvgAt returns wrong results when extension[i].cycleId % TDE_DURATION == 0

Summary

balanceOfYsCvgAt returns wrong results when extension[i].cycleId % TDE_DURATION == 0

Vulnerability Detail

Let's first look at _ysCvgCheckpoint

        uint256 ysTotalAmount = (lockDuration * cvgLockAmount) / MAX_LOCK;
        uint256 realStartCycle = actualCycle + 1;
        uint256 realEndCycle = endLockCycle + 1;
        /** @dev If the lock is not made on a TDE cycle,   we need to compute the ratio of ysCVG  for the current partial TDE */
        if (actualCycle % TDE_DURATION != 0) {
            /** @dev Get the cycle id of next TDE to be taken into account for this LockingPosition. */
            uint256 nextTdeCycle = (actualCycle / TDE_DURATION + 1) * TDE_DURATION + 1;
            /** @dev Represent the amount of ysCvg to be taken into account on the next TDE of this LockingPosition. */
            uint256 ysNextTdeAmount = ((nextTdeCycle - realStartCycle) * ysTotalAmount) / TDE_DURATION;

            totalSuppliesTracking[realStartCycle].ysToAdd += ysNextTdeAmount;

            /** @dev When a lock is greater than a TDE_DURATION */
            if (lockDuration >= TDE_DURATION) {
                /** @dev we add the calculations for the next full TDE */
                totalSuppliesTracking[nextTdeCycle].ysToAdd += ysTotalAmount - ysNextTdeAmount;
                totalSuppliesTracking[realEndCycle].ysToSub += ysTotalAmount;
            }
            /** @dev If the lock less than TDE_DURATION. */
            else {
                /** @dev We simply remove the amount from the supply calculation at the end of the TDE */
                totalSuppliesTracking[realEndCycle].ysToSub += ysNextTdeAmount;
            }
        }
        /** @dev If the lock is performed on a TDE cycle  */
        else {
            totalSuppliesTracking[realStartCycle].ysToAdd += ysTotalAmount;  //@audit - the user is accounted for this amount towards total supply 
            totalSuppliesTracking[realEndCycle].ysToSub += ysTotalAmount;
        }
    }

As we can see we have 2 scenarios - if actualCycle % TDE_DURATION != 0 and actualCycle % TDE_DURATION == 0
In the 2nd scenario, the user has a constant balance throughout the entire lock duration, unlike in the 1st scenario, where the user has a partial balance up until nextTdeCycle.

    function balanceOfYsCvgAt(uint256 _tokenId, uint256 _cycleId) public view returns (uint256) {
        require(_cycleId != 0, "NOT_EXISTING_CYCLE");

        LockingPosition memory _lockingPosition = lockingPositions[_tokenId];
        LockingExtension[] memory _extensions = lockExtensions[_tokenId];
        uint256 _ysCvgBalance;

        /** @dev If the requested cycle is before or after the lock , there is no balance. */
        if (_lockingPosition.startCycle >= _cycleId || _cycleId > _lockingPosition.lastEndCycle) {
            return 0;
        }
        /** @dev We go through the extensions to compute the balance of ysCvg at the cycleId */
        for (uint256 i; i < _extensions.length; ) {
            /** @dev Don't take into account the extensions if in the future. */
            if (_extensions[i].cycleId < _cycleId) {
                LockingExtension memory _extension = _extensions[i];
                uint256 _firstTdeCycle = TDE_DURATION * (_extension.cycleId / TDE_DURATION + 1);
                uint256 _ysTotal = (((_extension.endCycle - _extension.cycleId) *
                    _extension.cvgLocked *
                    _lockingPosition.ysPercentage) / MAX_PERCENTAGE) / MAX_LOCK;
                uint256 _ysPartial = ((_firstTdeCycle - _extension.cycleId) * _ysTotal) / TDE_DURATION; // @audit - this value will be returned
                /** @dev For locks that last less than 1 TDE. */
                if (_extension.endCycle - _extension.cycleId <= TDE_DURATION) {
                    _ysCvgBalance += _ysPartial; // @audit - this value will be returned, because of the duration of the lock
                } else {
                    _ysCvgBalance += _cycleId <= _firstTdeCycle ? _ysPartial : _ysTotal;
                }
            }
            ++i;
        }
        return _ysCvgBalance;
    }

However, if we look at balanceOfYsCvgAt, this distinction is not implemented.
In the case where a user has staked for a duration < TDE_DURATION and actualCycle % TDE_DURATION == 0, the call to balanceOfYsCvgAt will calculate a significantly lower value: it will return _ysPartial, even though the user is accounted for _ysTotal towards the total supply.

Impact

User will have significantly lower balance than expected

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L656C14-L656C30
https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L577

Tool used

Manual Review

Recommendation

Within balanceOfYsCvgAt, check whether _extension.cycleId % TDE_DURATION == 0 and adjust the calculation accordingly.

Oxd1z - encoded-packed-collision

Oxd1z

high

encoded-packed-collision

Summary

LockingPositionManager.tokenURI(uint256) calls abi.encodePacked() with multiple dynamic arguments:

  • string(abi.encodePacked(localBaseURI, Strings.toString(tokenId)))

Vulnerability Detail

This can trigger hash collisions in the Eternal Storage pattern, alter the meaning of signatures, and result in collisions when used as a mapping key.
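A toy model of packed encoding shows the collision class being described (the URI values are hypothetical; abi.encodePacked concatenates dynamic types with no length prefixes):

```python
def encode_packed(*args):
    """Toy model of abi.encodePacked for string arguments: raw concatenation."""
    return b"".join(a.encode() for a in args)

# Two different (baseURI, tokenId) pairs pack to identical bytes:
a = encode_packed("ipfs://base1", "2")
b = encode_packed("ipfs://base", "12")
```

Because the boundary between the two dynamic arguments is lost, hashing or comparing the packed bytes cannot distinguish the two inputs.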

Impact

Hash collisions that compromise the system, potentially leading to loss of integrity, wrong authorization, and even loss of funds.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionManager.sol#L94-L95

Tool used

Slither
Manual Review

Recommendation

According to the Solidity documentation, if you use abi.encodePacked for signatures, authentication or data integrity, make sure to always use the same types and check that at most one of them is dynamic. Unless there is a compelling reason, abi.encode should be preferred. Do not use more than one dynamic type in abi.encodePacked(); use abi.encode() instead.

Duplicate of #137

bughuntoor - `increaseLockAndTime` does not correctly calculate `_newVotingPower`.

bughuntoor

high

increaseLockAndTime does not correctly calculate _newVotingPower.

Summary

increaseLockAndTime does not correctly calculate _newVotingPower.

Vulnerability Detail

A user's mgCvg voting power is based on the amount they've escrowed and the lock time. Users can call increaseLockAndTime to increase both the amount locked and the lock time. The function is expected to properly increase the user's mgCvg balance, though this is not the case:

        if (lockingPosition.ysPercentage != MAX_PERCENTAGE) {
            /** @dev Update voting power through veCVG contract, link voting power to the nft tokenId. */
            uint256 amountVote = amount * (MAX_PERCENTAGE - lockingPosition.ysPercentage);
            _newVotingPower = (amountVote * (newEndCycle - actualCycle - 1)) / (MAX_LOCK * MAX_PERCENTAGE);
            lockingPosition.mgCvgAmount += _newVotingPower;

            _cvgControlTower.votingPowerEscrow().increase_unlock_time_and_amount(
                tokenId,
                block.timestamp + ((newEndCycle - actualCycle) * 7 days),
                amountVote / MAX_PERCENTAGE
            );
        }

As we can see, the mgCvg amount is increased only by the newly locked amount multiplied by the new lock duration. It does not take into consideration that the lock on the previously staked amount is also extended.
This results in a loss of voting power for users who invoke increaseLockAndTime.
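A quick sketch of the arithmetic (MAX_LOCK and MAX_PERCENTAGE taken from the contract's constants; the position sizes are hypothetical) makes the omission visible: the previously locked amount never enters the formula at all.

```python
MAX_LOCK = 96
MAX_PERCENTAGE = 100

def mg_cvg_added(amount, ys_percentage, new_end_cycle, actual_cycle):
    """Mirrors the _newVotingPower computation in increaseLockAndTime."""
    amount_vote = amount * (MAX_PERCENTAGE - ys_percentage)
    return amount_vote * (new_end_cycle - actual_cycle - 1) // (MAX_LOCK * MAX_PERCENTAGE)

# At cycle 10 a position extends its lock to cycle 96 while adding only 100 CVG
# (0% ysPercentage). Only the new 100 CVG earns mgCvg for the 85 remaining cycles:
added = mg_cvg_added(100, 0, 96, 10)  # the already-locked balance is not a parameter
```

Whatever amount was locked before the call, `added` is identical, so the extension of the old balance contributes nothing to voting power.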

Impact

Loss of Voting power

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L475C1-L486C10

Tool used

Manual Review

Recommendation

Take into consideration the previously staked tokens.

bughuntoor - `increaseLockTime` wrongfully calculates ysBalance

bughuntoor

high

increaseLockTime wrongfully calculates ysBalance

Summary

increaseLockTime wrongfully calculates ysBalance

Vulnerability Detail

After putting cvg in an escrow, users receive ys balance. The ys balance is based on two things - the amount locked and the lock duration

        if (lockingPosition.ysPercentage != 0) {
            _ysCvgCheckpoint(
                lockingPosition.lastEndCycle - actualCycle,
                (amount * lockingPosition.ysPercentage) / MAX_PERCENTAGE,
                actualCycle,
                lockingPosition.lastEndCycle
            );
        }
    function _ysCvgCheckpoint(
        uint256 lockDuration,
        uint256 cvgLockAmount,
        uint256 actualCycle,
        uint256 endLockCycle
    ) internal {
        /** @dev Compute the amount of ysCVG on this Locking Position proportionally with the ratio of lockDuration and MAX LOCK duration. */
        uint256 ysTotalAmount = (lockDuration * cvgLockAmount) / MAX_LOCK;
        uint256 realStartCycle = actualCycle + 1;
        uint256 realEndCycle = endLockCycle + 1;
        /** @dev If the lock is not made on a TDE cycle,   we need to compute the ratio of ysCVG  for the current partial TDE */
        if (actualCycle % TDE_DURATION != 0) {
            /** @dev Get the cycle id of next TDE to be taken into account for this LockingPosition. */
            uint256 nextTdeCycle = (actualCycle / TDE_DURATION + 1) * TDE_DURATION + 1;
            /** @dev Represent the amount of ysCvg to be taken into account on the next TDE of this LockingPosition. */
            uint256 ysNextTdeAmount = ((nextTdeCycle - realStartCycle) * ysTotalAmount) / TDE_DURATION;

            totalSuppliesTracking[realStartCycle].ysToAdd += ysNextTdeAmount;

            /** @dev When a lock is greater than a TDE_DURATION */
            if (lockDuration >= TDE_DURATION) {
                /** @dev we add the calculations for the next full TDE */
                totalSuppliesTracking[nextTdeCycle].ysToAdd += ysTotalAmount - ysNextTdeAmount;
                totalSuppliesTracking[realEndCycle].ysToSub += ysTotalAmount;
            }
            /** @dev If the lock less than TDE_DURATION. */
            else {
                /** @dev We simply remove the amount from the supply calculation at the end of the TDE */
                totalSuppliesTracking[realEndCycle].ysToSub += ysNextTdeAmount;
            }
        }
        /** @dev If the lock is performed on a TDE cycle  */
        else {
            totalSuppliesTracking[realStartCycle].ysToAdd += ysTotalAmount;
            totalSuppliesTracking[realEndCycle].ysToSub += ysTotalAmount;
        }
    }

After the users have already created their escrow, they can increase the lock's duration by calling increaseLockTime. However, let's look at what happens with the ys balance when the increaseLockTime is called:

        if (lockingPosition.ysPercentage != 0) {
            /** @dev Retrieve the balance registered at the cycle where the ysBalance is supposed to drop. */
            uint256 _ysToReport = balanceOfYsCvgAt(tokenId, oldEndCycle - 1);
            /** @dev Add this value to the tracking on the oldEndCycle. */
            totalSuppliesTracking[oldEndCycle].ysToAdd += _ysToReport;
            /** @dev Report this value in the newEndCycle in the Sub part. */
            totalSuppliesTracking[newEndCycle].ysToSub += _ysToReport;
        }

As we can see, no new value is calculated. The only thing that happens is a change of the cycle at which the ys totalSupply will decrease. This corrupts totalSupplyOfYsCvg.
It gives an unfair advantage to people who have already staked for a long time, and puts people who significantly increase their lock at a disadvantage.

Consider the following 2 scenarios

Scenario 1

  1. User has locked their tokens for a very short time (2 weeks)
  2. User increases their lock time to max - 96 weeks
  3. Despite the user having locked their tokens for 96 weeks, their ys balance is still based on only on the initial 2 weeks and is significantly smaller than what it is supposed to be

Scenario 2

  1. User has locked their tokens for max lock time - 96 weeks
  2. 95 weeks pass. User has 1 week left on their lock.
  3. The user decides to increase their lock time by just 1 week.
  4. Despite the user locking for only 1 additional week and having a lock which will last only 2 weeks, they have ys balance based on their 96 weeks lock.
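Both scenarios follow directly from the checkpoint formula; a sketch with hypothetical amounts (960 CVG, MAX_LOCK of 96 weeks) shows the frozen balances:

```python
MAX_LOCK = 96  # weeks

def ys_total(lock_duration, cvg_lock_amount):
    """ysCVG minted at checkpoint time, proportional to the lock duration."""
    return lock_duration * cvg_lock_amount // MAX_LOCK

# Scenario 1: 960 CVG locked for 2 weeks, later extended to the 96-week max.
scenario1_balance = ys_total(2, 960)    # stays 20 after the extension
fresh_96_week_lock = ys_total(96, 960)  # 960 -- what an honest 96-week lock mints

# Scenario 2: 960 CVG locked for 96 weeks; after 95 weeks, extended by 1 week.
scenario2_balance = ys_total(96, 960)   # stays 960 for a lock with 2 weeks left
```

Since increaseLockTime only moves the expiry cycle of the existing balance, scenario 1 keeps 20 ysCVG where a comparable fresh lock would hold 960, and scenario 2 keeps 960 with almost no remaining commitment.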

Impact

Corrupted global accounting

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L407C1-L414C10

Tool used

Manual Review

Recommendation

Fix is non-trivial. The new balance must carefully be calculated and accounted for.

0xHelium - cvg token loss for users claiming cvgRewards

0xHelium

high

cvg token loss for users claiming cvgRewards

Summary

There is a precision loss in SdtStakingPositionService.claimCvgRewards() that will lead to users receiving less staking rewards (cvg).

Vulnerability Detail

SdtStakingPositionService.claimCvgRewards() will cause a loss of precision when calculating the claimable amount. The linked code snippet is where the issue happens.

For example if

  • tokenStaked = 157
  • _cycleInfo[lastClaimedCycle].cvgRewardsAmount = 100
  • _cycleInfo[lastClaimedCycle].totalStaked = 1000
  • claimableAmount will be (157*100)/1000 // it will return 15 instead of 15.7 because of solidity truncation
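The truncation is easy to reproduce with the report's numbers, along with the multiplier approach suggested below (the 1e18 scale factor is the conventional choice, not something taken from the contract):

```python
token_staked = 157
cvg_rewards_amount = 100
total_staked = 1000

# Solidity-style integer division truncates the 0.7:
claimable_amount = token_staked * cvg_rewards_amount // total_staked

# Scaling before dividing preserves the residual until the final step:
PRECISION = 10**18
claimable_scaled = token_staked * cvg_rewards_amount * PRECISION // total_staked
```

The scaled value carries the full 15.7 (times 1e18), so a contract that accumulates in scaled units and divides once at payout loses far less to rounding.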

Impact

Users will get less rewards than they should; in the long run these small amounts (15.7 - 15 = 0.7 in our example) accumulate into a significant loss.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Staking/StakeDAO/SdtStakingPositionService.sol#L338-L383

Tool used

Manual Review,
VsCode,
Remix

Recommendation

Use a multiplier for making operations that can lead to rounding down issues

Duplicate of #53

ksksks - SdtBuffer, CvgSdtBuffer transfer ownership to potentially 0 address

ksksks

medium

SdtBuffer, CvgSdtBuffer transfer ownership to potentially 0 address

Summary

SdtBuffer, CvgSdtBuffer transfer ownership to potentially 0 address

Vulnerability Detail

SdtBuffer and CvgSdtBuffer call _transferOwnership(_cvgControlTower.treasuryDao()).

However, _cvgControlTower.treasuryDao() may return the zero address if treasuryDao was set to the zero address.

This will result in loss of ownership of the contract.

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/SdtBuffer.sol#L64

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/CvgSdtBuffer.sol#L64

Impact

Loss of ownership, cannot call setProcessorRewardsPercentage

Code Snippet

Tool used

Manual Review

Recommendation

Check treasuryDao is not 0 address

        address treasuryDao = _cvgControlTower.treasuryDao();
        require(treasuryDao != address(0));
        _transferOwnership(treasuryDao);

Duplicate of #22

Oxd1z - calls-loop

Oxd1z

medium

calls-loop

Summary

Calls inside the loop might lead to a denial-of-service attack.

Vulnerability Detail

LockingPositionDelegate.manageOwnedAndDelegated(LockingPositionDelegate.OwnedAndDelegated) has external calls inside a loop: require(bool,string)(msg.sender == cvgControlTower.lockingPositionManager().ownerOf(_ownedAndDelegatedTokens.owneds[i]),TOKEN_NOT_OWNED)

Impact

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionDelegate.sol#L337-L340

Tool used

Slither
Manual Review

Recommendation

Favor a pull over push strategy for external calls.

bughuntoor - If the multiple calls to `writeStakingRewards` cross a week's end, it will result in unfair distribution of rewards

bughuntoor

medium

If the multiple calls to writeStakingRewards cross a week's end, it will result in unfair distribution of rewards

Summary

If the multiple calls to writeStakingRewards cross a week's end, it will result in unfair distribution of rewards

Vulnerability Detail

The first call to writeStakingRewards calls checkpoints, which makes sure all gauges are checkpointed up to the current week. However, an issue arises if the week's end is crossed after _checkpoints. This allows stale gauge values to be used. If those values are already added to totalWeightLocked, it will be inflated (as gauge weights can only decrease over time while votes are locked).

    function _setTotalWeight() internal {
        ICvgControlTower _cvgControlTower = cvgControlTower;
        IGaugeController _gaugeController = _cvgControlTower.gaugeController();
        uint128 _cursor = cursor;
        uint128 _totalGaugeNumber = uint128(gauges.length);

        /// @dev compute the theoric end of the chunk
        uint128 _maxEnd = _cursor + cvgRewardsConfig.maxLoopSetTotalWeight;
        /// @dev compute the real end of the chunk regarding the length of staking contracts
        uint128 _endChunk = _maxEnd < _totalGaugeNumber ? _maxEnd : _totalGaugeNumber;

        /// @dev if last chunk of the total weighted locked processs
        if (_endChunk == _totalGaugeNumber) {
            /// @dev reset the cursor to 0 for _distributeRewards
            cursor = 0;
            /// @dev set the step as DISTRIBUTE for reward distribution
            state = State.DISTRIBUTE;
        } else {
            /// @dev setup the cursor at the index start for the next chunk
            cursor = _endChunk;
        }

        totalWeightLocked += _gaugeController.get_gauge_weight_sum(_getGaugeChunk(_cursor, _endChunk));

        /// @dev emit the event only at the last chunk
        if (_endChunk == _totalGaugeNumber) {
            emit SetTotalWeight(_cvgControlTower.cvgCycle(), totalWeightLocked);
        }
    }

Then, if any gauges have been manually checkpointed before the subsequent call to _distributeCvgRewards, the sum of all gauge weights will be less than totalWeightLocked, resulting in under-distribution of rewards. If no gauges have been manually checkpointed, rewards are simply distributed unfairly (as the values are not up to date).

    function _distributeCvgRewards() internal {
        ICvgControlTower _cvgControlTower = cvgControlTower;
        IGaugeController gaugeController = _cvgControlTower.gaugeController();

        uint256 _cvgCycle = _cvgControlTower.cvgCycle();

        /// @dev number of gauge in GaugeController
        uint128 _totalGaugeNumber = uint128(gauges.length);
        uint128 _cursor = cursor;

        uint256 _totalWeight = totalWeightLocked;
        /// @dev cursor of the end of the actual chunk
        uint128 cursorEnd = _cursor + cvgRewardsConfig.maxChunkDistribute;

        /// @dev if the new cursor is higher than the number of gauge, cursor become the number of gauge
        if (cursorEnd > _totalGaugeNumber) {
            cursorEnd = _totalGaugeNumber;
        }

        /// @dev reset the cursor if the distribution has been done
        if (cursorEnd == _totalGaugeNumber) {
            cursor = 0;

            /// @dev reset the total weight of the gauge
            totalWeightLocked = 0;

            /// @dev update the states to the control_tower sync
            state = State.CONTROL_TOWER_SYNC;
        }
        /// @dev update the global cursor in order to be taken into account on next chunk
        else {
            cursor = cursorEnd;
        }

        uint256 stakingInflation = stakingInflationAtCycle(_cvgCycle);
        uint256 cvgDistributed;
        InflationInfo[] memory inflationInfos = new InflationInfo[](cursorEnd - _cursor);
        address[] memory addresses = _getGaugeChunk(_cursor, cursorEnd);
        /// @dev fetch weight of gauge relative to the cursor
        uint256[] memory gaugeWeights = gaugeController.get_gauge_weights(addresses);
        for (uint256 i; i < gaugeWeights.length; ) {
            /// @dev compute the amount of CVG to distribute in the gauge
            cvgDistributed = (stakingInflation * gaugeWeights[i]) / _totalWeight;

            /// @dev Write the amount of CVG to distribute in the staking contract
            ICvgAssetStaking(addresses[i]).processStakersRewards(cvgDistributed);

            inflationInfos[i] = InflationInfo({
                gauge: addresses[i],
                cvgDistributed: cvgDistributed,
                gaugeWeight: gaugeWeights[i]
            });

            unchecked {
                ++i;
            }
        }

        emit EventChunkWriteStakingRewards(_cvgCycle, _totalWeight, inflationInfos);
    }

Note: since the requirement for calling checkpoint is that at least 7 days have passed since the last distribution, the delta between the checkpoint and the end of the week will gradually decrease every week, until a distribution eventually crosses over a week's end. The issue above is bound to happen given a long enough timeframe.
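A sketch with illustrative numbers (two gauges whose votes decay between the two phases) shows the shortfall when stale weights inflate the denominator:

```python
# _setTotalWeight reads weights just before the week rolls over:
stale_weights = [100, 100]
total_weight_locked = sum(stale_weights)  # 200

# _distributeCvgRewards later reads freshly checkpointed weights:
fresh_weights = [90, 90]
staking_inflation = 1000

distributed = sum(staking_inflation * w // total_weight_locked for w in fresh_weights)
shortfall = staking_inflation - distributed  # CVG of this cycle never handed out
```

Because the numerators use the fresh (smaller) weights while the denominator keeps the stale (larger) sum, part of the cycle's inflation is never distributed.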

Impact

Unfair distribution of rewards. Possible permanent loss of rewards.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/CvgRewards.sol#L279C1-L338C6

Tool used

Manual Review

Recommendation

Add time constraints to writeStakingRewards in order to make sure it does not happen close to the end of the week

Duplicate of #178

avoloder - Possible to remove all gauges by providing only one address (gauge)

avoloder

medium

Possible to remove all gauges by providing only one address (gauge)

Summary

Due to wrong index manipulation in the removeGauge function, it is possible to remove all gauges by repeatedly providing an address that has already been removed.

Vulnerability Detail

For the sake of simplicity let's imagine that we have an array of addresses called gauges where gauges = [0x1, 0x2, 0x3] and we have a mapping of address => uint256 to track the ids of the gauges in the array. Also we used a function called addGauge(address gaugeAddress) to add these gauges https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/CvgRewards.sol#L129-L133

This means that our map (gaugesId) is equal to (0x1=0, 0x2=1, 0x3=2). Now, when removing a gauge with the specific address (let's say 0x1) we call the "removeGauge" function. https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/CvgRewards.sol#L139-L154

After the function above is executed, our gauges array will be [0x3, 0x2]; however, our gaugesId map will be (0x1=0, 0x2=1, 0x3=0). If there is a subsequent call to removeGauge with the same address that has already been removed (0x1), it will remove 0x3 instead, since 0x3 now occupies index 0 of the gauges array (the idGaugeToRemove will be 0).

This may lead to the accidental deletion of a wrong gauge.
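The faulty swap-and-pop can be reproduced in a small model (mirroring the report's walkthrough, not the exact Solidity; a missing mapping entry reads as 0, like a Solidity mapping):

```python
def add_gauge(gauges, gauge_id, addr):
    gauge_id[addr] = len(gauges)
    gauges.append(addr)

def remove_gauge(gauges, gauge_id, addr):
    """Mirrors CvgRewards.removeGauge: swap-and-pop with no membership check."""
    idx = gauge_id.get(addr, 0)  # cleared/unknown addresses resolve to index 0
    last = gauges[-1]
    gauges[idx] = last
    gauge_id[last] = idx
    gauges.pop()
    gauge_id[addr] = 0  # stale: the removed address now "points" at slot 0

gauges, gauge_id = [], {}
for a in ("0x1", "0x2", "0x3"):
    add_gauge(gauges, gauge_id, a)

remove_gauge(gauges, gauge_id, "0x1")  # gauges is now ["0x3", "0x2"]
remove_gauge(gauges, gauge_id, "0x1")  # removes "0x3" by mistake
```

Each repeated call with the stale address evicts whatever gauge currently sits at index 0, so the whole array can be drained this way.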

Impact

This could result in the inadvertent removal of an incorrect gauge when the same address is mistakenly provided twice.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/CvgRewards.sol#L139-L154

Tool used

Manual Review

Recommendation

Either check if the address exists in the gauges array before removing it (CvgRewards.sol) or assert the same thing in the GaugeController.vy (same like for adding the same gauge twice).
Another solution would be to handle the indices differently (starting with 1 instead of 0 and only assign 0 to the deleted ones).

Duplicate of #8

djanerch - No Storage Gap for Upgradeable Contract Might Lead to Storage Slot Collision

djanerch

medium

No Storage Gap for Upgradeable Contract Might Lead to Storage Slot Collision

Summary

Several contracts in the code base are intended to be upgradeable, but they don't have a storage gap.

Vulnerability Detail

For upgradeable contracts, there must be a storage gap to "allow developers to freely add new state variables in the future without compromising the storage compatibility with existing deployments" (quote OpenZeppelin). Otherwise it may be very difficult to write new implementation code. Without a storage gap, a variable in the child contract might be overwritten by the upgraded base contract if new variables are added to the base contract. This could have unintended and very serious consequences for the child contracts, potentially causing loss of user funds or causing the contract to malfunction completely.

Refer to the bottom part of this article: https://docs.openzeppelin.com/upgrades-plugins/1.x/writing-upgradeable

Impact

Several contracts are intended to be upgradeable contracts in the code base, including

=> LockingPositionManager.sol
=> LockingPositionService.sol
=> CvgRewards.sol
=> CvgSdtBuffer.sol
=> SdtBlackHole.sol
=> SdtBuffer.sol
=> SdtRewardReceiver.sol
=> SdtStakingPositionManager.sol
=> SdtStakingPositionService.sol

However, none of these contracts contain a storage gap. The storage gap is essential for upgradeable contracts because "it allows us to freely add new state variables in the future without compromising the storage compatibility with existing deployments". Refer to the bottom part of this article:

https://docs.openzeppelin.com/contracts/3.x/upgradeable

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionManager.sol#L23

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L26

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/CvgRewards.sol#L21

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/CvgSdtBuffer.sol#L25

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/SdtBlackHole.sol#L28

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/SdtBuffer.sol#L24

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Staking/StakeDAO/SdtRewardReceiver.sol#L32

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Staking/StakeDAO/SdtStakingPositionManager.sol#L21

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Staking/StakeDAO/SdtStakingPositionService.sol#L27

Tool used

Manual Review

Recommendation

Recommend adding appropriate storage gap at the end of upgradeable contracts such as the below. Please reference OpenZeppelin upgradeable contract templates.

uint256[50] private __gap;
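For illustration, the slot collision can be modeled in a few lines of Python (hypothetical variable names `baseOwner`, `baseFee`, `childBalance`; a toy model of sequential slot assignment, not the actual EVM layout rules):

```python
# Toy model: proxy storage as sequential slots, base contract's
# variables first, then the child's (illustrative only).
def layout(base_vars, child_vars):
    return {name: slot for slot, name in enumerate(base_vars + child_vars)}

# V1: base declares one variable, child appends its own.
v1 = layout(["baseOwner"], ["childBalance"])
# V2 upgrade adds a base variable without a storage gap:
v2 = layout(["baseOwner", "baseFee"], ["childBalance"])
# childBalance shifts from slot 1 to slot 2, so the new baseFee
# now occupies the slot where childBalance's value lives.
assert v1["childBalance"] == 1
assert v2["baseFee"] == 1 and v2["childBalance"] == 2

# With a gap, additions consume reserved slots and the child's slot is stable.
v1g = layout(["baseOwner"] + [f"__gap{i}" for i in range(50)], ["childBalance"])
v2g = layout(["baseOwner", "baseFee"] + [f"__gap{i}" for i in range(49)], ["childBalance"])
assert v1g["childBalance"] == v2g["childBalance"] == 51
```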

djanerch - Risks of frontrunning due to unsafe approval processes

djanerch

high

Risks of frontrunning due to unsafe approval processes

Summary

The smart contract approval vulnerabilities discussed here highlight the potential risks associated with unlimited approvals and approval frontrunning.

Vulnerability Detail

The vulnerability of approval frontrunning arises from the inherent timing dynamics of multiple approve calls. When a user initiates an approve request to modify the allowance after an initial approval, a window of opportunity is unintentionally created. Malicious actors can exploit this window by executing the transferFrom function before the user's transaction is included in the blockchain.

Impact

Scenario:
Attackers monitor the blockchain for pending transactions, particularly those involving approval calls. By strategically placing their transactions before a user's intended approval modification, they can front-run it and execute unauthorized transfers, moving more tokens than the user intended.
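A toy Python model of the allowance bookkeeping shows the arithmetic of the race (hypothetical actors; the approve/transferFrom semantics mirror ERC20):

```python
# Toy ERC20 allowance model demonstrating the approve race
# (illustrative only, not the audited contracts).
allowance = {}

def approve(owner, spender, amount):
    allowance[(owner, spender)] = amount   # overwrites, does not adjust

def transfer_from(owner, spender, amount):
    assert allowance[(owner, spender)] >= amount
    allowance[(owner, spender)] -= amount
    return amount

approve("alice", "bob", 100)
# Alice now wants to lower Bob's allowance to 50; Bob front-runs her tx:
spent = transfer_from("alice", "bob", 100)   # spends the old allowance first
approve("alice", "bob", 50)                  # Alice's tx lands afterwards
spent += transfer_from("alice", "bob", 50)   # then Bob spends the new one too

assert spent == 150  # Bob moved 150 tokens though Alice intended at most 100
```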

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Staking/StakeDAO/SdtRewardReceiver.sol#L253#L257

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/utils/SdtUtilities.sol#L214#L221

Tool used

Manual Review

Recommendation

To mitigate these vulnerabilities, developers are advised to avoid requiring unlimited approvals and instead implement an allowance system that limits approvals to the necessary amount. Utilizing functions like safeIncreaseAllowance and safeDecreaseAllowance from OpenZeppelin's SafeERC20 implementation can help prevent frontrunning attacks.

cawfree - Invariant Violation: `LockingPositionManager.sol#manageOwnedAndDelegated` `OwnedAndDelegated` properties are not collision-resistant.

cawfree

high

Invariant Violation: LockingPositionManager.sol#manageOwnedAndDelegated OwnedAndDelegated properties are not collision-resistant.

Summary

Due to missing validation rules, calls to manageOwnedAndDelegated on the LockingPositionManager will allow any external caller to specify _ownedAndDelegatedTokens calldata that can contain duplicate token identifiers, provided the caller is indeed the owner of these tokens.

Vulnerability Detail

When updating the tokenOwnedAndDelegated mapping via manageOwnedAndDelegated, a malicious caller is permitted to pass an arbitrary calldata value of OwnedAndDelegated.

The OwnedAndDelegated calldata struct corresponds to three caller-defined uint256[] arrays. The contents of these arrays are evaluated to determine the owneds, mgDelegateds and veDelegateds to be processed on behalf of the msg.sender.

As shown below, when interpreting the contents of these arrays, the LockingPositionDelegate only cares to ensure the caller is indeed the owner of these tokens, and not whether these tokens have been processed by a previous loop iteration:

/**
 * @notice Allow a user to manage the tokens id (owned and delegated) used to represent their voting power.
 * @dev This prevents bad actors who will spam an address by transferring or delegating a lot of VE/MG positions.
 * | This will prevent the oog when the voting/metagovernance power is calculated.
 * @param _ownedAndDelegatedTokens array of owned/veDelegated/mgDelegated tokenIds allowed
 */
function manageOwnedAndDelegated(OwnedAndDelegated calldata _ownedAndDelegatedTokens) external {
    /** @dev Clear the struct owneds and delegateds tokenId allowed for this user.*/
    delete tokenOwnedAndDelegated[msg.sender];

    /** @dev Add new owned tokenIds allowed for this user.*/
    for (uint256 i; i < _ownedAndDelegatedTokens.owneds.length;) { /// @audit i.e. [69, 69, 69]
        /** @dev Check if tokenId is owned by the user.*/
        require(
            msg.sender == cvgControlTower.lockingPositionManager().ownerOf(_ownedAndDelegatedTokens.owneds[i]),
            "TOKEN_NOT_OWNED"
        );
        tokenOwnedAndDelegated[msg.sender].owneds.push(_ownedAndDelegatedTokens.owneds[i]); /// @audit
        unchecked {
            ++i;
        }
    }
    /** @dev Add new mgCvg delegated tokenIds allowed for this user.*/
    for (uint256 i; i < _ownedAndDelegatedTokens.mgDelegateds.length;) {
        /** @dev Check if the user is a mgCvg delegatee for this tokenId.*/
        (, , uint256 _toIndex) = getMgDelegateeInfoPerTokenAndAddress(
            _ownedAndDelegatedTokens.mgDelegateds[i],
            msg.sender
        );
        require(_toIndex != 999, "NFT_NOT_MG_DELEGATED");
        tokenOwnedAndDelegated[msg.sender].mgDelegateds.push(_ownedAndDelegatedTokens.mgDelegateds[i]); /// @audit
        unchecked {
            ++i;
        }
    }
    /** @dev Add new veCvg delegated tokenIds allowed for this user.*/
    for (uint256 i; i < _ownedAndDelegatedTokens.veDelegateds.length;) {
        /** @dev Check if the user is the veCvg delegatee for this tokenId.*/
        require(msg.sender == delegatedVeCvg[_ownedAndDelegatedTokens.veDelegateds[i]], "NFT_NOT_VE_DELEGATED");
        tokenOwnedAndDelegated[msg.sender].veDelegateds.push(_ownedAndDelegatedTokens.veDelegateds[i]); /// @audit
        unchecked {
            ++i;
        }
    }
}

As we can see in all three instances, provided the msg.sender has sufficient access control, they can create arbitrarily long arrays of duplicate data and cache these in their current tokenOwnedAndDelegated[msg.sender] entry.

Impact

  1. When coupled with LockingPositionService#mgCvgVotingPowerPerAddress, it can be demonstrated that a user's voting power can be gamed through this manipulation:
(uint256[] memory tokenIdsOwneds, uint256[] memory tokenIdsDelegateds) = _lockingPositionDelegate
    .getTokenMgOwnedAndDelegated(_user);

/** @dev Sum voting power from delegated (allowed) tokenIds to _user. */
for (uint256 i; i < tokenIdsDelegateds.length; ) {
    uint256 _tokenId = tokenIdsDelegateds[i];
    (uint256 _toPercentage, , uint256 _toIndex) = _lockingPositionDelegate.getMgDelegateeInfoPerTokenAndAddress(
        _tokenId,
        _user
    );
    /** @dev Check if is really delegated, if not mg voting power for this tokenId is 0. */
    if (_toIndex < 999) {
        uint256 _tokenBalance = balanceOfMgCvg(_tokenId);
        _totalMetaGovernance += (_tokenBalance * _toPercentage) / MAX_PERCENTAGE;
    }

    unchecked {
        ++i;
    }
}

As you can see, since we allow the arrays of tokenIdsOwneds and tokenIdsDelegateds to grow unbounded, this exploit has the ability to undermine the maximum percentage allocation for a single token, which could be used to drastically amplify voting power.

  2. A similar error takes place in LockingPositionService#veCvgVotingPowerPerAddress, where again the accumulated _totalVotingPower can be gamed through duplicate array contents:
function veCvgVotingPowerPerAddress(address _user) external view returns (uint256) {
    uint256 _totalVotingPower;

    ILockingPositionDelegate _lockingPositionDelegate = cvgControlTower.lockingPositionDelegate();

    (uint256[] memory tokenIdsOwneds, uint256[] memory tokenIdsDelegateds) = _lockingPositionDelegate
        .getTokenVeOwnedAndDelegated(_user);

    /** @dev Sum voting power from delegated tokenIds to _user. */
    for (uint256 i; i < tokenIdsDelegateds.length; ) {
        uint256 _tokenId = tokenIdsDelegateds[i];
        /** @dev Check if is really delegated, if not ve voting power for this tokenId is 0. */
        if (_user == _lockingPositionDelegate.delegatedVeCvg(_tokenId)) {
            _totalVotingPower += balanceOfVeCvg(_tokenId);
        }

        unchecked {
            ++i;
        }
    }
}
  3. As stated by the developer, manageOwnedAndDelegated was created to help avoid OOG errors by caching the results of delegation evaluation. This likely rests on the premise that real one-to-one token ownership would act as a dampening mechanism against excessively long loops. However, since this exploit is not constrained by any scarce resource, a malicious user can reintroduce the feasibility of OOG reversion by supplying excessively long arrays.
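To illustrate the amplification, a toy Python model of the delegated-power summation (hypothetical balances; the per-token delegation check mirrors the loops above) shows the power scaling linearly with the number of duplicates:

```python
# Toy model of voting-power aggregation over a user-supplied token list
# (illustrative values and names, not the audited contracts).
ve_balance = {7: 40}              # tokenId -> voting-power balance
delegated_to = {7: "mallory"}     # tokenId -> delegatee

def voting_power(user, token_ids):
    total = 0
    for t in token_ids:
        # The per-token check passes every time the id appears,
        # so duplicates are counted again and again.
        if delegated_to.get(t) == user:
            total += ve_balance[t]
    return total

assert voting_power("mallory", [7]) == 40        # honest input
assert voting_power("mallory", [7] * 10) == 400  # same id repeated 10 times
```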

Code Snippet

/**
 * @notice Allow a user to manage the tokens id (owned and delegated) used to represent their voting power.
 * @dev This prevents bad actors who will spam an address by transferring or delegating a lot of VE/MG positions.
 * | This will prevent the oog when the voting/metagovernance power is calculated.
 * @param _ownedAndDelegatedTokens array of owned/veDelegated/mgDelegated tokenIds allowed
 */
function manageOwnedAndDelegated(OwnedAndDelegated calldata _ownedAndDelegatedTokens) external {
    /** @dev Clear the struct owneds and delegateds tokenId allowed for this user.*/
    delete tokenOwnedAndDelegated[msg.sender];

    /** @dev Add new owned tokenIds allowed for this user.*/
    for (uint256 i; i < _ownedAndDelegatedTokens.owneds.length;) {
        /** @dev Check if tokenId is owned by the user.*/
        require(
            msg.sender == cvgControlTower.lockingPositionManager().ownerOf(_ownedAndDelegatedTokens.owneds[i]),
            "TOKEN_NOT_OWNED"
        );
        tokenOwnedAndDelegated[msg.sender].owneds.push(_ownedAndDelegatedTokens.owneds[i]);
        unchecked {
            ++i;
        }
    }
    /** @dev Add new mgCvg delegated tokenIds allowed for this user.*/
    for (uint256 i; i < _ownedAndDelegatedTokens.mgDelegateds.length;) {
        /** @dev Check if the user is a mgCvg delegatee for this tokenId.*/
        (, , uint256 _toIndex) = getMgDelegateeInfoPerTokenAndAddress(
            _ownedAndDelegatedTokens.mgDelegateds[i],
            msg.sender
        );
        require(_toIndex != 999, "NFT_NOT_MG_DELEGATED");
        tokenOwnedAndDelegated[msg.sender].mgDelegateds.push(_ownedAndDelegatedTokens.mgDelegateds[i]);
        unchecked {
            ++i;
        }
    }
    /** @dev Add new veCvg delegated tokenIds allowed for this user.*/
    for (uint256 i; i < _ownedAndDelegatedTokens.veDelegateds.length;) {
        /** @dev Check if the user is the veCvg delegatee for this tokenId.*/
        require(msg.sender == delegatedVeCvg[_ownedAndDelegatedTokens.veDelegateds[i]], "NFT_NOT_VE_DELEGATED");
        tokenOwnedAndDelegated[msg.sender].veDelegateds.push(_ownedAndDelegatedTokens.veDelegateds[i]);
        unchecked {
            ++i;
        }
    }
}

Tool used

Manual Review, Visual Studio Code, GitHub

Recommendation

Consider using an EnumerableSet instead of a uint256[] to prevent the existence of duplicates.

Duplicate of #126

pipidu83 - `gaugeController` can add twice the same `gaugeAddress` to the `gauges` array, leading to faulty behavior of the `removeGauge` function.

pipidu83

high

gaugeController can add twice the same gaugeAddress to the gauges array, leading to faulty behavior of the removeGauge function.

Summary

In the addGauge function, no check is made on the presence of gaugeAddress in the gauges array.

gaugeController's attempt to remove the duplicated gaugeAddress would then lead to removing the wrong gauge from the array.

Vulnerability Detail

Let's say 2 gauges have been added and gaugeAddress3 is not one of them.
Now let the gaugeController call addGauge with the parameter gaugeAddress3.
We will then have gaugesId[gaugeAddress3] == 2 and gauges == [gaugeAddress1, gaugeAddress2, gaugeAddress3]

Now if gaugeController calls the addGauge function again with the same gaugeAddress3 parameter, we will then have gaugesId[gaugeAddress3] == 3 and gauges == [gaugeAddress1, gaugeAddress2, gaugeAddress3, gaugeAddress3].

We then want to remove gaugeAddress3 from the gauges array by calling the removeGauge with parameter gaugeAddress3.

We will then have idGaugeToRemove = gaugesId[gaugeAddress3], i.e. idGaugeToRemove == 3, and lastGauge = gauges[gauges.length - 1], i.e. lastGauge == gaugeAddress3.

Next line sets gaugesId[lastGauge] to idGaugeToRemove i.e. gaugesId[gaugeAddress3] == 3, then we set gaugesId[gaugeAddress3] to 0.

Finally, we set gauges[idGaugeToRemove] = lastGauge i.e. we set gauges[3] to gaugeAddress3 and pop the last element of gauges, meaning our gauges array now looks like [gaugeAddress1, gaugeAddress2, gaugeAddress3].

However we now have gaugesId[gaugeAddress3] == 0 because of the gaugesId[gaugeAddress] = 0; line.

Let's now call removeGauge again with the same gaugeAddress3 parameter.

We then have similarly idGaugeToRemove == 0, lastGauge == gaugeAddress3.
We set gaugesId[gaugeAddress3] to 0 (it is already the case) and gaugesId[gaugeAddress3] to 0 (lines 145 and 147 do the same thing then).
Finally we set gauges[0] to gaugeAddress3 and we pop the last element, meaning our gauges array now looks like [gaugeAddress3, gaugeAddress2] meaning we removed the wrong gauge address!

Impact

We believe this vulnerability should be marked as HIGH, as it causes the removeGauge function to remove the wrong addresses as shown above, altering the list of gauges, which is central to the correct functioning of the contract.

Code Snippet

Below are the definitions of the addGauge and removeGauge functions

function addGauge(address gaugeAddress) external {
        require(address(cvgControlTower.gaugeController()) == msg.sender, "NOT_GAUGE_CONTROLLER");
        gauges.push(gaugeAddress);
        gaugesId[gaugeAddress] = gauges.length - 1;
     }

and

function removeGauge(address gaugeAddress) external {
        require(address(cvgControlTower.gaugeController()) == msg.sender, "NOT_GAUGE_CONTROLLER");
        uint256 idGaugeToRemove = gaugesId[gaugeAddress];
        address lastGauge = gauges[gauges.length - 1];

        /// @dev replace id of last gauge by deleted one
        gaugesId[lastGauge] = idGaugeToRemove;
        /// @dev Set ID of gauge as 0
        gaugesId[gaugeAddress] = 0;

        /// @dev moove last gauge address to the id of the deleted one
        gauges[idGaugeToRemove] = lastGauge;

        /// @dev remove last array element
        gauges.pop();
    }

Tool used

Manual Review / Visual Studio

Recommendation

The fix is relatively straightforward: we just need to check for the presence of gaugeAddress in the gaugesId mapping before adding it. (Note that we use gaugesId and not the gauges array directly, as checking for a key in a mapping is easier than checking for an element in an array; gaugesId[gaugeAddress] will simply return 0 if the element does not exist.)

The addGauge function would then look like the below

function addGauge(address gaugeAddress) external {
        require(address(cvgControlTower.gaugeController()) == msg.sender, "NOT_GAUGE_CONTROLLER");
        require(gaugesId[gaugeAddress] == 0, "Address already added");
        gauges.push(gaugeAddress);
        gaugesId[gaugeAddress] = gauges.length - 1;
 }
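Note, however, that with 0-based ids the check gaugesId[gaugeAddress] == 0 cannot distinguish the gauge legitimately stored at index 0 from an absent address. Reserving id 0 to mean "not present" (the 1-based variant suggested in other duplicates of this issue) avoids the ambiguity. A minimal Python sketch of that variant (illustrative, not the contract code):

```python
# Sketch of the 1-based-id variant: id 0 is reserved for "not present",
# so a membership check on the mapping alone is unambiguous.
gauges = []
gauges_id = {}

def add_gauge(addr):
    assert gauges_id.get(addr, 0) == 0, "Address already added"
    gauges.append(addr)
    gauges_id[addr] = len(gauges)   # 1-based: the first gauge gets id 1, not 0

add_gauge("0x1")
try:
    add_gauge("0x1")                # the duplicate is now rejected,
    duplicate_rejected = False
except AssertionError:              # even for the very first gauge
    duplicate_rejected = True

assert gauges == ["0x1"] and duplicate_rejected
```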

djanerch - Unsafe usage of transfer and transferFrom

djanerch

medium

Unsafe usage of transfer and transferFrom

Summary

Using unsafe ERC20 methods without checking their results can cause transactions to silently fail.

Vulnerability Detail

There are many Weird ERC20 Tokens that won't work correctly using the standard IERC20 interface.

Impact

ERC20 implementations are not always consistent. Some implementations of transfer and transferFrom could return false on failure instead of reverting. It is safer to wrap such calls in require() statements to catch these failures.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/SdtBlackHole.sol#L83#L87

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/SdtFeeCollector.sol#L104#L116

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Token/CvgSDT.sol#L39#L42

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/utils/SdtUtilities.sol#L81#L104

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/utils/SdtUtilities.sol#L119#L172

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/utils/SdtUtilities.sol#L184#L204

Tool used

Manual Review

Recommendation

Utilize OpenZeppelin's SafeERC20 library.

To address these vulnerabilities, it is highly recommended to integrate OpenZeppelin’s SafeERC20 library into the smart contract. This library provides safeTransfer and safeTransferFrom functions designed to handle return value checks and accommodate tokens deviating from standard ERC-20 specifications.

Incorporating SafeERC20 reinforces the reliability of ERC-20 interactions within your smart contract, ensuring seamless compatibility with both compliant and non-compliant tokens. This proactive measure enhances the security and functionality of your protocol, minimizing the risk of silent failures caused by unchecked return values.
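A toy Python model illustrates the difference between ignoring and checking the return value of a false-returning token (illustrative classes and names, not the audited contracts or the SafeERC20 implementation):

```python
# Toy model: some tokens return False on failure instead of reverting.
class FalseReturningToken:
    def __init__(self, balances):
        self.balances = balances
    def transfer(self, frm, to, amount):
        if self.balances.get(frm, 0) < amount:
            return False          # silent failure, no revert
        self.balances[frm] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
        return True

def unsafe_send(token, frm, to, amount):
    token.transfer(frm, to, amount)     # return value ignored

def safe_send(token, frm, to, amount):
    ok = token.transfer(frm, to, amount)
    assert ok, "TRANSFER_FAILED"        # SafeERC20-style result check

token = FalseReturningToken({"pool": 10})
unsafe_send(token, "pool", "user", 50)  # silently does nothing
assert token.balances["pool"] == 10     # no error raised, state unchanged

failed = False
try:
    safe_send(token, "pool", "user", 50)
except AssertionError:
    failed = True
assert failed                           # the wrapped call surfaces the failure
```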

Duplicate of #114

8olidity - Potential Misinterpretation of Delegate Token ID Index in `getIndexForVeDelegatee()` Function

8olidity

medium

Potential Misinterpretation of Delegate Token ID Index in getIndexForVeDelegatee() Function

Summary

The code snippet in question pertains to the function getIndexForVeDelegatee() within the LockingPositionDelegate contract. This function is responsible for finding the index of a delegated tokenId within the delegatee's token ID list. However, there is a potential vulnerability in the handling of the return value when the delegated tokenId is not found.

Vulnerability Detail

When the delegated tokenId being searched for is not found in the delegatee's token ID list, the function returns 0 as the index value, which is misleading, since 0 is also a valid index for the first element. This can lead to incorrect assumptions or logic errors in functions that rely on the return value to determine the existence of a delegation.

Impact

The impact depends on the specific functions that rely on the return value of getIndexForVeDelegatee(). If these functions do not account for the fact that an index of 0 can mean either that the delegation was not found or that the target tokenId is the first element of the delegatee's token ID list, the result can be incorrect logic, unexpected behavior, or potential security risks.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionDelegate.sol#L194-L206

function getIndexForVeDelegatee(address _delegatee, uint256 _tokenId) public view returns (uint256) {
    uint256[] memory _tokenIds = veCvgDelegatees[_delegatee];
    uint256 _length = _tokenIds.length;

    for (uint256 i; i < _length;) {
        if (_tokenIds[i] == _tokenId) return i;
        unchecked {
            ++i;
        }
    }

    return 0;// @audit 
}
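A short Python model makes the ambiguity concrete (illustrative values only):

```python
# Toy model of getIndexForVeDelegatee's ambiguous sentinel:
# the "not found" value collides with a legitimate index.
def index_for(token_ids, token_id):
    for i, t in enumerate(token_ids):
        if t == token_id:
            return i
    return 0    # same value as "found at index 0"

assert index_for([42, 7], 42) == 0   # genuinely at index 0
assert index_for([42, 7], 99) == 0   # not delegated at all: indistinguishable
```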

Tool used

Manual Review

Recommendation

When using the return value of getIndexForVeDelegatee(), check if the index value is 0 and also verify that the delegatee's token ID list is not empty to differentiate between a delegated tokenId not found and the first element of the list being the target tokenId.

bughuntoor - Killing a gauge will result in mismatch between a gauge type's sum and the gauges' weights summed

bughuntoor

medium

Killing a gauge will result in mismatch between a gauge type's sum and the gauges' weights summed

Summary

Killing a gauge will break accounting within the GaugeController

Vulnerability Detail

Upon admin's decision, a gauge can be killed within the GaugeController. This would result in the gauge's weight being set to 0 and users being unable to vote towards the gauge. However, it would also break all internal accounting and would cause mismatch between a gauge_type's sum and the actual real sum of the weight of all gauges of that type.

@internal
def _change_gauge_weight(addr: address, weight: uint256):
    # Change gauge weight
    # Only needed when testing in reality
    gauge_type: int128 = self.gauge_types_[addr] - 1
    old_gauge_weight: uint256 = self._get_weight(addr)
    type_weight: uint256 = self._get_type_weight(gauge_type)
    old_sum: uint256 = self._get_sum(gauge_type)
    _total_weight: uint256 = self._get_total()
    next_time: uint256 = (block.timestamp + WEEK) / WEEK * WEEK

    self.points_weight[addr][next_time].bias = weight
    self.time_weight[addr] = next_time

    new_sum: uint256 = old_sum + weight - old_gauge_weight
    self.points_sum[gauge_type][next_time].bias = new_sum
    self.time_sum[gauge_type] = next_time

    _total_weight = _total_weight + new_sum * type_weight - old_sum * type_weight
    self.points_total[next_time] = _total_weight
    self.time_total = next_time

    log NewGaugeWeight(addr, block.timestamp, weight, _total_weight)

When killing the gauge and effectively setting its weight to 0, we only change the bias of the gauge and the bias of the gauge_type. However, no changes to the slopes are made. This is not a problem for the gauge's slope, as the gauge is killed and cannot have any voting power, but it does break the internal accounting for the gauge_type: the gauge_type's sum will keep decreasing with the already-killed gauge's slope. Over time, the sum will therefore be reduced by twice the killed gauge's weight.
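A toy Python model of the bias/slope decay (made-up weights and slopes, one-week time step) shows the tracked sum drifting below the real total of gauge weights by the killed gauge's slope each week:

```python
# Toy model of the gauge_type sum's decay after a kill
# (illustrative numbers, not the Vyper contract's state).
def decay(bias, slope, weeks):
    return max(bias - slope * weeks, 0)

live_bias, live_slope = 50, 10          # one remaining live gauge
killed_bias, killed_slope = 100, 20     # gauge killed at t = 0

# _change_gauge_weight zeroes the killed gauge's bias in the sum...
sum_bias = live_bias + killed_bias - killed_bias   # 50
# ...but its slope is left inside the sum's slope.
sum_slope = live_slope + killed_slope              # 30, should be 10

true_weight = decay(live_bias, live_slope, 1)      # 40 after one week
tracked_sum = decay(sum_bias, sum_slope, 1)        # 20: decays twice as fast

assert true_weight == 40 and tracked_sum == 20
assert true_weight - tracked_sum == killed_slope   # drift grows weekly
```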

Any functions depending on the sum of the gauge_type will not work properly.

Impact

Anything depending on a gauge_type's sum will not work properly

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/GaugeController.vy#L567C1-L589C69

Tool used

Manual Review

Recommendation

Upon killing a gauge, loop through its scheduled slope changes and remove them from self.changes_sum[gauge_type][t]

Duplicate of #94

bughuntoor - User can pass an array full of the same token id to `manageOwnedAndDelegated` and significantly increase their voting power

bughuntoor

high

User can pass an array full of the same token id to manageOwnedAndDelegated and significantly increase their voting power

Summary

User can significantly increase their voting power.

Vulnerability Detail

The manageOwnedAndDelegated function within the LockingPositionDelegate allows a user to manually set the tokens they own and the tokens delegated to them. This is to prevent malicious actors from spamming an address with dust-amount token ids/delegations in an attempt to cause an OOG error.

    function manageOwnedAndDelegated(OwnedAndDelegated calldata _ownedAndDelegatedTokens) external {
        /** @dev Clear the struct owneds and delegateds tokenId allowed for this user.*/
        delete tokenOwnedAndDelegated[msg.sender];

        /** @dev Add new owned tokenIds allowed for this user.*/
        for (uint256 i; i < _ownedAndDelegatedTokens.owneds.length;) {
            /** @dev Check if tokenId is owned by the user.*/
            require(
                msg.sender == cvgControlTower.lockingPositionManager().ownerOf(_ownedAndDelegatedTokens.owneds[i]),
                "TOKEN_NOT_OWNED"
            );
            tokenOwnedAndDelegated[msg.sender].owneds.push(_ownedAndDelegatedTokens.owneds[i]);
            unchecked {
                ++i;
            }
        }
        /** @dev Add new mgCvg delegated tokenIds allowed for this user.*/
        for (uint256 i; i < _ownedAndDelegatedTokens.mgDelegateds.length;) {
            /** @dev Check if the user is a mgCvg delegatee for this tokenId.*/
            (, , uint256 _toIndex) = getMgDelegateeInfoPerTokenAndAddress(
                _ownedAndDelegatedTokens.mgDelegateds[i],
                msg.sender
            );
            require(_toIndex != 999, "NFT_NOT_MG_DELEGATED");
            tokenOwnedAndDelegated[msg.sender].mgDelegateds.push(_ownedAndDelegatedTokens.mgDelegateds[i]);
            unchecked {
                ++i;
            }
        }
        /** @dev Add new veCvg delegated tokenIds allowed for this user.*/
        for (uint256 i; i < _ownedAndDelegatedTokens.veDelegateds.length;) {
            /** @dev Check if the user is the veCvg delegatee for this tokenId.*/
            require(msg.sender == delegatedVeCvg[_ownedAndDelegatedTokens.veDelegateds[i]], "NFT_NOT_VE_DELEGATED");
            tokenOwnedAndDelegated[msg.sender].veDelegateds.push(_ownedAndDelegatedTokens.veDelegateds[i]);
            unchecked {
                ++i;
            }
        }
    }

However, this introduces a new, much bigger problem, as there are no checks for duplicate values within the passed OwnedAndDelegated struct. This allows a user who, for example, owns only one NFT to pass it multiple times: it will pass the ownership check every time and will be added each time to the user's tokenOwnedAndDelegated[msg.sender].owneds.

Same thing works for delegations too.

When LockingPositionService computes the voting power of the user, it also does not check for duplicate values, allowing the vulnerability to be exploited:

    function veCvgVotingPowerPerAddress(address _user) external view returns (uint256) {
        uint256 _totalVotingPower;

        ILockingPositionDelegate _lockingPositionDelegate = cvgControlTower.lockingPositionDelegate();

        (uint256[] memory tokenIdsOwneds, uint256[] memory tokenIdsDelegateds) = _lockingPositionDelegate
            .getTokenVeOwnedAndDelegated(_user);

        /** @dev Sum voting power from delegated tokenIds to _user. */
        for (uint256 i; i < tokenIdsDelegateds.length; ) {
            uint256 _tokenId = tokenIdsDelegateds[i];
            /** @dev Check if is really delegated, if not ve voting power for this tokenId is 0. */
            if (_user == _lockingPositionDelegate.delegatedVeCvg(_tokenId)) {
                _totalVotingPower += balanceOfVeCvg(_tokenId);
            }

            unchecked {
                ++i;
            }
        }

        ILockingPositionManager _lockingPositionManager = cvgControlTower.lockingPositionManager();

        /** @dev Sum voting power from _user owned tokenIds. */
        for (uint256 i; i < tokenIdsOwneds.length; ) {
            uint256 _tokenId = tokenIdsOwneds[i];
            /** @dev Check if is really owned AND not delegated to another user,if not ve voting power for this tokenId is 0. */
            if (
                _lockingPositionDelegate.delegatedVeCvg(_tokenId) == address(0) &&
                _user == _lockingPositionManager.ownerOf(_tokenId)
            ) {
                _totalVotingPower += balanceOfVeCvg(_tokenId);
            }

            unchecked {
                ++i;
            }
        }

        return _totalVotingPower;
    }

Impact

Users can significantly increase their voting power by just passing their own NFT id multiple times

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionDelegate.sol#L330C1-L368C6

Tool used

Manual Review

Recommendation

Check for duplicate values within manageOwnedAndDelegated

Duplicate of #126

ksksks - CvgSdtBuffer.pullRewards - unchecked processor address leads to transfer to 0 address

ksksks

high

CvgSdtBuffer.pullRewards - unchecked processor address leads to transfer to 0 address

Summary

CvgSdtBuffer.pullRewards - unchecked processor address leads to transfer to 0 address

Vulnerability Detail

CvgSdtBuffer.pullRewards does not check that processor is not address 0.

This can lead to sending gauge rewards to 0 address.

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/CvgSdtBuffer.sol#L115

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/CvgSdtBuffer.sol#L136

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/CvgSdtBuffer.sol#L157

Impact

Transfer of SDT, sdFrax3Crv and CvgSdt to 0 address

Code Snippet

Tool used

Manual Review

Recommendation

        require(processor != address(0));

Duplicate of #22

Oxd1z - uninitialized local

Oxd1z

medium

uninitialized local

Summary

Uninitialized local variable

Vulnerability Detail

LockingPositionDelegate.getMgDelegateeInfoPerTokenAndAddress(uint256,address)._toPercentage is a local variable that is never initialized.

Impact

Using uninitialized variables can result in unpredictable and inconsistent behavior of the smart contract. This makes it difficult for developers to reason about the code and may lead to unexpected outcomes.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionDelegate.sol#L170

Tool used

Slither
Manual Review

Recommendation

Initialize all the variables. If a variable is meant to be initialized to zero, explicitly set it to zero to improve code readability.

0xGoodess - maxTokenIdsDelegated can be used to DoS a delegation

0xGoodess

medium

maxTokenIdsDelegated can be used to DoS a delegation

Summary

maxTokenIdsDelegated can be used to DoS a delegatee

Vulnerability Detail

delegateVeCvg and delegateMgCvg would make use of maxTokenIdsDelegated (which is set to ~25) to limit the number of tokenId delegation to the designated address.

However, anyone with a tokenId can delegate to a destination address, effectively meaning a delegatee address can be DoSed.

Consider a simple scenario:

  1. Alice wants to delegate her voting power (veCvg) to Bob.
  2. The attacker Josh creates 25 positions with different tokenIds and front-runs Alice with delegateVeCvg to fill up the veCvgDelegatees[_to] array to its length cap of 25.
  3. Alice can no longer delegate to Bob since Bob reaches the delegation maxTokenIdsDelegated cap.
  4. Currently there is no method for Alice to do anything about this.

    function delegateVeCvg(uint256 _tokenId, address _to) external onlyTokenOwner(_tokenId) {
        require(veCvgDelegatees[_to].length < maxTokenIdsDelegated, "TOO_MUCH_VE_TOKEN_ID_DELEGATED");
        /** @dev Find if this tokenId is already delegated to an address. */
        address previousOwner = delegatedVeCvg[_tokenId];
        if (previousOwner != address(0)) {
            /** @dev If it is  we remove the previous delegation.*/
            uint256 _toIndex = getIndexForVeDelegatee(previousOwner, _tokenId);
            uint256 _delegateesLength = veCvgDelegatees[previousOwner].length;
            /** @dev Removing delegation.*/
            veCvgDelegatees[previousOwner][_toIndex] = veCvgDelegatees[previousOwner][_delegateesLength - 1];
            veCvgDelegatees[previousOwner].pop();
        }

        /** @dev Associate tokenId to a new delegated address.*/
        delegatedVeCvg[_tokenId] = _to;

        if (_to != address(0)) {
            /** @dev Add delegation to the new address.*/
            veCvgDelegatees[_to].push(_tokenId);
        }
        emit DelegateVeCvg(_tokenId, _to);
    }
    function delegateMgCvg(uint256 _tokenId, address _to, uint96 _percentage) external onlyTokenOwner(_tokenId) {
        require(_percentage <= 100, "INVALID_PERCENTAGE");

        uint256 _delegateesLength = delegatedMgCvg[_tokenId].length;
        require(_delegateesLength < maxMgDelegatees, "TOO_MUCH_DELEGATEES");

        uint256 tokenIdsDelegated = mgCvgDelegatees[_to].length;
        require(tokenIdsDelegated < maxTokenIdsDelegated, "TOO_MUCH_MG_TOKEN_ID_DELEGATED");

        (uint256 _toPercentage, uint256 _totalPercentage, uint256 _toIndex) = getMgDelegateeInfoPerTokenAndAddress(
            _tokenId,
            _to
        );
        bool _isUpdate = _toIndex != 999;
        uint256 _newTotalPercentage = _isUpdate
            ? (_totalPercentage + _percentage - _toPercentage)
            : (_totalPercentage + _percentage);
        require(_newTotalPercentage <= 100, "TOO_MUCH_PERCENTAGE");

        require(_isUpdate || _percentage > 0, "CANNOT_REMOVE_NOT_DELEGATEE");

        /** @dev Delegating.*/
        if (_percentage > 0) {
            MgCvgDelegatee memory delegatee = MgCvgDelegatee({delegatee: _to, percentage: _percentage});

            /** @dev Updating delegatee.*/
            if (_isUpdate) {
                delegatedMgCvg[_tokenId][_toIndex] = delegatee;
            } else {
                /** @dev Adding new delegatee.*/
                delegatedMgCvg[_tokenId].push(delegatee);
                mgCvgDelegatees[_to].push(_tokenId);
            }
...
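The cap-based griefing above can be modeled with a minimal sketch. This is illustrative Python mirroring the names in the contract, not the real implementation; the cap value and tokenIds are assumptions.

```python
# Hypothetical model of the delegation cap check in delegateVeCvg.
MAX_TOKEN_IDS_DELEGATED = 25

ve_cvg_delegatees = {}  # delegatee address -> list of delegated tokenIds


def delegate_ve_cvg(token_id, to):
    delegatees = ve_cvg_delegatees.setdefault(to, [])
    # Mirrors: require(veCvgDelegatees[_to].length < maxTokenIdsDelegated)
    if len(delegatees) >= MAX_TOKEN_IDS_DELEGATED:
        raise RuntimeError("TOO_MUCH_VE_TOKEN_ID_DELEGATED")
    delegatees.append(token_id)


# Attacker front-runs with 25 dust positions delegated to Bob...
for attacker_token in range(25):
    delegate_ve_cvg(attacker_token, "bob")

# ...so Alice's legitimate delegation now reverts.
try:
    delegate_ve_cvg(1000, "bob")
    blocked = False
except RuntimeError:
    blocked = True
print(blocked)  # True: Bob can no longer receive any delegation
```

Because the check is on the delegatee rather than the delegator, anyone can consume Bob's entire quota.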

Impact

A delegatee can be DoSed.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionDelegate.sol#L249-L266

Tool used

Manual Review

Recommendation

Consider removing the cap, or adding a 2-step approval so that only delegations pre-approved by the delegatee are accepted.
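The suggested 2-step approval could look like the following sketch (a hypothetical design in illustrative Python, not project code; all names are assumptions):

```python
# Sketch: the delegatee must first allow a specific tokenId before
# that tokenId can be delegated to them.
approved = set()   # (delegatee, token_id) pairs the delegatee has allowed
delegatees = {}    # delegatee -> list of delegated tokenIds


def approve_delegation(delegatee, token_id):
    approved.add((delegatee, token_id))


def delegate(token_id, to):
    # Step 2 only succeeds if step 1 (approval) happened.
    if (to, token_id) not in approved:
        raise PermissionError("DELEGATION_NOT_APPROVED")
    delegatees.setdefault(to, []).append(token_id)


approve_delegation("bob", 42)   # Bob opts in to tokenId 42
delegate(42, "bob")             # succeeds
try:
    delegate(99, "bob")         # attacker's tokenId was never approved
except PermissionError:
    print("spam delegation blocked")
```

With this design an attacker's dust positions can no longer fill a victim's delegatee slots, so the cap could even be kept.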

djanerch - Gas limit DoS via unbounded operations

djanerch

medium

Gas limit DoS via unbounded operations

Summary

If a function requires more gas than the block gas limit to complete its execution, it will inevitably fail. These vulnerabilities typically occur in loops that iterate over dynamic data structures.

Vulnerability Detail

Certain functions in contracts take arrays as input and iterate over them without checking their sizes. This oversight can lead to reaching the block gas limit and resulting in a reverted transaction.

Impact

Functions vulnerable to gas limits can become uncallable, potentially locking funds or freezing the contract state.

Code Snippet

Tool used

Manual Review

Recommendation

To ensure that functions like these are bounded and prevent array exhaustion, include proper input validation mechanisms in your smart contract. Follow these general guidelines:

  1. Check Array Length:

    • Before iterating over arrays, verify that the length of the array is within reasonable bounds to prevent exhaustion. Utilize the require statement for this purpose.
    function claimMultipleLocking(ClaimTokenTde[] calldata claimTdes) external {
        require(claimTdes.length <= MAX_ARRAY_LENGTH, "Array length exceeds maximum");
        // rest of the function
    }

    Define MAX_ARRAY_LENGTH as a constant with an appropriate value.

  2. Limit Iteration:

    • Use a for loop to iterate over the array elements, ensuring that the loop index is incremented properly within the loop body. Avoid using unbounded loops relying on external conditions.
    function claimMultipleLocking(ClaimTokenTde[] calldata claimTdes) external {
        for (uint256 i = 0; i < claimTdes.length; i++) {
            require(claimTdes[i].tdeIds.length <= MAX_ARRAY_LENGTH, "Inner array length exceeds maximum");
            // rest of the loop body
        }
    }

    Ensure that inner arrays are also bounded.

  3. Gas Limit Consideration:

    • Recognize that large arrays or nested loops can consume a significant amount of gas, and there's a gas limit for each Ethereum block. If the array size or computation is too large, the function might fail to execute. Consider breaking down the task into smaller transactions if necessary.

Always tailor these validations to your specific use case and the constraints of your smart contract. Adjust the MAX_ARRAY_LENGTH and other parameters based on your system's requirements and limitations.

bughuntoor - Removing gauges during reward distribution may lead to DoS

bughuntoor

medium

Removing gauges during reward distribution may lead to DoS

Summary

Removing gauges during reward distribution may lead to DoS due to underflow.

Vulnerability Detail

All functions within writeStakingRewards keep a cursor to track up to which gauge they have checkpointed so far (in case there are too many gauges). However, since gauges can be removed while checkpoints are in progress, this can lead to a situation where cursor > _endChunk. In that case the call to _getGaugeChunk(cursor, _endChunk) will revert due to the following line of code:

    function _getGaugeChunk(uint256 from, uint256 to) internal view returns (address[] memory) {
        address[] memory chunk = new address[](to - from);

All 4 functions within writeStakingRewards share this exact pattern. If this happens in any of them, it will cause a DoS within the contract.

    function _checkpoints() internal {
        require(lastUpdatedTimestamp + 7 days <= block.timestamp, "NEED_WAIT_7_DAYS");

        ICvgControlTower _cvgControlTower = cvgControlTower;
        IGaugeController _gaugeController = _cvgControlTower.gaugeController();
        uint128 _cursor = cursor;
        uint128 _totalGaugeNumber = uint128(gauges.length);

        /// @dev if first chunk, to don't break gauges votes if someone votes between 2 writeStakingRewards chunks we need to lock the gauge votes on GaugeController
        if (_cursor == 0) {
            /// @dev Lock votes
            _gaugeController.set_lock(true);
        }

        /// @dev compute the theoretical end of the chunk
        uint128 _maxEnd = _cursor + cvgRewardsConfig.maxChunkCheckpoint;
        /// @dev compute the real end of the chunk regarding the length of the tAssetArray
        uint128 _endChunk = _maxEnd < _totalGaugeNumber ? _maxEnd : _totalGaugeNumber;

        /// @dev if last chunk of the checkpoint process
        if (_endChunk == _totalGaugeNumber) {
            /// @dev reset the cursor to 0 for _setTotalWeight
            cursor = 0;
            /// @dev set the step as LOCK_TOTAL_WEIGHT for reward distribution
            state = State.LOCK_TOTAL_WEIGHT;
        } else {
            /// @dev setup the cursor at the index start for the next chunk
            cursor = _endChunk;
        }

        /// @dev updates the weight of the chunked gauges
        _gaugeController.gauge_relative_weight_writes(_getGaugeChunk(_cursor, _endChunk));

        /// @dev emit the event only at the last chunk
        if (_endChunk == _totalGaugeNumber) {
            emit Checkpoints(_cvgControlTower.cvgCycle());
        }
    }

Impact

DoS within CvgRewards.sol

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/CvgRewards.sol#L351

Tool used

Manual Review

Recommendation

Add a check: if cursor > _endChunk, set cursor = _endChunk.
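A numeric sketch of the underflow and the suggested clamp (illustrative Python; the chunk size and gauge counts are assumptions, and the OverflowError stands in for the Solidity 0.8 checked-subtraction revert):

```python
# In Solidity 0.8, `new address[](to - from)` reverts when to < from.
def get_gauge_chunk(from_, to):
    if to < from_:                    # uint256 `to - from` would revert here
        raise OverflowError("underflow: to - from")
    return list(range(from_, to))


cursor = 10                           # saved after the previous chunk
total_gauges = 8                      # gauges were removed in the meantime
end_chunk = min(cursor + 5, total_gauges)   # _endChunk = 8 < cursor

try:
    get_gauge_chunk(cursor, end_chunk)
except OverflowError as e:
    print(e)                          # every subsequent call reverts -> DoS

# Suggested fix: clamp the cursor before building the chunk.
cursor = min(cursor, end_chunk)
print(get_gauge_chunk(cursor, end_chunk))  # [] -> empty chunk, no revert
```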

Duplicate of #8

ksksks - SdtBuffer.pullRewards - unchecked processor address leads to transfer to 0 address

ksksks

high

SdtBuffer.pullRewards - unchecked processor address leads to transfer to 0 address

Summary

SdtBuffer.pullRewards - unchecked processor address leads to transfer to 0 address

Vulnerability Detail

SdtBuffer.pullRewards does not check that processor is not address 0.

This can lead to sending gauge rewards to 0 address.

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/StakeDAO/SdtBuffer.sol#L127

Impact

Gauge rewards sent to address 0

Code Snippet

Tool used

Manual Review

Recommendation

        require(processor != address(0));

Duplicate of #22

0xHelium - CvgSDT token loss for users claiming claimCvgSdtRewards or claimCvgSdtMultiple

0xHelium

high

CvgSDT token loss for users claiming claimCvgSdtRewards or claimCvgSdtMultiple

Summary

There is precision loss in SdtStakingPositionService._claimCvgSdtRewards() that will lead to users receiving less staking rewards (CvgSDT).

Vulnerability Detail

The internal function SdtStakingPositionService._claimCvgSdtRewards(), called by claimCvgSdtRewards and claimCvgSdtMultiple, loses precision when calculating the _cvgClaimable amount. The linked code is where the issue happens.

For example if

  • tokenStaked = 157
  • _cycleInfo[lastClaimedCycle].cvgRewardsAmount = 100
  • totalStaked = 1000
  • claimableAmount will be (157*100)/1000 // it will return 15 instead of 15.7 because of solidity truncation
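The truncation above can be reproduced with plain integer arithmetic (a sketch standing in for the Solidity uint256 math; the 1e18 multiplier is one possible mitigation, not the project's code):

```python
# The numbers from the example above.
token_staked = 157
cvg_rewards_amount = 100
total_staked = 1000

# Solidity division truncates, just like Python's //:
claimable = token_staked * cvg_rewards_amount // total_staked
print(claimable)            # 15, not 15.7 -> 0.7 is silently lost

# Applying a precision multiplier before the division preserves the remainder:
PRECISION = 10**18
claimable_scaled = token_staked * cvg_rewards_amount * PRECISION // total_staked
print(claimable_scaled)     # 15.7 scaled by 1e18
```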

Impact

Users will get less rewards than they should; in the long run these small amounts (in our example 15.7 - 15 = 0.7) will accumulate and become significant.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Staking/StakeDAO/SdtStakingPositionService.sol#L466

Tool used

Manual Review,
VsCode

Recommendation

Apply a precision multiplier before performing operations that can otherwise round down.

Duplicate of #53

bughuntoor - `mgCvg` balances are wrongfully calculated

bughuntoor

medium

mgCvg balances are wrongfully calculated

Summary

Users with significant difference in their locks may have the same mgCVG voting power

Vulnerability Detail

The mgCvgCreated is based on the amount a user has used for voting and their lockDuration. However, due to rounding down within the VotingEscrow contract, some users may get unfairly rewarded in comparison to others.
Let's look at the code responsible for the mgCvgCreated amount within mintPosition

        if (ysPercentage != MAX_PERCENTAGE) {
            uint256 amountVote = amount * (MAX_PERCENTAGE - ysPercentage);

            /** @dev Timestamp of the end of locking. */
            _cvgControlTower.votingPowerEscrow().create_lock(
                tokenId,
                amountVote / MAX_PERCENTAGE,
                block.timestamp + (lockDuration + 1) * 7 days
            );
            /// @dev compute the amount of mgCvg
            _mgCvgCreated = (amountVote * lockDuration) / (MAX_LOCK * MAX_PERCENTAGE);

            /// @dev Automatically add the veCVG and mgCVG in the balance taken from Snapshot.
            if (isAddToManagedTokens) {
                _cvgControlTower.lockingPositionDelegate().addTokenAtMint(tokenId, receiver);
            }
        }

As we know, the voting escrow contract rounds the lock time down to the nearest week. However, this is not accounted for when calculating the mgCvgCreated; the amount is based entirely on the lockDuration.
Consider the following scenario:
Two users with equal stakes both call mintPosition with the same lockDuration of (say) 2 weeks, but one calls it at the beginning of the week and the other at the end of the week. Because of the rounding down to the nearest week, one of the users will have locked their tokens for ~2 weeks, while the other one will have locked them for ~1 week. However, both users will receive the same amount of mgCvg. This is unfair to both users.
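The scenario can be sketched numerically. This is an illustrative model of the week-flooring (mirroring mintPosition's `block.timestamp + (lockDuration + 1) * 7 days` and VotingEscrow's rounding), with assumed timestamps:

```python
WEEK = 7 * 86400


def actual_lock_seconds(now, lock_duration_weeks):
    # mintPosition passes now + (lockDuration + 1) weeks; the voting escrow
    # then floors the unlock time to a whole week.
    unlock = (now + (lock_duration_weeks + 1) * WEEK) // WEEK * WEEK
    return unlock - now


start_of_week = 1_000 * WEEK            # user A locks right at a week boundary
end_of_week = start_of_week + WEEK - 1  # user B locks just before the next one

a = actual_lock_seconds(start_of_week, 2)
b = actual_lock_seconds(end_of_week, 2)
print(a // 86400, b // 86400)  # A is locked ~7 days longer than B,
                               # yet both receive the same mgCvg
```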

Impact

Unfair calculation of mgCvg

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L266C1-L282C10

Tool used

Manual Review

Recommendation

Base the mgCvg calculation on block.timestamp.

Duplicate of #136

bughuntoor - Certain functions should not be usable when `GaugeController` is locked.

bughuntoor

medium

Certain functions should not be usable when GaugeController is locked.

Summary

Possible unfair over/under distribution of rewards

Vulnerability Detail

When writeStakingRewards is invoked for the first time it calls _checkpoints, which sets the lock in the GaugeController to true. This prevents any new vote changes; the idea is that until the rewards are fully distributed the gauges' weights do not change, so the distribution of rewards stays correct.
However, there are multiple unrestricted functions that can alter the outcome of the rewards, resulting not only in unfair distribution but also in rewards being over- or under-distributed.

    function _setTotalWeight() internal {
        ICvgControlTower _cvgControlTower = cvgControlTower;
        IGaugeController _gaugeController = _cvgControlTower.gaugeController();
        uint128 _cursor = cursor;
        uint128 _totalGaugeNumber = uint128(gauges.length);

        /// @dev compute the theoric end of the chunk
        uint128 _maxEnd = _cursor + cvgRewardsConfig.maxLoopSetTotalWeight;
        /// @dev compute the real end of the chunk regarding the length of staking contracts
        uint128 _endChunk = _maxEnd < _totalGaugeNumber ? _maxEnd : _totalGaugeNumber;

        /// @dev if last chunk of the total weighted locked processs
        if (_endChunk == _totalGaugeNumber) {
            /// @dev reset the cursor to 0 for _distributeRewards
            cursor = 0;
            /// @dev set the step as DISTRIBUTE for reward distribution
            state = State.DISTRIBUTE;
        } else {
            /// @dev setup the cursor at the index start for the next chunk
            cursor = _endChunk;
        }

        totalWeightLocked += _gaugeController.get_gauge_weight_sum(_getGaugeChunk(_cursor, _endChunk));

        /// @dev emit the event only at the last chunk
        if (_endChunk == _totalGaugeNumber) {
            emit SetTotalWeight(_cvgControlTower.cvgCycle(), totalWeightLocked);
        }
    }

If either change_gauge_weight or change_type_weight is called after totalWeightLocked is calculated, the distribution of rewards will be incorrect. When _distributeCvgRewards is called, some gauges may no longer have the value that was used to calculate totalWeightLocked, which can result in distributing too many or too few rewards. It also gives an unfair advantage/disadvantage to the different gauges.

    function _distributeCvgRewards() internal {
        ICvgControlTower _cvgControlTower = cvgControlTower;
        IGaugeController gaugeController = _cvgControlTower.gaugeController();

        uint256 _cvgCycle = _cvgControlTower.cvgCycle();

        /// @dev number of gauge in GaugeController
        uint128 _totalGaugeNumber = uint128(gauges.length);
        uint128 _cursor = cursor;

        uint256 _totalWeight = totalWeightLocked;
        /// @dev cursor of the end of the actual chunk
        uint128 cursorEnd = _cursor + cvgRewardsConfig.maxChunkDistribute;

        /// @dev if the new cursor is higher than the number of gauge, cursor become the number of gauge
        if (cursorEnd > _totalGaugeNumber) {
            cursorEnd = _totalGaugeNumber;
        }

        /// @dev reset the cursor if the distribution has been done
        if (cursorEnd == _totalGaugeNumber) {
            cursor = 0;

            /// @dev reset the total weight of the gauge
            totalWeightLocked = 0;

            /// @dev update the states to the control_tower sync
            state = State.CONTROL_TOWER_SYNC;
        }
        /// @dev update the global cursor in order to be taken into account on next chunk
        else {
            cursor = cursorEnd;
        }

        uint256 stakingInflation = stakingInflationAtCycle(_cvgCycle);
        uint256 cvgDistributed;
        InflationInfo[] memory inflationInfos = new InflationInfo[](cursorEnd - _cursor);
        address[] memory addresses = _getGaugeChunk(_cursor, cursorEnd);
        /// @dev fetch weight of gauge relative to the cursor
        uint256[] memory gaugeWeights = gaugeController.get_gauge_weights(addresses);
        for (uint256 i; i < gaugeWeights.length; ) {
            /// @dev compute the amount of CVG to distribute in the gauge
            cvgDistributed = (stakingInflation * gaugeWeights[i]) / _totalWeight;

            /// @dev Write the amount of CVG to distribute in the staking contract
            ICvgAssetStaking(addresses[i]).processStakersRewards(cvgDistributed);

            inflationInfos[i] = InflationInfo({
                gauge: addresses[i],
                cvgDistributed: cvgDistributed,
                gaugeWeight: gaugeWeights[i]
            });

            unchecked {
                ++i;
            }
        }

        emit EventChunkWriteStakingRewards(_cvgCycle, _totalWeight, inflationInfos);
    }
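The over-distribution can be shown with a small numeric sketch (illustrative weights and inflation, not on-chain values):

```python
# If a gauge weight changes between _setTotalWeight and
# _distributeCvgRewards, the per-gauge shares no longer sum to the inflation.
staking_inflation = 1_000

weights_at_snapshot = [40, 60]             # used to compute totalWeightLocked
total_weight = sum(weights_at_snapshot)    # 100

weights_at_distribution = [40, 90]         # gauge 2's weight raised mid-way
distributed = [staking_inflation * w // total_weight
               for w in weights_at_distribution]
print(sum(distributed))   # 1300 -> 300 CVG over-distributed
```

A weight decrease would symmetrically under-distribute, leaving CVG stuck.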

Impact

Unfair distribution of rewards; rewards over- or under-distributed.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Rewards/CvgRewards.sol#L244C1-L272C6

Tool used

Manual Review

Recommendation

Add a lock to change_gauge_weight and change_type_weight

Oxd1z - arbitrary-send-erc20

Oxd1z

medium

arbitrary-send-erc20

Summary

If an arbitrary from address can be passed to the transferFrom function without proper validation, it may allow an attacker to initiate token transfers from any address to the address(this) (the contract's address). This can result in unauthorized transfers of CVG tokens.

Vulnerability Detail

CvgAirdrop.claim(bytes32[]) uses arbitrary from in transferFrom: cvg.transferFrom(treasuryAirdrop,address(this),CLAIM)

Impact

Since this vulnerability is in the context of an airdrop claim (CvgAirdrop.claim), an attacker might manipulate or abuse the airdrop distribution by claiming tokens on behalf of arbitrary addresses.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Airdrop/CvgAirdrop.sol#L64-L74

Tool used

Manual Review

Recommendation

Use msg.sender as from in transferFrom.

Krishnakumarskr - LockingPositionService::increaseLockTime() should also increase both the ysCvg and mgCvg value

Krishnakumarskr

medium

LockingPositionService::increaseLockTime() should also increase both the ysCvg and mgCvg value

Summary

The increaseLockTime() function should increase both the mgCvg and ysCvg values too. Otherwise, users who increased the lock time will get a smaller share of tokens on reward distribution and less voting power than users who minted a position with the same lock time and CVG amount.

Vulnerability Detail

The value of ysCvg is calculated by the formula https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L584
The value of mgCvg is calculated by the formula https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L276

We can see that both the ysCvg and mgCvg values depend on lockDuration, which means the longer the locking period, the larger the ysCvg and mgCvg values. This makes sense, because people who choose to lock for a longer time should have higher incentives in shares and voting power.

But the increaseLockTime() function does not increment the ysCvg and mgCvg values. Here, the values of ysCvg and mgCvg should be re-calculated and updated in the state variables according to the new lock duration.

Impact

Not updating the two values disadvantages a user who increased their time lock, because that user gets a smaller share on reward distribution and less voting power.
Most importantly, since totalSuppliesTracking is updated on the old and new cycle, their portion of the share will be claimed by other users.
Though it is not an attack by external actors, it is a critical bug in the protocol that causes losses for users.

Code Snippet

Assume the current cycle is 5

  • Alice is minting a position with a lock duration of 55 and CVG amount of 100e18 and ysPercent 50
  • Her ysCvg and mgCvg value is 28645833333333333333
  • Now Bob minting a position with a lock duration of 43 and CVG amount of 100e18 and ysPercent 50(Both same as Alice)
  • His ysCvg and mgCvg value is 22395833333333333333
  • But then immediately Bob decided to increment the lock duration by '12', which resulted in a total of 60. (43 + 5 + 12).
  • After the increment his ysCvg and mgCvg value remains the same.
  • But, Bob now has the same CVG and lock duration as Alice but he has less ysCvg and mgCvg value.
  • Because of this his share amount on claiming reward will also be reduced. And, his remaining portion of the share is spread over to other users claiming at that TDE.
    (Below code is modified from unlock-test.spec.ts.)
    it("Check mgCvg values", async () => {
        //Alice and Bob CVG token approval
        await (await cvgContract.connect(user1).approve(lockingPositionService, LOCKING_POSITIONS[0].cvgAmount)).wait();
        await (await cvgContract.connect(user2).approve(lockingPositionService, LOCKING_POSITIONS[0].cvgAmount)).wait();

        //Alice minting position for 100e18, for locking period 55 and ysPercent is 50
        const res = await (
            await lockingPositionService.connect(user1).mintPosition(55, LOCKING_POSITIONS[0].cvgAmount, 50, user1, true)
        ).wait();

        //Bob minting position for  100e18, for locking period 43 and ysPercent is 50
        const res2 = await (
            await lockingPositionService.connect(user2).mintPosition(43, LOCKING_POSITIONS[0].cvgAmount, 50, user2, true)
        ).wait();

        console.log('Before time lock increase...')
        //Alice's token Id
        console.log(await lockingPositionService.lockingPositions(1)); //lastEndCycle: 60, mgCvgAmount: 28645833333333333333
        //Bob's token id
        console.log(await lockingPositionService.lockingPositions(2)); //lastEndCycle: 48, mgCvgAmount: 22395833333333333333


        //Bob increasing his locking period
        await lockingPositionService.connect(user2).increaseLockTime(2, 12);

        console.log('After time lock increase...');
        await increaseCvgCycle(contractUsers, 43);
        const user1YsBalance = await lockingPositionService.balanceOfYsCvgAt(1, 36);
        const user2YsBalance = await lockingPositionService.balanceOfYsCvgAt(2, 36);

        console.log(user1YsBalance, user2YsBalance);


        const totalYsSupplyHistory = await lockingPositionService.totalSupplyYsCvgHistories(36);

        console.log('Share of user1', ((BigNumber.from(user1YsBalance).mul(ethers.parseUnits('10', 20))).div(BigNumber.from(totalYsSupplyHistory))).toString())
        // ^ 561224489795918367347
        console.log('Share of user2', ((BigNumber.from(user2YsBalance).mul(ethers.parseUnits('10', 20))).div(BigNumber.from(totalYsSupplyHistory))).toString())
        // ^ 438775510204081632652

        //Alice's token Id
        console.log(await lockingPositionService.lockingPositions(1)); //lastEndCycle: 60, mgCvgAmount: 28645833333333333333
        //Bob's token id
        console.log(await lockingPositionService.lockingPositions(2)); //lastEndCycle: 60, mgCvgAmount: 22395833333333333333
});
  • In the above code we see that Alice has a larger share (561224489795918367347) than Bob (438775510204081632652). The shares were supposed to be equal; this imbalance happened because ysCvg and mgCvg are not updated in increaseLockTime().

Tool used

Manual Review

Recommendation

Re-calculate the mgCvgAmount and call _ysCvgCheckpoint in increaseLockTime:

         newMgCvgAmount = (amountVote * (newEndCycle - oldEndCycle)) / (MAX_LOCK * MAX_PERCENTAGE);
         lockingPosition.mgCvgAmount += newMgCvgAmount;
         _ysCvgCheckpoint(
              durationAdd,
              (lockingPosition.cvgLocked * lockingPosition.ysPercentage) / MAX_PERCENTAGE,
              actualCycle,
              newEndCycle
         );

bughuntoor - `balanceOfYsCvgAt` returns wrong value if `cycleId == _firstTdeCycle`

bughuntoor

high

balanceOfYsCvgAt returns wrong value if cycleId == _firstTdeCycle

Summary

balanceOfYsCvgAt returns wrong value if cycleId == _firstTdeCycle

Vulnerability Detail

In order to understand the issue we need to first look at how ysCvg is checkpointed.

    function _ysCvgCheckpoint(
        uint256 lockDuration,
        uint256 cvgLockAmount,
        uint256 actualCycle,
        uint256 endLockCycle
    ) internal {
        /** @dev Compute the amount of ysCVG on this Locking Position proportionally with the ratio of lockDuration and MAX LOCK duration. */
        uint256 ysTotalAmount = (lockDuration * cvgLockAmount) / MAX_LOCK;
        uint256 realStartCycle = actualCycle + 1;
        uint256 realEndCycle = endLockCycle + 1;
        /** @dev If the lock is not made on a TDE cycle,   we need to compute the ratio of ysCVG  for the current partial TDE */
        if (actualCycle % TDE_DURATION != 0) {
            /** @dev Get the cycle id of next TDE to be taken into account for this LockingPosition. */
            uint256 nextTdeCycle = (actualCycle / TDE_DURATION + 1) * TDE_DURATION + 1;
            /** @dev Represent the amount of ysCvg to be taken into account on the next TDE of this LockingPosition. */
            uint256 ysNextTdeAmount = ((nextTdeCycle - realStartCycle) * ysTotalAmount) / TDE_DURATION;

            totalSuppliesTracking[realStartCycle].ysToAdd += ysNextTdeAmount;

            /** @dev When a lock is greater than a TDE_DURATION */
            if (lockDuration >= TDE_DURATION) {
                /** @dev we add the calculations for the next full TDE */
                totalSuppliesTracking[nextTdeCycle].ysToAdd += ysTotalAmount - ysNextTdeAmount;
                totalSuppliesTracking[realEndCycle].ysToSub += ysTotalAmount;
            }
            /** @dev If the lock less than TDE_DURATION. */
            else {
                /** @dev We simply remove the amount from the supply calculation at the end of the TDE */
                totalSuppliesTracking[realEndCycle].ysToSub += ysNextTdeAmount;
            }
        }
        /** @dev If the lock is performed on a TDE cycle  */
        else {
            totalSuppliesTracking[realStartCycle].ysToAdd += ysTotalAmount;
            totalSuppliesTracking[realEndCycle].ysToSub += ysTotalAmount;
        }
    }

Here we need to make 2 key takeaways:

  1. The totalsupply at the current cycle is equal to the totalsupply at the previous cycle + totalSuppliesTracking[currentCycle].ysToAdd - totalSuppliesTracking[currentCycle].ysToSub
  2. If a user's lock duration is over 12 weeks (TDE_DURATION), ys balance starts at a significantly reduced value ((nextTdeCycle - realStartCycle) * ysTotalAmount) / TDE_DURATION;) and increases to ysTotalAmount at nextTdeCycle
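Takeaway 2 can be made concrete with a numeric sketch of the checkpoint formulas (illustrative values; TDE_DURATION = 12 follows the code above, MAX_LOCK = 96 is an assumption):

```python
# A lock longer than one TDE starts at the partial amount and only reaches
# the full ysTotalAmount exactly at nextTdeCycle.
TDE_DURATION = 12
MAX_LOCK = 96

actual_cycle = 11            # lock created one cycle before a TDE boundary
lock_duration = 24
cvg_lock_amount = 96_000

ys_total = lock_duration * cvg_lock_amount // MAX_LOCK
real_start = actual_cycle + 1
next_tde_cycle = (actual_cycle // TDE_DURATION + 1) * TDE_DURATION + 1
ys_next_tde = (next_tde_cycle - real_start) * ys_total // TDE_DURATION

print(ys_next_tde, ys_total)  # 2000 vs 24000: the balance jumps 12x
                              # at next_tde_cycle
```

In this worst case the partial value is 1/12 of the full one, which is exactly the "up to 11/12 reduced" figure in the Impact section.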

Now let's check how the user's balance is calculated in balanceOfYsCvgAt:

    function balanceOfYsCvgAt(uint256 _tokenId, uint256 _cycleId) public view returns (uint256) {
        require(_cycleId != 0, "NOT_EXISTING_CYCLE");

        LockingPosition memory _lockingPosition = lockingPositions[_tokenId];
        LockingExtension[] memory _extensions = lockExtensions[_tokenId];
        uint256 _ysCvgBalance;

        /** @dev If the requested cycle is before or after the lock , there is no balance. */
        if (_lockingPosition.startCycle >= _cycleId || _cycleId > _lockingPosition.lastEndCycle) {
            return 0;
        }
        /** @dev We go through the extensions to compute the balance of ysCvg at the cycleId */
        for (uint256 i; i < _extensions.length; ) {
            /** @dev Don't take into account the extensions if in the future. */
            if (_extensions[i].cycleId < _cycleId) {
                LockingExtension memory _extension = _extensions[i];
                uint256 _firstTdeCycle = TDE_DURATION * (_extension.cycleId / TDE_DURATION + 1);
                uint256 _ysTotal = (((_extension.endCycle - _extension.cycleId) *
                    _extension.cvgLocked *
                    _lockingPosition.ysPercentage) / MAX_PERCENTAGE) / MAX_LOCK;
                uint256 _ysPartial = ((_firstTdeCycle - _extension.cycleId) * _ysTotal) / TDE_DURATION;
                /** @dev For locks that last less than 1 TDE. */
                if (_extension.endCycle - _extension.cycleId <= TDE_DURATION) {
                    _ysCvgBalance += _ysPartial;
                } else {
                    _ysCvgBalance += _cycleId <= _firstTdeCycle ? _ysPartial : _ysTotal;  // @audit - important line 
                }
            }
            ++i;
        }
        return _ysCvgBalance;
    }

Let's look specifically at the case where _extension.endCycle - _extension.cycleId > TDE_DURATION (when we reach the else statement).
In the case where _cycleId == _firstTdeCycle, the returned value will be _ysPartial. Though, as we examined above, the ys balance has already increased at that exact cycle. This means that in this case balanceOfYsCvgAt will return a significantly reduced value.

In an example scenario where the user is the only ys staker and _extension.endCycle - _extension.cycleId > TDE_DURATION, there will be a mismatch between the result of calling balanceOfYsCvgAt with _firstTdeCycle as an argument and totalSupplyOfYsCvgAt for the same cycle.

Impact

balanceOfYsCvgAt will return a significantly reduced value any time it is called with cycleId == _firstTdeCycle (up to 11/12, or ~91%, reduced)

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L681

Tool used

Manual Review

Recommendation

Change the <= to <

-                    _ysCvgBalance += _cycleId <= _firstTdeCycle ? _ysPartial : _ysTotal;
 
+                    _ysCvgBalance += _cycleId < _firstTdeCycle ? _ysPartial : _ysTotal;

bughuntoor - Users can game the governance voting by delegating back-and-forth

bughuntoor

high

Users can game the governance voting by delegating back-and-forth

Summary

With the current delegation structure, governance voting can be gamed.

Vulnerability Detail

A key part of governance voting is that either snapshot values should be used or, after a vote, the votes should be locked. However, with the current implementation there are no snapshots of users' mgCvg voting power. Since delegations are never paused, at any time during a vote a user can delegate power from one of their wallets to a second one, vote, re-delegate all power back to the first wallet, and vote again, artificially increasing their voting power. Given enough wallets, the user can get essentially unlimited voting power.

Example scenario:

  1. User A has X voting power.
  2. User A votes once
  3. User A delegates their power to their other wallet
  4. User A votes from their other wallet
  5. Repeat.
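The double-counting can be modeled with a minimal sketch (illustrative Python; wallet names and the tallying logic are assumptions standing in for a vote count that reads live, non-snapshotted balances):

```python
# Without snapshots or a delegation lock, the same mgCvg is counted once
# per wallet it passes through.
voting_power = {"walletA": 100, "walletB": 0}
tallied = 0


def vote(wallet):
    # The tally reads the *current* balance, not a snapshot.
    global tallied
    tallied += voting_power[wallet]


def redelegate(src, dst):
    voting_power[dst] += voting_power[src]
    voting_power[src] = 0


vote("walletA")                  # 100 counted
redelegate("walletA", "walletB")
vote("walletB")                  # the same 100 counted again
print(tallied)                   # 200 tallied from only 100 real mgCvg
```

Each extra wallet adds another 100 to the tally, so the attack scales linearly with the number of wallets.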

Impact

Users can get essentially unlimited voting power

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/LockingPositionDelegate.sol#L278C1-L279C1

Tool used

Manual Review

Recommendation

Add a lock to delegateMgCvg so users cannot re-delegate during active votes. Or consider adding snapshot values to the contract.

0xHelium - User can mint cvgSdt without paying sdt in exchange

0xHelium

high

User can mint cvgSdt without paying sdt in exchange

Summary

The mint function in CvgSDT.sol makes an external call to sdt.transferFrom to transfer SDT from the sender to the multisig, but it never checks the call's return value. The SDT token's transferFrom returns a boolean indicating success or failure rather than reverting, so if the transfer fails, the call simply returns false and mint proceeds anyway. Failing to check the return value allows malicious users to mint CvgSDT for free.

Vulnerability Detail

There is an unchecked return value from an external contract call. Because the protocol loses monetary value if malicious users exploit this (users get free CvgSDT), I consider this a high-severity issue.
The corresponding SDT transferFrom function can be found here: sdt_token
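
A minimal Python sketch of the failure mode (a toy model, not the Solidity code; names are illustrative): a token that signals failure by returning False, and a mint that ignores that return value.

```python
class ToyToken:
    """Token that returns False on a failed transfer instead of reverting."""

    def __init__(self, balances):
        self.balances = balances

    def transfer_from(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            return False   # failure is only signalled via the return value
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount
        return True

def mint_unchecked(sdt, user, treasury, amount, minted):
    """Buggy pattern: the boolean result of transfer_from is discarded."""
    sdt.transfer_from(user, treasury, amount)    # return value ignored!
    minted[user] = minted.get(user, 0) + amount  # minted even on failure

sdt = ToyToken({"alice": 0})   # alice holds no SDT at all
minted = {}
mint_unchecked(sdt, "alice", "treasury", 50, minted)
```

After the call, alice has been credited 50 CvgSDT while the treasury received nothing; checking the boolean (or using a safe-transfer wrapper) would have aborted the mint.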

Impact

The CvgSDT contract will mint tokens to users' addresses for free.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Token/CvgSDT.sol#L40

Tool used

VsCode
Manual Review

Recommendation

Check the return value of sdt.transferFrom (for example via OpenZeppelin's SafeERC20.safeTransferFrom) before minting CvgSDT to the caller's address.

ksksks - CvgERC721TimeLockingUpgradeable setLock can be disabled by the owner

ksksks

medium

CvgERC721TimeLockingUpgradeable setLock can be disabled by the owner

Summary

CvgERC721TimeLockingUpgradeable setLock can be disabled by the owner

Vulnerability Detail

setLock requires that timestamp meet two conditions:

  1. timestamp >= block.timestamp + BUFFER,
  2. timestamp - block.timestamp < maxLockingTime

The two conditions cannot both be satisfied if maxLockingTime <= BUFFER.

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Token/CvgERC721TimeLockingUpgradeable.sol#L61-L62
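
Combining the two conditions gives `block.timestamp + BUFFER <= timestamp < block.timestamp + maxLockingTime`, an empty range whenever `maxLockingTime <= BUFFER`. A small Python check (illustrative values only) makes this concrete:

```python
def lockable(now, timestamp, buffer, max_locking_time):
    """Mirror the two setLock requirements on `timestamp`."""
    return (timestamp >= now + buffer          # condition 1
            and timestamp - now < max_locking_time)  # condition 2

BUFFER = 600  # illustrative buffer, in seconds

# with maxLockingTime == BUFFER, no timestamp can ever satisfy both checks
any_valid = any(lockable(0, t, BUFFER, BUFFER) for t in range(0, 10 * BUFFER))
```

Here `any_valid` is False: every candidate timestamp fails one of the two checks, so the NFT owner is permanently unable to set a lock.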

Impact

NFT owner cannot set lock

Code Snippet

Tool used

Manual Review

Recommendation

Require newMaxLockingTime > BUFFER in setMaxLockingTime, and initialize maxLockingTime to a value greater than BUFFER:

    uint256 public maxLockingTime = BUFFER + 1;
    function setMaxLockingTime(uint256 newMaxLockingTime) external onlyOwner {
        require(newMaxLockingTime > BUFFER);
        maxLockingTime = newMaxLockingTime;
    }

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Token/CvgERC721TimeLockingUpgradeable.sol#L53-L55

bughuntoor - Reducing a gauge's weight might actually give it a significant advantage

bughuntoor

high

Reducing a gauge's weight might actually give it a significant advantage

Summary

Reducing a gauge's weight might give it an unfair advantage compared with other gauges.

Vulnerability Detail

Based on their voting escrows, users can vote for gauges within GaugeController. The gauges are allocated the corresponding bias and slope.
Currently, there is an admin-privileged function change_gauge_weight which can be used to change a gauge's weight (more specifically, its bias):

@internal
def _change_gauge_weight(addr: address, weight: uint256):
    # Change gauge weight
    # Only needed when testing in reality
    gauge_type: int128 = self.gauge_types_[addr] - 1
    old_gauge_weight: uint256 = self._get_weight(addr)
    type_weight: uint256 = self._get_type_weight(gauge_type)
    old_sum: uint256 = self._get_sum(gauge_type)
    _total_weight: uint256 = self._get_total()
    next_time: uint256 = (block.timestamp + WEEK) / WEEK * WEEK

    self.points_weight[addr][next_time].bias = weight
    self.time_weight[addr] = next_time

    new_sum: uint256 = old_sum + weight - old_gauge_weight
    self.points_sum[gauge_type][next_time].bias = new_sum
    self.time_sum[gauge_type] = next_time

    _total_weight = _total_weight + new_sum * type_weight - old_sum * type_weight
    self.points_total[next_time] = _total_weight
    self.time_total = next_time

    log NewGaugeWeight(addr, block.timestamp, weight, _total_weight)

It can be expected that in some circumstances this function will be used to decrease a gauge's weight. However, under some circumstances it can end up acting as a boost.
Let's look at what happens after a gauge's weight is reduced.
Within _get_weight, which computes the gauge's weight, we will reach a state where pt.bias < d_bias (because the bias has been arbitrarily decreased by an admin).

@internal
def _get_weight(gauge_addr: address) -> uint256:
    """
    @notice Fill historic gauge weights week-over-week for missed checkins
            and return the total for the future week.
    @param gauge_addr Address of the gauge
    @return Gauge weight
    """
    t: uint256 = self.time_weight[gauge_addr]
    if t > 0:
        pt: Point = self.points_weight[gauge_addr][t]
        for i in range(500):
            if t > block.timestamp:
                break
            t += WEEK
            d_bias: uint256 = pt.slope * WEEK
            if pt.bias > d_bias:
                pt.bias -= d_bias
                d_slope: uint256 = self.changes_weight[gauge_addr][t]
                pt.slope -= d_slope
            else:
                pt.bias = 0
                pt.slope = 0
            self.points_weight[gauge_addr][t] = pt
            if t > block.timestamp:
                self.time_weight[gauge_addr] = t
        return pt.bias
    else:
        return 0

This results in entering the else branch, which sets both pt.bias and pt.slope to 0. Note that even after this happens, there are still timestamps t for which self.changes_weight[gauge_addr][t] holds a non-zero value, at which the gauge's slope is still scheduled to decrease.
Now, if users vote for the gauge, the gauge will have its normal weight until timestamp t. Then its slope will be artificially decreased. With a lower slope, the gauge's bias decays more slowly, actually giving the gauge more weight over time:

Let's put it into an example:

  1. Gauge has 5,000 weight which should decrease over 4 weeks (bias = 5000, slope = 1250 / WEEK)
  2. Admins decide to reduce the gauge's weight to 0. (reduce by 5,000).
  3. A week goes by. The next call to _get_weight would set both the gauge's bias and slope to 0.
  4. User votes 50,000 weight which should decrease over 40 weeks. (bias = 50000, slope = 1250 / WEEK) (note: the 4 weeks since the original user's vote have still not yet passed).
  5. After 3 weeks the call to _get_weight reduces the slope by the initial voter's slope (1250 / WEEK), therefore making the current slope = 0. The bias is now 46,250 (50,000 - 3 * 1250)
  6. Now for the remaining 37 weeks of the lock, the gauge's weight is actually not decaying. The slope is 0 and the gauge's weight remains the same.

If we look at the gauge 1 week before the 2nd voter's lock expires, the gauge will have a weight of 46,250 because of the admins 'reducing' the gauge's weight. If they hadn't 'reduced' it, the gauge's weight would be 1,250.
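
The numbers in the scenario can be reproduced with a simplified Python model of the `_get_weight` week-by-week loop (a sketch only: one time unit per week, and the uint256 bookkeeping of the Vyper contract is reduced to plain integers):

```python
WEEK = 1  # one time unit per week to keep the numbers small

def advance(bias, slope, changes, t_from, t_to):
    """Week-by-week decay, mirroring the loop in _get_weight (simplified)."""
    t = t_from
    while t < t_to:
        t += WEEK
        d_bias = slope * WEEK
        if bias > d_bias:
            bias -= d_bias
            slope -= changes.get(t, 0)   # scheduled slope expiry applies
        else:
            bias = 0
            slope = 0                    # slope zeroed here, but the entries
                                         # in `changes` are NOT cleared!
    return bias, slope

# scheduled slope removals: the original voter's lock expires at week 4,
# the new voter's 40-week lock (cast at week 1) expires at week 41
changes = {4: 1250, 41: 1250}

# step 2-3: admin zeroes the gauge's bias at week 0; the slope (1250) is left
# untouched, so one week later the else-branch zeroes both bias and slope
bias, slope = advance(0, 1250, changes, 0, 1)                      # (0, 0)

# step 4-5: new voter adds 50_000 bias / 1250 slope at week 1; at week 4 the
# *stale* change from the original voter fires and wrongly zeroes the slope
bias, slope = advance(bias + 50_000, slope + 1250, changes, 1, 4)  # (46250, 0)

# step 6: with slope 0 the weight stops decaying; one week before the new
# lock expires (week 40) the gauge still weighs 46_250 instead of 1_250
bias, slope = advance(bias, slope, changes, 4, 40)
```

Running the model, the gauge is frozen at 46,250 from week 4 onward, matching the walkthrough above.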

Impact

Reducing a gauge's weight results in actually giving it extra weight.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/main/sherlock-cvg/contracts/Locking/GaugeController.vy#L568

Tool used

Manual Review

Recommendation

Fix is non-trivial. Would like to work with the team on coming up with a fix

Duplicate of #94

blackpanther - Potential Security Vulnerability in onlyWalletOrWhiteListedContract Modifier of LockingPositionService Contract due to tx.origin Usage

blackpanther

medium

Potential Security Vulnerability in onlyWalletOrWhiteListedContract Modifier of LockingPositionService Contract due to tx.origin Usage

Summary

The onlyWalletOrWhiteListedContract modifier in LockingPositionService relies on tx.origin. To enhance security, it is recommended to replace the tx.origin check with one based solely on msg.sender, since msg.sender gives the direct caller's address. While tx.origin is sometimes used to distinguish EOA calls from contract calls, its use is discouraged due to security risks, and blocking specific addresses can be achieved through alternative means. Note that reliance on tx.origin is deprecated and should be avoided.

Vulnerability Detail

Insecure usage of tx.origin in LockingPositionService contract's onlyWalletOrWhiteListedContract modifier may expose security vulnerabilities, allowing potential manipulation by malicious actors.

Impact

MEDIUM

The vulnerability in the onlyWalletOrWhiteListedContract modifier of the LockingPositionService contract using tx.origin can lead to unauthorized access, compromising the security of the contract.

Code Snippet

https://github.com/sherlock-audit/2023-11-convergence/blob/e894be3e36614a385cf409dc7e278d5b8f16d6f2/sherlock-cvg/contracts/Locking/LockingPositionService.sol#L184

function _onlyWalletOrWhiteListedContract() internal view {
    require(
        msg.sender == tx.origin || isContractLocker[msg.sender],
        "NOT_CONTRACT_OR_WL"
    );
}

Tool used

Manual Review

Recommendation

Remove msg.sender == tx.origin from the require check in _onlyWalletOrWhiteListedContract. The updated code ensures that the function and modifier solely rely on msg.sender for enhanced security:

function _onlyWalletOrWhiteListedContract() internal view {
    require(
        isContractLocker[msg.sender],
        "NOT_CONTRACT_OR_WL"
    );
}
