
2023-10-brahma-findings's Introduction

Brahma Audit

Unless otherwise discussed, this repo will be made public after audit completion, sponsor review, judging, and issue mitigation window.

Contributors to this repo: prior to report publication, please review the Agreements & Disclosures issue.


Audit findings are submitted to this repo

Sponsors have three critical tasks in the audit process:

  1. Respond to issues.
  2. Weigh in on severity.
  3. Share your mitigation of findings.

Let's walk through each of these.

High and Medium Risk Issues

Wardens submit issues without seeing each other's submissions, so keep in mind that there will always be findings that are duplicates. For all issues labeled 3 (High Risk) or 2 (Medium Risk), these have been pre-sorted for you so that there is only one primary issue open per unique finding. All duplicates have been labeled duplicate, linked to a primary issue, and closed.

Judges have the ultimate discretion in determining validity and severity of issues, as well as whether/how issues are considered duplicates. However, sponsor input is a significant criterion.

Respond to issues

For each High or Medium risk finding that appears in the dropdown at the top of the chrome extension, please label it as one of these:

  • sponsor confirmed, meaning: "Yes, this is a problem and we intend to fix it."
  • sponsor disputed, meaning either: "We cannot duplicate this issue" or "We disagree that this is an issue at all."
  • sponsor acknowledged, meaning: "Yes, technically the issue is correct, but we are not going to resolve it for xyz reasons."

Add any necessary comments explaining your rationale for your evaluation of the issue.

Note that when the repo is public, after all issues are mitigated, wardens will read these comments; they may also be included in your C4 audit report.

Weigh in on severity

If you believe a finding is technically correct but disagree with the listed severity, select the disagree with severity option, along with a comment indicating your reasoning for the judge to review. You may also add questions for the judge in the comments. (Note: even if you disagree with severity, please still choose one of the sponsor confirmed or sponsor acknowledged options as well.)

For a detailed breakdown of severity criteria and how to estimate risk, please refer to the judging criteria in our documentation.

QA reports, Gas reports, and Analyses

All warden submissions in these three categories are submitted as bulk listings of issues and recommendations:

  • QA reports include all low severity and non-critical findings from an individual warden.
  • Gas reports include all gas optimization recommendations from an individual warden.
  • Analyses contain high-level advice and review of the code: the "forest" to individual findings' "trees."

For QA reports, Gas reports, and Analyses, sponsors are not required to weigh in on severity or risk level. We ask that sponsors:

  • Leave a comment for the judge on any reports you consider to be particularly high quality. (These reports will be awarded on a curve.)
  • For QA and Gas reports only: add the sponsor disputed label to any reports that you think should be completely disregarded by the judge, i.e. the report contains no valid findings at all.

Once labelling is complete

When you have finished labelling findings, drop the C4 team a note in your private Discord backroom channel and let us know you've completed the sponsor review process. At this point, we will pass the repo over to the judge to review your feedback while you work on mitigations.

Share your mitigation of findings

Note: this section does not need to be completed in order to finalize judging. You can continue work on mitigations while the judge finalizes their decisions and even beyond that. Ultimately we won't publish the final audit report until you give us the OK.

For each finding you have confirmed, you will want to mitigate the issue before the contest report is made public.

If you are planning a Code4rena mitigation review:

  1. In your own Github repo, create a branch based off of the commit you used for your Code4rena audit, then
  2. Create a separate Pull Request for each High or Medium risk C4 audit finding (e.g. one PR for finding H-01, another for H-02, etc.)
  3. Link the PR to the issue that it resolves within your contest findings repo.

Most C4 mitigation reviews focus exclusively on reviewing mitigations of High and Medium risk findings. Therefore, QA and Gas mitigations should be done in a separate branch. If you want your mitigation review to include QA or Gas-related PRs, please reach out to C4 staff and let’s chat!

If several findings are inextricably related (e.g. two potential exploits of the same underlying issue, etc.), you may create a single PR for the related findings.

If you aren’t planning a mitigation review

  1. Within a repo in your own GitHub organization, create a pull request for each finding.
  2. Link the PR to the issue that it resolves within your contest findings repo.

This will allow for complete transparency in showing the work of mitigating the issues found in the contest. If the issue in question has duplicates, please link to your PR from the open/primary issue.


2023-10-brahma-findings's Issues

executeTransaction() in ExecutorPlugin.sol can be called by anyone, which could cause loss of funds.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/src/core/ExecutorPlugin.sol#L68
https://github.com/code-423n4/2023-10-brahma/blob/dd0b41031b199a0aa214e50758943712f9f574a0/contracts/src/core/ExecutorPlugin.sol#L111-L114

Vulnerability details

Impact

The executeTransaction() function in ExecutorPlugin.sol is intended for executing transactions via executors. A critical issue arises because this function is marked as external and can be invoked by anyone with arbitrary parameters, even if the sender is not the owner of the subAccount.

This vulnerability allows an attacker to craft a malicious transaction with parameters such as to set to the attacker's address, from set to the targeted subAccount, and the executor registered for this subAccount, passing the isExecutor check in validateExecutionRequest(). This could cause loss of funds for the users.

Proof of Concept

To execute the test, copy this code into ExecuteTransaction.t.sol:

function testAnyoneCanCallExecuteTransaction() public {
        
        //Create an address for the attacker
        address attacker = makeAddr("attacker");

        //Execute transaction 
        _setupRegisterExecAndExecPluginModule(address(mcsv));
        DigestData memory _txn = DigestData({
            to: address(attacker),
            txnData: abi.encodeCall(IGnosisSafe.enableModule, makeAddr("module")),
            operation: Enum.Operation.Call,
            value: 0,
            from: address(subSafe), //Attacked subwallet
            nonce: executorPlugin.executorNonce(address(subSafe), executorAddress),
            policyCommit: commit,
            expiryEpoch: uint32(block.timestamp + 1000)
        });
        Types.Executable memory _executable =
            Types.Executable({callType: Types.CallType.CALL, target: _txn.to, value: _txn.value, data: _txn.txnData});

        bytes32 _txnStructHash = keccak256(
            abi.encode(
                TRANSACTION_PARAMS_TYPEHASH,
                _txn.to,
                _txn.value,
                keccak256(_txn.txnData),
                uint8(_txn.operation),
                _txn.from,
                address(mcsv),
                _txn.nonce
            )
        );

        bytes memory sigValidator = ModSignature._buildExecutionValiditySignature(
            _txn, _txnStructHash, validatorPrivKey, address(policyValidator)
        );

        ExecutorPlugin.ExecutionRequest memory _execReq = ExecutorPlugin.ExecutionRequest({
            account: address(_txn.from),
            exec: _executable,
            executor: address(mcsv),
            executorSignature: hex"",
            validatorSignature: abi.encodePacked(sigValidator, uint32(65), _txn.expiryEpoch)
        });

        vm.expectRevert();
        vm.prank(attacker);
        executorPlugin.executeTransaction(_execReq);
        
    }

The expected result would be that the transaction reverts, but as you will see, that is not the case.

Tools Used

Manual review

Recommended Mitigation Steps

Check that msg.sender is the owner of the subAccount. To achieve that, you can add this modification to executeTransaction():

function executeTransaction(ExecutionRequest calldata execRequest) external nonReentrant returns (bytes memory) {
       +WalletRegistry _walletRegistry = WalletRegistry(AddressProviderService._getRegistry(_WALLET_REGISTRY_HASH));
       +if (!_walletRegistry.isOwner(msg.sender, execRequest.account)) revert NotOwnerWallet();

        _validateExecutionRequest(execRequest);
                                                                 
        bytes memory txnResult = _executeTxnAsModule(execRequest.account, execRequest.exec);

        TransactionValidator(AddressProviderService._getAuthorizedAddress(_TRANSACTION_VALIDATOR_HASH))
            .validatePostExecutorTransaction(msg.sender, execRequest.account);

        return txnResult;
    }

Assessed type

Access Control

registerWallet() in WalletRegistry.sol can be called by anyone, allowing malicious wallets to be registered.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/src/core/registries/WalletRegistry.sol#L35-L40

Vulnerability details

Impact

In WalletRegistry.sol, the registerWallet() function is designed to save newly created wallets in the isWallet mapping. A critical issue arises because it is an external function that can be invoked by anyone. Although a comment specifies that this function should only be called by SafeDeployer.sol or the wallet itself, there is no actual check on msg.sender.

@dev Can only be called by safe deployer or the wallet itself
 function registerWallet() external {    
        if (isWallet[msg.sender]) revert AlreadyRegistered();
        if (subAccountToWallet[msg.sender] != address(0)) revert IsSubAccount();
        isWallet[msg.sender] = true;
        emit RegisterWallet(msg.sender);
    }

This lack of verification opens the door for potential attackers to register a custom wallet, enabling them to exploit vulnerabilities depending on the code of the wallet. In other words, an attacker can register a malicious wallet and perform harmful actions.

Proof of Concept

https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/src/core/registries/WalletRegistry.sol#L35-L40
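A minimal Foundry sketch of the issue (test harness and fixture names assumed, modeled on the repo's existing RegisterWallet tests), showing that an arbitrary address can register itself as a wallet:

function testAnyoneCanRegisterWallet() public {
    address attacker = makeAddr("attacker");

    // No check that the caller is the SafeDeployer or a console account
    vm.prank(attacker);
    walletRegistry.registerWallet();

    assertTrue(walletRegistry.isWallet(attacker));
}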

Tools Used

Manual review

Recommended Mitigation Steps

Add a check to verify that msg.sender is SafeDeployer.sol, as registerSubAccount() does:

function registerWallet() external {
       +if (msg.sender != AddressProviderService._getAuthorizedAddress(_SAFE_DEPLOYER_HASH)) revert InvalidSender();
        if (isWallet[msg.sender]) revert AlreadyRegistered();
        if (subAccountToWallet[msg.sender] != address(0)) revert IsSubAccount();
        isWallet[msg.sender] = true;
        emit RegisterWallet(msg.sender);
    }

Assessed type

Access Control

Using `msg.sender` may lead to caller confusion vulnerabilities in proxy calls.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeModerator.sol#L71-L73
https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/src/core/TransactionValidator.sol#L105-L107

Vulnerability details

Impact

The use of msg.sender for key operations like validatePostTransaction introduces risk of caller confusion vulnerabilities if the contract is called via a proxy.

Proof of Concept

The use of msg.sender in the validatePostTransaction call introduces a potential risk of caller confusion. The problematic code is this line (#L71-72):

TransactionValidator(...).validatePostTransaction(txHash, success, msg.sender);

Using msg.sender here makes the assumption that msg.sender will always be the address initiating the original transaction.

However, if the SafeModerator contract is called via a proxy contract, msg.sender will be the proxy address rather than the original caller.

This could allow the proxy contract to manipulate the msg.sender value passed to validatePostTransaction and bypass validity checks.

Let me give an in-depth analysis of the risks of using msg.sender instead of tx.origin:

The validatePostTransaction call uses msg.sender to pass the address being validated by the TransactionValidator:

TransactionValidator(...).validatePostTransaction(..., msg.sender)

This assumes msg.sender will always be the original caller who initiated the Safe transaction. However, if the SafeModerator is called via a proxy contract, msg.sender will be the proxy address, not the original caller.

This means the proxy contract can manipulate what gets passed to the TransactionValidator. A malicious proxy could pass any address it wants to validatePostTransaction by changing msg.sender.

This bypasses the entire purpose of the post-transaction validation. The TransactionValidator will validate the proxy address rather than the original caller address that actually submitted the Safe transaction.

An attacker could exploit this by deploying a malicious proxy that calls SafeModerator, and then passes its own address or some other address to the TransactionValidator.

The TransactionValidator would then approve that address, even though it wasn't the original Safe transaction sender. This completely defies the intent of the validity checks.
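For illustration, a minimal sketch (hypothetical Forwarder contract and simplified guard interface, not code from the repo) of how an intermediate contract becomes msg.sender for the call it relays:

interface ICheckAfter {
    function checkAfterExecution(bytes32 txHash, bool success) external view;
}

// Hypothetical relay: when forward() is invoked, the guard sees address(this)
// (the Forwarder) as msg.sender, not the account that called forward().
contract Forwarder {
    function forward(address guard, bytes32 txHash, bool success) external view {
        ICheckAfter(guard).checkAfterExecution(txHash, success);
    }
}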

Here are the specific lines that demonstrate the issue:

In SafeModerator.sol:#L71-72

function checkAfterExecution(bytes32 txHash, bool success) external view override {

  TransactionValidator(...).validatePostTransaction(txHash, success, msg.sender);

}

Here we can see msg.sender being directly passed as the third argument to validatePostTransaction.

Then in TransactionValidator.sol#L105-L106

function validatePostTransaction(

  bytes32 txHash, 

  bool success,

  address supposedSender

) public {

  // Validate supposedSender...

}

This shows that supposedSender is expected to be the original Safe transaction sender.

However, by passing msg.sender, if called via a proxy contract, supposedSender would be the proxy address instead of the real initiator.

These two code snippets demonstrate concretely how msg.sender usage here can lead to the caller confusion issue I described.

Tools Used

Manual Review

Recommended Mitigation Steps

Using tx.origin instead would prevent this issue, as it always refers to the original external account that initiated the full call stack, regardless of any proxy contracts.

Replacing msg.sender with tx.origin would ensure the proper originating account is validated in TransactionValidator, preventing any caller confusion risk.

The root cause is the use of msg.sender without considering a proxy calling context. Switching to tx.origin is the right fix here.

Assessed type

Access Control

Missing validation allows crafted signatures to be exploited.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/PolicyValidator.sol#L156-L167
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/PolicyValidator.sol#L162
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/PolicyValidator.sol#L146

Vulnerability details

Impact

The _decompileSignatures function assumes the signature data is properly formatted. It does not validate the length of the signatures or the extracted expiryEpoch/validatorSignature before using them. This could allow an attacker to craft malicious signature data to bypass validation.

Proof of Concept

Below is the _decompileSignatures function that could be problematic:

function _decompileSignatures(bytes calldata _signatures)
        internal
        pure
        returns (uint32 expiryEpoch, bytes memory validatorSignature)
    {
        uint256 length = _signatures.length;
        if (length < 8) revert InvalidSignatures();

        uint32 sigLength = uint32(bytes4(_signatures[length - 8:length - 4]));
        expiryEpoch = uint32(bytes4(_signatures[length - 4:length]));
        validatorSignature = _signatures[length - 8 - sigLength:length - 8];
    }

The issue is that it only checks that the _signatures byte array is at least 8 bytes long, on line 162:

if (length < 8) revert InvalidSignatures();

It does not verify that the actual extracted sigLength or expiryEpoch match the expected 4 byte length before using them.

An attacker could craft _signatures data where length >= 8 but sigLength and expiryEpoch point to arbitrary data out of bounds, which could lead to unexpected behavior when decompiling the signatures.

The potential impact of the _decompileSignatures decompiling issue is as follows.

The _decompileSignatures function is intended to extract the validator's signature and expiry epoch from the full packed transaction signatures passed in. It assumes this packed data follows a strict format:

signatures = abi.encodePacked(
  safeSignatures, 
  validatorSignature, 
  validatorSigLength, 
  expiryEpoch
)

Where:

  • safeSignatures - arbitrary length signatures from Safe owners
  • validatorSignature - arbitrary length signature from trusted validator
  • validatorSigLength - 4 byte uint32 with length of validatorSignature
  • expiryEpoch - 4 byte uint32 with epoch expiry
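For reference, a minimal sketch (hypothetical helper, not code from the repo) of how a blob following this layout would be assembled:

// Sketch only: packs signatures in the layout _decompileSignatures expects.
function packSignatures(
    bytes memory safeSignatures,
    bytes memory validatorSignature,
    uint32 expiryEpoch
) internal pure returns (bytes memory) {
    return abi.encodePacked(
        safeSignatures,
        validatorSignature,
        uint32(validatorSignature.length), // validatorSigLength: 4-byte length field
        expiryEpoch                        // expiryEpoch: 4-byte expiry field
    );
}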

The problem arises because _decompileSignatures only checks that the full signatures byte array is >= 8 bytes in length. It does not verify the extracted validatorSigLength or expiryEpoch match the expected 4 byte length.

For example, an attacker could craft the signatures like this:

signatures = abi.encodePacked(
  "0x1234", // some dummy safeSigs 
  "0xdeadbeef", // validatorSig
  "0xffff0000", // sigLength = huge!
  "0xffffffff" // expiryEpoch = huge! 
)

Although signatures is > 8 bytes, the decompiled validatorSigLength and expiryEpoch will read from out of bounds data.

When later passed to SignatureCheckerLib, the huge sigLength could cause it to read/write out of bounds memory and potentially manipulate variables or crash the contract.

The uncontrolled expiryEpoch could allow bypassing the expiry check if set to a far future date.

To summarize, this lack of validation allows attackers to potentially:

  • Crash the contract via out of bounds memory access
  • Manipulate signature verification by overwriting memory
  • Bypass the transaction expiry check
  • Lead to other unexpected behaviors

Proper bounds checking of the decompiled data before use would prevent these attack vectors. Overall this is a medium severity issue that could lead to DoS or bypass of critical signature validation logic.

Here is a Solidity script that shows how the _decompileSignatures function can be exploited by providing malformed signature data:

// Exploit demo for _decompileSignatures

contract Exploit {

  // Simulate _decompileSignatures function (calldata is required for the slice syntax)
  function exploit(bytes calldata signatures) public pure returns (uint32, bytes memory) {
    uint256 length = signatures.length;
    if (length < 8) revert();
    
    uint32 sigLength = uint32(bytes4(signatures[length - 8:length - 4])); 
    uint32 expiryEpoch = uint32(bytes4(signatures[length - 4:length]));
    
    bytes memory validatorSig = signatures[length - 8 - sigLength:length - 8];
    
    return (expiryEpoch, validatorSig);
  }

  function testExploit(bytes calldata signatures) public pure {
    (uint32 expiry, bytes memory sig) = exploit(signatures);
    
    // Use potentially malicious expiry and sig
  }

}

// Craft malicious signatures
bytes memory signatures = abi.encodePacked(
  hex"1234", // some dummy safeSigs
  hex"deadbeef", // validatorSig 
  hex"ffffffffffffffffffffffffffffffff", // HUGE sigLength 
  hex"ffffffffffffffffffffffffffffffff" // HUGE expiryEpoch
);

new Exploit().testExploit(signatures);

// Now expiryEpoch and validatorSig point to out of bounds data
// This could lead to overwrite of memory or other vulnerabilities

This shows how an attacker could create signatures data that passes the length check but contains invalid sigLength and expiryEpoch values that could be exploited.

Tools Used

Manual Review

Recommended Mitigation Steps

I would recommend validating that the extracted sigLength actually fits within the provided data before using it, for example:

if (uint256(sigLength) + 8 > length) {
  revert InvalidSignatures();
}

This would prevent any mismatch between the stated lengths and actual data.

Assessed type

Invalid Validation

Module transactions will always fail because of incompatibility with Safe 1.5.0

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/main/contracts/src/core/SafeModerator.sol#L80-L86
https://github.com/code-423n4/2023-10-brahma/blob/main/contracts/src/core/SafeModeratorOverridable.sol#L86-L92

Vulnerability details

Impact

GnosisSafe has the concept of modules. Trusted modules of a GnosisSafe contract can execute arbitrary transactions without signatures once enabled. That's the main purpose of the protocol: to allow executing some defined action (like a swap on a DEX) without signatures by registering such an action in a Policy commitment, and then executing this action via an Executor or the Main Account without needing the delegatee's signatures. It also allows registering SubAccounts and managing them via the Main Account.

Therefore the ExecutorPlugin and the Main Account are enabled as modules. The main problem is that all transactions executed via a module will fail, which means a SubAccount is just another GnosisSafe without additional utility.
When assessing severity I take into account the protocol's specific purpose, which is about adding new functionality:

Users also have access to SafeSub-accounts that reduce their risk from the protocol by isolating their interactions.

But the core functionality of the protocol regarding the ExecutorPlugin and SubAccounts doesn't work. Due to this reasoning I submit it as High, despite there being no financial loss.

Proof of Concept

Let's take a look at how a tx from a module is executed. First, guard.checkModuleTransaction() is called, which returns bytes32 guardHash, and then the transaction is executed:
https://github.com/safe-global/safe-contracts/blob/810fad9a074837e1247ca24ac9e7f77a5dffce19/contracts/base/ModuleManager.sol#L82-L104

    function execTransactionFromModule(
        address to,
        uint256 value,
        bytes memory data,
        Enum.Operation operation
    ) public virtual returns (bool success) {
        // Only whitelisted modules are allowed.
        require(msg.sender != SENTINEL_MODULES && modules[msg.sender] != address(0), "GS104");
        // Execute transaction without further confirmations.
        address guard = getGuard();

        bytes32 guardHash;
        if (guard != address(0)) {
@>          guardHash = Guard(guard).checkModuleTransaction(to, value, data, operation, msg.sender);
        }
        success = execute(to, value, data, operation, type(uint256).max);

        if (guard != address(0)) {
            Guard(guard).checkAfterExecution(guardHash, success);
        }
        if (success) emit ExecutionFromModuleSuccess(msg.sender);
        else emit ExecutionFromModuleFailure(msg.sender);
    }

In the context of Brahma, the Guard contract is SafeModerator or SafeModeratorOverridable. They implement an incorrect IGuard interface, in which checkModuleTransaction() returns nothing:
https://github.com/code-423n4/2023-10-brahma/blob/main/contracts/interfaces/external/IGnosisSafe.sol#L110-L116

    function checkModuleTransaction(
        address to,
        uint256 value,
        bytes memory data,
        Enum.Operation operation,
        address module
    ) external;

Here is a PoC in Remix showing that the call reverts when it tries to decode return data from a function that returns nothing:

// SPDX-License-Identifier: MIT

pragma solidity ^0.8.0;

interface WithoutReturn {
      function checkModuleTransaction() external;
}

interface WithReturn {
      function checkModuleTransaction() external returns(bytes32);
}

contract Test {

  function test() external returns(bytes32) {

    address calledContract = address(new ContractWithoutReturn());

    // Reverts on decoding
    bytes32 result = WithReturn(calledContract).checkModuleTransaction();
    return result;
  }
}

contract ContractWithoutReturn is WithoutReturn {

  function checkModuleTransaction() external {}

}

As a result, function GnosisSafe.execTransactionFromModule() will always revert.

Tools Used

Manual Review, Remix

Recommended Mitigation Steps

Update interface IGuard:

    function checkModuleTransaction(
        address to,
        uint256 value,
        bytes memory data,
        Enum.Operation operation,
        address module
-   ) external;
+   ) external returns(bytes32);
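Correspondingly, the guard implementations (SafeModerator / SafeModeratorOverridable) would also need to return a value from checkModuleTransaction(); a minimal sketch (the body below is illustrative only, not the repo's actual validation logic):

    function checkModuleTransaction(
        address to,
        uint256 value,
        bytes memory data,
        Enum.Operation operation,
        address module
    ) external override returns (bytes32 moduleTxHash) {
        // ... existing pre-execution validation ...
        // Return a hash so ModuleManager (Safe 1.5.0) can forward it to checkAfterExecution().
        moduleTxHash = keccak256(abi.encode(to, value, keccak256(data), operation, module));
    }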

Note

The protocol is meant to use Safe 1.5.0; the guard callback for module transactions was only recently introduced in 1.5.0. I also advise adding tests for different Safe versions; currently only 1.3.0 is used in the tests.

The sponsor said they took the interface from this PR because GnosisSafe 1.5.0 is still in development, and further in the discussion they noted the mistake in the interface.

Assessed type

Error

Frontrunning DoS of SafeDeployer functions (deployConsoleAccount() and deploySubAccount())

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/main/contracts/src/core/SafeDeployer.sol#L56-L255

Vulnerability details

Impact

Any console account or sub account deployment can be DoSed; an attacker constantly exploiting this vulnerability upon protocol deployment could cause full DoS of the protocol. Alternatively, an attacker can prevent further deployment of console accounts and sub accounts. Victims will also lose all gas provided for their deployment transactions.

This vulnerability report is primarily for Ethereum Mainnet, but it can also be exploited on other chains where frontrunning is possible.

Proof of Concept

In SafeDeployer.sol, the arguments passed to GnosisProxyFactory.createProxyWithNonce() are completely predetermined by the address of the sender and the arguments passed to deployConsoleAccount()/deploySubAccount(). This includes the case where address collision occurs, as SafeDeployer.sol uses an incrementing nonce in the salt to handle collision. The address of the deployed safe is also completely predetermined with these arguments (combined with the address of the GnosisProxyFactory and the contract code, but these two factors are not relevant here). Therefore, an attacker can frontrun a call to deployConsoleAccount() or deploySubAccount() with calls to GnosisProxyFactory.createProxyWithNonce() using identical arguments, causing repeated address collision in the victim's call to deployConsoleAccount()/deploySubAccount() such that the transaction is guaranteed to run out of gas and revert. There is a significant gas cost associated with failed Safe deployments due to address collision, which will be discussed in the next section.
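For illustration, a minimal sketch (helper name hypothetical) of how the resulting proxy address is fully determined by these public inputs, per EIP-1014:

    // Sketch only: predicting the CREATE2 proxy address from public inputs.
    // `proxyCreationCode` would be GnosisSafeProxy's creation code with the singleton
    // address appended, exactly as deployProxyWithNonce() builds `deploymentData` below.
    function predictProxyAddress(
        address factory,
        bytes memory initializer,
        uint256 saltNonce,
        bytes memory proxyCreationCode
    ) internal pure returns (address) {
        bytes32 salt = keccak256(abi.encodePacked(keccak256(initializer), saltNonce));
        return address(uint160(uint256(keccak256(abi.encodePacked(bytes1(0xff), factory, salt, keccak256(proxyCreationCode))))));
    }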

For reference if desired, GnosisProxyFactory code from etherscan (Safe: Proxy Factory 1.3.0 https://etherscan.io/address/0xa6b71e26c5e0845f74c812102ca7114b6a896ab2#code):

    function deployProxyWithNonce(
        address _singleton,
        bytes memory initializer,
        uint256 saltNonce
    ) internal returns (GnosisSafeProxy proxy) {
        // If the initializer changes the proxy address should change too. Hashing the initializer data is cheaper than just concatinating it
        bytes32 salt = keccak256(abi.encodePacked(keccak256(initializer), saltNonce));
        bytes memory deploymentData = abi.encodePacked(type(GnosisSafeProxy).creationCode, uint256(uint160(_singleton)));
        // solhint-disable-next-line no-inline-assembly
        assembly {
            proxy := create2(0x0, add(0x20, deploymentData), mload(deploymentData), salt)
        }
        require(address(proxy) != address(0), "Create2 call failed");
    }

    function createProxyWithNonce(
        address _singleton,
        bytes memory initializer,
        uint256 saltNonce
    ) public returns (GnosisSafeProxy proxy) {
        proxy = deployProxyWithNonce(_singleton, initializer, saltNonce);
        if (initializer.length > 0)
            // solhint-disable-next-line no-inline-assembly
            assembly {
                if eq(call(gas(), proxy, 0, add(initializer, 0x20), mload(initializer), 0, 0), 0) {
                    revert(0, 0)
                }
            }
        emit ProxyCreation(proxy, _singleton);
    }

Gas Cost Analysis

GnosisProxyFactory uses the CREATE2 opcode, the details of which can be found here: https://eips.ethereum.org/EIPS/eip-1014. Note that this EIP states that "The CREATE2 has the same gas schema as CREATE ... CreateGas is deducted before evaluation of the resulting address and the execution of init_code". Since the Ethereum Yellowpaper lists the gas cost for the CREATE opcode as 32000 gas, the victim's transaction will use at least 32000 more gas for each address collision that occurs. However, testing in Foundry (see first test provided) and Remix IDE reveals that a failed CREATE2 transaction due to address collision consumes a huge amount of gas (well over 1 million) regardless of deployment bytecode size. For comparison, the attacker's frontrunning transaction costs around 300,000 gas (see first test provided).

The exact amount of gwei that the attacker needs to spend depends on the network fee, the max priority fee set by the user, and the amount of gwei provided by the user for the transaction. The attacker needs to set their max priority fee above the user's max priority fee such that their transaction is guaranteed priority, and enough attacking transactions need to be submitted such that the user's transaction will run out of gas; although considering the test, only one frontrunning transaction should be necessary. Since the attacker's transaction costs around 300,000 gas and a non-colliding account deployment transaction costs around 400,000 gas (see second test provided), the attacker will spend close to the same amount of transaction fees as the victim.

Even if the failed-due-to-collision CREATE2 gas cost as simulated by Foundry and Remix IDE is somehow incorrect, each frontrunning transaction will still grief at least 32000 gas from the user's transaction, as previously mentioned. The attack would still be very viable since a frontrunning transaction from the attacker costs around 300,000 gas. In this case, if the victim supplies 1 dollar worth of gwei for the deployment transaction, then the attacker can likely DoS the call for 10 dollars or less.

Tests to Demonstrate Address Collision and Gas Usage

Run the following tests in the DeployConsoleAccount.t.sol SafeDeployer_DeployConsoleAccountTest contract:

    function testDoS() public { 
        address[] memory owners = new address[](3);
        owners[0] = makeAddr("owners[0]");
        owners[1] = makeAddr("owners[1]");
        owners[2] = makeAddr("owners[2]");
        
        uint gasUsedByAttacker = gasleft(); //record and log gas used by attacker
        vm.prank(address(123)); // Use attacker address
        //attacker frontruns tx with call directly to SafeProxyFactory, passing same arguments
        address(0xa6B71E26C5e0845f74c812102Ca7114b6a896AB2).call(abi.encodeWithSignature('createProxyWithNonce(address,bytes,uint256)', address(0xd9Db270c1B5E3Bd161E8c8503c55cEABeE709552), hex'b63e800d00000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000002000000000000000000000000a238cbeb142c10ef7ad8442c6d1f9e89e07e77610000000000000000000000000000000000000000000000000000000000000180000000000000000000000000f48f2b2d2a534e402487b3ee7c18c33aec0fe5e400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000030000000000000000000000009ec04f4f292adca2fa064f539c13622c1b52c34c00000000000000000000000074c7b92aba2859c96ae525a640f400de30ccb075000000000000000000000000ca5a7a8187978d6611f4e03d772f9ae9691793cb00000000000000000000000000000000000000000000000000000000000000a48d80ff0a00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000059000a2c4efe801858bf0ed06757253aefe0ef60f68e0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000433ff495a0000000000000000000000000000000000000000000000000000000000000000000000', uint256(51872976573597302582219086198521676103224648860020643595669342518710604661707)));
        gasUsedByAttacker -= gasleft();
        console.log(gasUsedByAttacker); //around 300,000
        //SafeProxyCreationFailed() error will be thrown because EVM reverts with OutOfGas error (run test with -vvvv to see details)
        vm.expectRevert(abi.encodeWithSelector(SafeDeployer.SafeProxyCreationFailed.selector));
        //the below tx normally costs about 400,000 gas (see second test)
        safeDeployer.deployConsoleAccount{gas: 4_000_000}(owners, 2, bytes32(0), keccak256("salt"));
    }
    function testDoSGas() public { 
        address[] memory owners = new address[](3);
        owners[0] = makeAddr("owners[0]");
        owners[1] = makeAddr("owners[1]");
        owners[2] = makeAddr("owners[2]");
        uint gas = gasleft(); //record and log gas used if there's no attack
        safeDeployer.deployConsoleAccount(owners, 2, bytes32(0), keccak256("salt"));
        gas -= gasleft();
        console.log(gas); //about 400,000
        //nonce should be 1, since there was no frontrun
        assertEq(safeDeployer.ownerSafeCount(keccak256(abi.encode(owners))), 1);
    }

Tools Used

Remix IDE, Foundry, Etherscan

Recommended Mitigation Steps

Use an oracle (e.g. Chainlink VRF) to add a random value in the salt. This will effectively remove the attacker's ability to cause address collision.
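A minimal sketch of the idea (names hypothetical; the VRF request/fulfillment plumbing is omitted and `randomWord` is assumed to be delivered by the oracle):

    // Sketch only: mixing an oracle-supplied random word into the salt nonce so the
    // resulting CREATE2 address cannot be precomputed and front-run by an attacker.
    function _saltWithRandomness(bytes32 ownersHash, bytes32 userSalt, uint256 randomWord) internal pure returns (uint256) {
        return uint256(keccak256(abi.encodePacked(ownersHash, userSalt, randomWord)));
    }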

Assessed type

DoS

[M-03] Reentrancy in the PolicyRegistry contract

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/registries/PolicyRegistry.sol#L35-L59

Vulnerability details

Impact

The reentrancy vulnerability uses an attack contract to call back into the victim contract several times before the victim contract's balance updates, allowing the attacker to withdraw, for example, 2 ether when they only deposited 1 ether. In other words, duplicate withdrawals are counted against only one genuine withdrawal.

Proof of Concept

The function vulnerable to reentrancy is updatePolicy:

    function updatePolicy(address account, bytes32 policyCommit) external {
        if (policyCommit == bytes32(0)) {
            revert PolicyCommitInvalid();
        }


        WalletRegistry walletRegistry = WalletRegistry(AddressProviderService._getRegistry(_WALLET_REGISTRY_HASH));


        bytes32 currentCommit = commitments[account];


        // solhint-disable no-empty-blocks
        if (
            currentCommit == bytes32(0)
                && msg.sender == AddressProviderService._getAuthorizedAddress(_SAFE_DEPLOYER_HASH)
        ) {
            // In case invoker is safe  deployer
        } else if (walletRegistry.isOwner(msg.sender, account)) {
            //In case invoker is updating on behalf of sub account
        } else if (msg.sender == account && walletRegistry.isWallet(account)) {
            // In case invoker is a registered wallet
        } else {
            revert UnauthorizedPolicyUpdate();
        }
        // solhint-enable no-empty-blocks
        _updatePolicy(account, policyCommit);
    }

Reentrancy Exploit

/// SPDX-License-Identifier: BUSL-1.1

/// Copyright (C) 2023 Brahma.fi

pragma solidity 0.8.19;

import "./PolicyRegistry.sol";

contract tPolicyRegistry {

   PolicyRegistry public x1;

   address account = address(0x5B38Da6a701c568545dCfcB03FcB875f56beddC4); // me
   constructor(PolicyRegistry _x1) {

      x1 = PolicyRegistry(_x1);

   }

   function testReentrancy(PolicyRegistry _x1) external payable {
      // The address must be converted via uint160/uint256 to fit into bytes32 in Solidity 0.8.x
      bytes32 policyCommit = bytes32(uint256(uint160(address(_x1))));
      x1.updatePolicy(account, policyCommit);

   }

   receive() external payable {
      payable(msg.sender).transfer(address(x1).balance);
   }

   }

Tools Used

VS Code.

Recommended Mitigation Steps

All functions that are not internal and make external calls should have a reentrancy guard added to them, and the Checks-Effects-Interactions pattern should be applied: state (e.g. balance) updates should be made first and the external call should be made at the end of the function, so that state is already updated and reentrancy is not possible.
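For illustration, a minimal sketch (contract skeleton assumed, not the repo's actual code; import path for OpenZeppelin Contracts 4.x) of applying a reentrancy guard to updatePolicy:

import {ReentrancyGuard} from "@openzeppelin/contracts/security/ReentrancyGuard.sol";

contract PolicyRegistryGuarded is ReentrancyGuard {
    mapping(address => bytes32) public commitments;

    function updatePolicy(address account, bytes32 policyCommit) external nonReentrant {
        // ... authorization checks as in the original implementation ...
        commitments[account] = policyCommit; // state update, protected against re-entry
    }
}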

Assessed type

Reentrancy

QA Report

See the markdown file with the details of this report here.

Analysis

See the markdown file with the details of this report here.

registerWallet() lacks any access controls

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/src/core/registries/WalletRegistry.sol#L31-L40

Vulnerability details

Impact

registerWallet() is an external function, but it is completely open. Spamming it might lead to a slowdown of the system, especially as it will be deployed on many chains.

Proof of Concept

On line 20 of RegisterWallet.t.sol:
address attacker = makeAddr("attacker");

At the end of the file, add a new test case:

    function testRegisterWallet_ShouldRevertInvalidSender() public {
        vm.expectRevert(abi.encodeWithSelector(WalletRegistry.InvalidSender.selector));
        vm.prank(attacker);
        walletRegistry.registerWallet();
    }

The test fails because registerWallet() accepts the call from an arbitrary address instead of reverting with InvalidSender.

Tools Used

VSCode, Forge

Recommended Mitigation Steps

Add extra access controls, or a pre-approved mapping of addresses that can call this function.
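A minimal sketch of the pre-approved-mapping option (names hypothetical; how approvals are administered is out of scope here):

mapping(address => bool) public approvedRegistrars;

function setApprovedRegistrar(address registrar, bool approved) external {
    // restrict this setter to a trusted admin / governance in a real implementation
    approvedRegistrars[registrar] = approved;
}

function registerWallet() external {
    if (!approvedRegistrars[msg.sender]) revert InvalidSender();
    if (isWallet[msg.sender]) revert AlreadyRegistered();
    if (subAccountToWallet[msg.sender] != address(0)) revert IsSubAccount();
    isWallet[msg.sender] = true;
    emit RegisterWallet(msg.sender);
}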

Assessed type

Access Control

[M-02] Read-Only Reentrancy in the PolicyValidator contract

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/PolicyValidator.sol#L54-L80
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/PolicyValidator.sol#L100-L142

Vulnerability details

Impact

The reentrancy vulnerability uses an attack contract to call back into the victim contract several times before the victim contract's balance updates, allowing the attacker to withdraw, for example, 2 ether when they only deposited 1 ether. In other words, duplicate withdrawals are counted against only one genuine withdrawal.

Proof of Concept

Reentrancy Vulnerable isPolicySignatureValid function (part 1)

// Line 54-80
    function isPolicySignatureValid(
        address account,
        address to,
        uint256 value,
        bytes memory data,
        Enum.Operation operation,
        bytes calldata signatures
    ) external view returns (bool) {
        // Get nonce from safe
        uint256 nonce = IGnosisSafe(account).nonce();


        // Build transaction struct hash
        bytes32 transactionStructHash = TypeHashHelper._buildTransactionStructHash(
            TypeHashHelper.Transaction({
                to: to,
                value: value,
                data: data,
                operation: uint8(operation),
                account: account,
                executor: address(0),
                nonce: nonce
            })
        );


        // Validate signature
        return isPolicySignatureValid(account, transactionStructHash, signatures);
    }

Reentrancy Vulnerable isPolicySignatureValid function (part 2)

// Line 100-142
    function isPolicySignatureValid(address account, bytes32 transactionStructHash, bytes calldata signatures)
        public
        view
        returns (bool)
    {
        // Get policy hash from registry
        bytes32 policyHash =
            PolicyRegistry(AddressProviderService._getRegistry(_POLICY_REGISTRY_HASH)).commitments(account);
        if (policyHash == bytes32(0)) {
            revert NoPolicyCommit();
        }


        // Get expiry epoch and validator signature from signatures
        (uint32 expiryEpoch, bytes memory validatorSignature) = _decompileSignatures(signatures);


        // Ensure transaction has not expired
        if (expiryEpoch < uint32(block.timestamp)) {
            revert TxnExpired(expiryEpoch);
        }


        // Build validation struct hash
        bytes32 validationStructHash = TypeHashHelper._buildValidationStructHash(
            TypeHashHelper.Validation({
                transactionStructHash: transactionStructHash,
                policyHash: policyHash,
                expiryEpoch: expiryEpoch
            })
        );


        // Build EIP712 digest with validation struct hash
        bytes32 txnValidityDigest = _hashTypedData(validationStructHash);


        address trustedValidator = AddressProviderService._getAuthorizedAddress(_TRUSTED_VALIDATOR_HASH);


        // Empty Signature check for EOA signer
        if (trustedValidator.code.length == 0 && validatorSignature.length == 0) {
            // TrustedValidator is an EOA and no trustedValidator signature is provided
            revert InvalidSignature();
        }


        // Validate signature
        return SignatureCheckerLib.isValidSignatureNow(trustedValidator, txnValidityDigest, validatorSignature);
    }

Read-Only Reentrancy Exploit

/// SPDX-License-Identifier: BUSL-1.1

/// Copyright (C) 2023 Brahma.fi

pragma solidity 0.8.19;

import "./PolicyValidator.sol";

contract tPolicyValidator {

   PolicyValidator public x1;

   address public to = address(0x5B38Da6a701c568545dCfcB03FcB875f56beddC4); // me

   constructor(PolicyValidator _x1) {

   x1 = PolicyValidator(_x1);

   }

   function testReentrancy(PolicyValidator _x1) external view returns (bool, bool) {

      bytes memory data = bytes("0x01");
      uint256 value = uint256(1e18);
      Enum.Operation operation = Enum.Operation.Call;
      address account = address(_x1);
      uint256 nonce = IGnosisSafe(account).nonce();
      bytes memory signatures = bytes("0xBrahma.fi");
      address executor = address(_x1);

      bytes32 transactionStructHash = keccak256(abi.encode(TypeHashHelper.Transaction({
                to: to,
                value: value,
                data: data,
                operation: uint8(operation),
                account: account,
                executor: address(msg.sender),
                nonce: nonce
            })));

      return (x1.isPolicySignatureValid(account, to, value, data, operation, signatures), x1.isPolicySignatureValid(account, transactionStructHash, signatures));

   }

   receive() external payable {
      payable(msg.sender).transfer(address(x1).balance);
   }

   }

Tools Used

VS Code.

Recommended Mitigation Steps

All functions that are not internal and make external calls should have a reentrancy guard added to them, and the Checks-Effects-Interactions pattern should be applied: state (e.g. balance) updates should be made first and the external call should be made at the end of the function, so that state is already updated and reentrancy is not possible.

Assessed type

Reentrancy

Analysis

See the markdown file with the details of this report here.

Anyone can register a wallet - WalletRegistry.sol:registerWallet()

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/src/core/registries/WalletRegistry.sol#L31-L40

Vulnerability details

Impact

The NatSpec for registerWallet() states that it can only be called by the safe deployer or the wallet itself, but this restriction is not enforced in the code:

/**
* @notice Registers a wallet
* @dev Can only be called by safe deployer or the wallet itself
*/

Proof of Concept

There is no check that msg.sender is the safe deployer, such as:
if (msg.sender != AddressProviderService._getAuthorizedAddress(_SAFE_DEPLOYER_HASH)) revert InvalidSender();

Tools Used

Manual review, Foundry.

Recommended Mitigation Steps

Add a safe deployer check.
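A sketch of registerWallet() with that check applied (constants and custom errors as used elsewhere in the codebase):

function registerWallet() external {
    if (msg.sender != AddressProviderService._getAuthorizedAddress(_SAFE_DEPLOYER_HASH)) revert InvalidSender();
    if (isWallet[msg.sender]) revert AlreadyRegistered();
    if (subAccountToWallet[msg.sender] != address(0)) revert IsSubAccount();
    isWallet[msg.sender] = true;
    emit RegisterWallet(msg.sender);
}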

Assessed type

Other

[M-04] Reentrancy in the ExecutorRegistry contract

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/registries/ExecutorRegistry.sol#L38-L44
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/registries/ExecutorRegistry.sol#L53-L59

Vulnerability details

Impact

The reentrancy vulnerability uses an attack contract to call back into the victim contract several times before the victim contract's balance updates, allowing the attacker to withdraw, for example, 2 ether when they only deposited 1 ether. In other words, duplicate withdrawals are counted against only one genuine withdrawal.

Proof of Concept

The function vulnerable to reentrancy is registerExecutor:

// Line 38-44
    function registerExecutor(address _subAccount, address _executor) external {
        WalletRegistry _walletRegistry = WalletRegistry(AddressProviderService._getRegistry(_WALLET_REGISTRY_HASH));
        if (!_walletRegistry.isOwner(msg.sender, _subAccount)) revert NotOwnerWallet();


        if (!subAccountToExecutors[_subAccount].add(_executor)) revert AlreadyExists();
        emit RegisterExecutor(_subAccount, msg.sender, _executor);
    }

The function vulnerable to reentrancy is deRegisterExecutor:

// Line 53-59
    function deRegisterExecutor(address _subAccount, address _executor) external {
        WalletRegistry _walletRegistry = WalletRegistry(AddressProviderService._getRegistry(_WALLET_REGISTRY_HASH));
        if (_walletRegistry.subAccountToWallet(_subAccount) != msg.sender) revert NotOwnerWallet();


        if (!subAccountToExecutors[_subAccount].remove(_executor)) revert DoesNotExist();
        emit DeRegisterExecutor(_subAccount, msg.sender, _executor);
    }

Reentrancy Exploit

/// SPDX-License-Identifier: BUSL-1.1

/// Copyright (C) 2023 Brahma.fi

pragma solidity 0.8.19;

import "./ExecutorRegistry.sol";

contract tExecutorRegistry {

   ExecutorRegistry public x1;

   address executor = address(0x5B38Da6a701c568545dCfcB03FcB875f56beddC4); // me

   constructor(ExecutorRegistry _x1) {

      x1 = ExecutorRegistry(_x1);

   }

   function testReenter(ExecutorRegistry _x1) external payable {
      address subAccount = address(_x1);
      x1.registerExecutor(subAccount, executor);
      x1.deRegisterExecutor(subAccount, executor);
   }

   receive() external payable {
      payable(msg.sender).transfer(address(x1).balance);
   }

   }

Tools Used

VS Code.

Recommended Mitigation Steps

All functions that are not internal and make external calls should have a reentrancy guard added to them, and the Checks-Effects-Interactions pattern should be applied: state (e.g. balance) updates should be made first and the external call should be made at the end of the function, so that state is already updated and reentrancy is not possible.

Assessed type

Reentrancy

QA Report

See the markdown file with the details of this report here.

Lack of authentication lets anyone bypass `SafeModerator` security checks using `TransactionValidator`.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeModerator.sol#L46-L47
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeModerator.sol#L71-L73

Vulnerability details

Impact

There is no authentication on the TransactionValidator contract call. An attacker could potentially call it directly and bypass checks. The call should be restricted to the SafeModerator only.

Note that TransactionValidator itself does not restrict who may call it. The issue can be seen in this line:

TransactionValidator(AddressProviderService._getAuthorizedAddress(_TRANSACTION_VALIDATOR_HASH))
.validatePreTransaction(

The TransactionValidator is being instantiated and called directly without any authentication or access control.

This means any contract or external caller could invoke those validation functions directly instead of going through SafeModerator.

Proof of Concept

The SafeModerator contract is intended to act as a security guard for transactions, validating them against the TransactionValidator before execution. This is a critical security control.

However, there is no access control implemented on the call to TransactionValidator. It is simply instantiated and called directly:

TransactionValidator(AddressProviderService._getAuthorizedAddress(_TRANSACTION_VALIDATOR_HASH)).validatePreTransaction(...)

This means any contract or external account can call TransactionValidator directly, bypassing the checks in SafeModerator completely.

An attacker could simply instantiate TransactionValidator directly and invoke validatePreTransaction() and validatePostTransaction() with whatever params they want. The lack of authentication makes these critical validation functions open to the public.

This completely defeats the purpose of having the SafeModerator guard in the first place. Any malicious actor could directly call the validator with crafted inputs to get a transaction approved, even if it violates business logic or policies.

The impact of this is quite severe - it effectively nullifies the security controls added by the SafeModerator. Unauthorized transactions could be executed on the Safe by completely bypassing the guard via direct calls to the validator.

Let me provide direct evidence from the code to demonstrate the lack of authentication on the TransactionValidator call:

In the SafeModerator contract, in the checkTransaction() function, we see:

TransactionValidator(AddressProviderService._getAuthorizedAddress(_TRANSACTION_VALIDATOR_HASH)) 
.validatePreTransaction(...)

This directly instantiates the TransactionValidator contract and calls validatePreTransaction() without any authentication check.

There are no modifiers like onlyOwner or similar access control to restrict calling this function. It is a direct public call.

Later in the checkAfterExecution() function, we also see:

TransactionValidator(AddressProviderService._getAuthorizedAddress(_TRANSACTION_VALIDATOR_HASH))
.validatePostTransaction(...)

Again, directly calling the TransactionValidator without any authentication.

These two code examples clearly demonstrate that TransactionValidator is being invoked directly without any access control. There is no authentication on who can call these functions.

This enables an attacker to trivially bypass SafeModerator's checks by calling TransactionValidator directly. They would be able to construct arbitrary input parameters that could pass validation.

Proper access control is absolutely critical for the security of the overall system. Without it, the SafeModerator's checks can be trivially bypassed, undermining the entire security model.

Code Snippet

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeModerator.sol#L46-L47

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeModerator.sol#L71-L73

Tools Used

Manual Review

Recommended Mitigation Steps

TransactionValidator should have an onlyOwner-style or similar modifier that restricts calls to its validation functions to SafeModerator only.

For example:

// In TransactionValidator (illustrative; `safeModerator` is the authorized guard address)
modifier onlyGuard() {
  require(msg.sender == safeModerator, "Unauthorized caller");
  _;
}

function validatePreTransaction(...) external onlyGuard {
  // ... existing validation logic ...
}

This would restrict the TransactionValidator calls to only SafeModerator, preventing bypassing the checks.

Assessed type

Access Control

QA Report

See the markdown file with the details of this report here.

Insufficient validation of signatures length allows memory corruption and signature bypass.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/PolicyValidator.sol#L54-L61
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/PolicyValidator.sol#L156-L160

Vulnerability details

Impact

If signatures is malformed or shorter than expected, _decompileSignatures() could end up reading invalid memory locations, leading to unexpected behavior. Validation of the signatures input length in isPolicySignatureValid() is needed.

Proof of Concept

The isPolicySignatureValid() function takes the signatures input parameter as bytes calldata: PolicyValidator.sol#isPolicySignatureValid

function isPolicySignatureValid(
  // ...
  bytes calldata signatures
) external view returns (bool) {

  // ...

}

The signatures parameter is expected to contain encoded data for:

  • Safe owners' signatures
  • Validator's EIP712 signature
  • Validator signature length
  • Expiry epoch

This is then parsed and validated in _decompileSignatures(): PolicyValidator.sol#_decompileSignatures

function _decompileSignatures(bytes calldata _signatures) 
  internal
  pure
  returns (uint32 expiryEpoch, bytes memory validatorSignature) 
{

  // Parse signatures into components

}

If signatures is malformed or shorter than expected, _decompileSignatures() could end up reading invalid memory locations, leading to unexpected behavior.

For example, if signatures is only 4 bytes long, bytes4(_signatures[length - 8:length - 4]) would read bytes at a negative index, resulting in memory corruption.

This then could allow an attacker to bypass parts of the signature validation, as the extracted values would be incorrect and could end up:

  • Invalid signatures input could lead to out-of-bounds memory reads
  • This corrupts the parsed validator signature and expiry values
  • Malformed values may allow bypassing signature validation

Let me walk through a proof of concept exploit demonstrating the impact of the insufficient validation of the signatures parameter in isPolicySignatureValid():

Proof of Concept Attack

  1. Attacker calls isPolicySignatureValid with a malformed signatures parameter that is only 4 bytes long.

  2. Inside _decompileSignatures(), the following code attempts to parse the validator signature length from the last 8 bytes of signatures:

uint32 sigLength = uint32(bytes4(_signatures[length - 8:length - 4])); 

Since signatures is only 4 bytes, this reads bytes at index "-4 to index 0", outside the bounds of the passed data.

  3. The out-of-bounds read results in sigLength containing garbage data, but no validation error is thrown.

  4. Next, the validator signature is extracted using the bad sigLength value:

validatorSignature = _signatures[length - 8 - sigLength:length - 8];

This will again read invalid memory based on the corrupted sigLength, resulting in an incorrect validatorSignature.

  5. Finally, the expiry epoch is parsed from bytes -4 to 0, instead of the last 4 bytes as intended.

  6. The end result is validatorSignature and expiryEpoch contain arbitrary invalid data, instead of the expected signature and expiry.

  7. The rest of the validation passes using the corrupted values, allowing an attacker to bypass the policy signature check.

Insufficient validation of the signatures input length can lead to memory corruption and a bypass of the signature validation.

Tools Used

Manual Review

Recommended Mitigation Steps

To add input validation on the signatures length, I would recommend adding a check at the beginning of the function:

function isPolicySignatureValid(
  // ...
  bytes calldata signatures  
) external view returns (bool) {

  if (signatures.length < 8) {
    revert("Invalid signatures length"); 
  }

  // ... rest of function

}

Requiring a minimum length for the signatures input prevents potential issues from passing in empty or malformed data.
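In addition, _decompileSignatures() itself could defensively check that the embedded length fits before slicing; a minimal sketch of the function with such a check added (based on the implementation quoted earlier in these findings):

function _decompileSignatures(bytes calldata _signatures)
    internal
    pure
    returns (uint32 expiryEpoch, bytes memory validatorSignature)
{
    uint256 length = _signatures.length;
    if (length < 8) revert InvalidSignatures();

    uint32 sigLength = uint32(bytes4(_signatures[length - 8:length - 4]));
    expiryEpoch = uint32(bytes4(_signatures[length - 4:length]));

    // Added: reject blobs whose embedded validator-signature length cannot fit in the preceding data
    if (uint256(sigLength) + 8 > length) revert InvalidSignatures();

    validatorSignature = _signatures[length - 8 - sigLength:length - 8];
}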

Assessed type

Invalid Validation

QA Report

See the markdown file with the details of this report here.

`getMessageHash` lacks input validation, complex hashing, weak `signedMessages` check.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L68-L71
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L42-L53
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L68
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L43-L44

Vulnerability details

Impact

Potential issues with signature validation logic in getMessageHash and usage in isValidSignature.
The getMessageHash and isValidSignature methods are critical for securely verifying signatures on transactions. Any flaws could allow an attacker to bypass policies or spoof signatures to steal funds.

Proof of Concept

Here is the key signature validation logic:

In getMessageHash: #Line 68-70

function getMessageHashForSafe(GnosisSafe safe, bytes memory message) public view returns (bytes32) {

  bytes32 safeMessageHash = keccak256(abi.encode(SAFE_MSG_TYPEHASH, keccak256(message)));

  return keccak256(abi.encodePacked(bytes1(0x19), bytes1(0x01), safe.domainSeparator(), safeMessageHash));

}

And in isValidSignature:

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L42-L53

The issues here are:

  • No validation of _data in getMessageHash - could lead to hash collision vulnerabilities if manipulated

  • Overly complex hashing logic - room for errors or hash collisions

  • signedMessages check doesn't ensure threshold is met

The getMessageHash and isValidSignature methods are critical for securely verifying signatures on transactions. Any flaws could allow an attacker to bypass policies or spoof signatures to steal funds.

Some key risks are:

  • No input validation in getMessageHash - an attacker could manipulate _data to intentionally cause hash collisions, resulting in identical hashes and signatures for different transactions. This defeats the uniqueness property required for security.

  • Overly complex hashing logic - the multi-step hashing and encoding process has a higher risk of implementation bugs or hash collisions compared to a simple hash function. Extra care needs to be taken in review.

  • Ineffective signedMessages check - this only checks if >0 signatures exist, not if the threshold is met. Could allow partial signatures.

  • Return values from checkSignatures are not checked, so failed or errored responses could be missed.

These flaws could allow signatures to be reused across different transactions, bypass thresholds, or spoof signatures entirely.

Attackers could bypass policies to steal funds, destroy contracts, etc. Auditing would be hindered since different transactions appear signed by owners.

To substantiate the analysis of the signature validation risks, here are the specific lines of code:

Lack of input validation in getMessageHash: Line 68

function getMessageHashForSafe(GnosisSafe safe, bytes memory message) public view returns (bytes32) {

  // message passed directly to hash without validation
  
}

Overly complex hashing

// Multiple steps including keccak256, abi.encode, abi.encodePacked

Weak signedMessages check: Lines 43-44

if (_signature.length == 0) {
  // Checks for > 0, not threshold
  require(safe.signedMessages(messageHash) != 0, "Hash not approved");

}

Tools Used

Manual Review

Recommended Mitigation Steps

Validate the message input before hashing, keep the hashing flow as simple as the Safe message format allows, and ensure the signature threshold is enforced on every path rather than relying solely on the signedMessages check.

Assessed type

Invalid Validation

SubAccount can be registered for invalid wallet

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/main/contracts/src/core/registries/WalletRegistry.sol#L49-L55

Vulnerability details

Impact

SubAccount can be registered for invalid wallet.

Proof of Concept

See WalletRegistry#registerWallet

    function registerWallet() external {
        if (isWallet[msg.sender]) revert AlreadyRegistered();
        if (subAccountToWallet[msg.sender] != address(0)) revert IsSubAccount();
        isWallet[msg.sender] = true;
        emit RegisterWallet(msg.sender);
    }

See WalletRegistry#registerSubAccount

    function registerSubAccount(address _wallet, address _subAccount) external {
        if (msg.sender != AddressProviderService._getAuthorizedAddress(_SAFE_DEPLOYER_HASH)) revert InvalidSender();
        if (subAccountToWallet[_subAccount] != address(0)) revert AlreadyRegistered();
        subAccountToWallet[_subAccount] = _wallet;
        walletToSubAccountList[_wallet].push(_subAccount);
        emit RegisterSubAccount(_wallet, _subAccount);
    }

A wallet should be registered in order to be valid.

However, the authorized address may register a sub-account by calling the registerSubAccount function for a wallet that is not yet registered.

Tools Used

Manual

Recommended Mitigation Steps

Ensure the wallet is valid before allowing the sub-account to be added

    function registerSubAccount(address _wallet, address _subAccount) external {
        if (msg.sender != AddressProviderService._getAuthorizedAddress(_SAFE_DEPLOYER_HASH)) revert InvalidSender();
+       if (!isWallet[_wallet]) revert UnregisteredWallet(); // requires a new UnregisteredWallet() error to be declared (see the sketch below)
        if (subAccountToWallet[_subAccount] != address(0)) revert AlreadyRegistered();
        subAccountToWallet[_subAccount] = _wallet;
        walletToSubAccountList[_wallet].push(_subAccount);
        emit RegisterSubAccount(_wallet, _subAccount);
    }
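
For completeness, a minimal sketch of the custom error referenced in the diff above; the name simply mirrors the revert added there, and whether it lives in WalletRegistry itself or in a shared errors contract is a design choice:

// Illustrative declaration only; the name matches the revert used in the diff above.
error UnregisteredWallet();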

Assessed type

Invalid Validation

DoS the entire contract because of an incorrect modifier

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeEnabler.sol#L33
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeEnabler.sol#L82

Vulnerability details

Impact

When _self = address(this) is executed in the constructor, the only possible value of _self is fixed, since it is immutable. The modifier then validates if (address(this) == _self); because _self always holds the contract's own address, the condition is always true and the call always reverts.
This affects the enableModule() and setGuard() functions.

Recommended Mitigation Steps

Validate if(msg.sender == _self) revert OnlyDelegateCall();

Assessed type

DoS

Doesn’t Follow EIP-1271 Standard

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/lib/safe-contracts/contracts/interfaces/ISignatureValidator.sol#L6
https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/lib/safe-contracts/contracts/interfaces/ISignatureValidator.sol#L19
https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/src/core/ConsoleFallbackHandler.sol#L84-L86

Vulnerability details

Impact

As per the EIP-1271 standard, EIP1271_MAGIC_VALUE should be 0x1626ba7e instead of 0x20c13b0b, and the function signature should be isValidSignature(bytes32,bytes) instead of isValidSignature(bytes,bytes). Using the outdated magic value will result in failures for transactions signed by contracts that follow EIP-1271.

Proof of Concept

File: ConsoleFallbackHandler.sol
83:    function isValidSignature(bytes32 _dataHash, bytes calldata _signature) external view returns (bytes4) {
84:        ISignatureValidator validator = ISignatureValidator(msg.sender);
85:        bytes4 value = validator.isValidSignature(abi.encode(_dataHash), _signature); // @audit wrong here
86:        return (value == EIP1271_MAGIC_VALUE) ? UPDATED_MAGIC_VALUE : bytes4(0); // @audit wrong here

  • On verifying signatures, the outdated magic value EIP1271_MAGIC_VALUE = 0x20c13b0b is used, and the function signature should be isValidSignature(bytes32,bytes) instead of isValidSignature(bytes,bytes). See here:

File: ISignatureValidator.sol
contract ISignatureValidatorConstants {
    // bytes4(keccak256("isValidSignature(bytes,bytes)")
    bytes4 internal constant EIP1271_MAGIC_VALUE = 0x20c13b0b; // @audit should be 0x1626ba7e instead of 0x20c13b0b
}


abstract contract ISignatureValidator is ISignatureValidatorConstants {
    /**
     * @dev Should return whether the signature provided is valid for the provided data
     * @param _data Arbitrary length data signed on the behalf of address(this)
     * @param _signature Signature byte array associated with _data
     *
     * MUST return the bytes4 magic value 0x20c13b0b when function passes.
     * MUST NOT modify state (using STATICCALL for solc < 0.5, view modifier for solc > 0.5)
     * MUST allow external calls
     */
    function isValidSignature(bytes memory _data, bytes memory _signature) public view virtual returns (bytes4); // @audit should be isValidSignature(bytes32,bytes) instead of isValidSignature(bytes,bytes) 
}

Tools Used

Manual review

Recommended Mitigation Steps

Follow EIP-1271 standard.
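
For reference, a minimal sketch of the current EIP-1271 definition (standard interface and magic value, independent of the audited codebase):

// SPDX-License-Identifier: MIT
pragma solidity 0.8.19;

// Current EIP-1271: isValidSignature(bytes32,bytes) returning 0x1626ba7e on success.
interface IERC1271 {
    function isValidSignature(bytes32 hash, bytes memory signature) external view returns (bytes4 magicValue);
}

abstract contract ERC1271Constants {
    // bytes4(keccak256("isValidSignature(bytes32,bytes)"))
    bytes4 internal constant EIP1271_MAGIC_VALUE = 0x1626ba7e;
}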

Assessed type

Other

Lack of `PolicyValidator` validation allows for malicious compliance bypass.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L49-L50
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L51
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L51
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L49-L50

Vulnerability details

Impact

The contract relies on PolicyValidator for policy compliance but performs no checks on it. An attacker could provide a malicious validator that approves anything.

Proof of Concept

The ConsoleFallbackHandler relies on the PolicyValidator for policy compliance, but does not properly validate it:

The PolicyValidator instance is created here: Line 49-50

PolicyValidator policyValidator =  
                PolicyValidator(AddressProviderService._getAuthorizedAddress(_POLICY_VALIDATOR_HASH));

And used to check signature compliance: 51

require(policyValidator.isPolicySignatureValid(msg.sender, messageHash, _signature), "Policy not approved");

However, there are no checks on the PolicyValidator address or contract code.

This allows an attacker to provide, through AddressProviderService, a malicious PolicyValidator that simply approves any signature, bypassing policies.

For example:

contract MaliciousValidator {

  function isPolicySignatureValid(address, bytes32, bytes memory) external view returns (bool) {
    return true; 
  }

}

With this malicious validator, all transactions would be approved regardless of the account policies. Funds could be arbitrarily stolen or transferred and contracts destroyed without the signatures conforming to the policy rules.

This undermines the entire security model, so proper verification of the PolicyValidator contract is essential to avoid this attack vector. Relying on AddressProviderService alone is insufficient.

The comments also indicate the intention that PolicyValidator enforces compliance:

// For validating signatures, `PolicyValidator` is invoked to ensure that 
// the intent of `messageHash` signed by the owners of the safe, complies with its committed
// policy

The core purpose of the ConsoleFallbackHandler is to ensure all signature validations and transactions comply with the account's defined policies. This is critical for security.

The PolicyValidator contract is relied upon to actually check the signatures against the expected policies: 51

require(policyValidator.isPolicySignatureValid(msg.sender, messageHash, _signature), "Policy not approved"); 

However, the PolicyValidator instance comes from AddressProviderService: 49-50

PolicyValidator policyValidator =  
                PolicyValidator(AddressProviderService._getAuthorizedAddress(_POLICY_VALIDATOR_HASH));

And there are no checks to validate:

  • The address is a known, good PolicyValidator contract

  • The code at that address matches the expected implementation

This means if AddressProviderService was compromised to point to a malicious contract, that contract could completely bypass all the account policies.

Tools Used

Manual Review

Recommended Mitigation Steps

Proper validations should be added on PolicyValidator:

  1. Check address against known good validator contract

  2. Verify implementation code matches expected validator

  3. Require correct EIP-712 interface implementation

Adding checks like these will prevent a malicious PolicyValidator from being used to bypass account policies in ConsoleFallbackHandler.
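
As an illustration of check 2, the resolved validator could be pinned to a known deployment via its code hash. This is a minimal sketch only; the contract, error, and constructor parameter names are assumptions, and the pinned hash would need to be maintained across validator upgrades:

// SPDX-License-Identifier: MIT
pragma solidity 0.8.19;

// Sketch: reject any resolved PolicyValidator whose deployed code does not match a pinned hash.
abstract contract ValidatorCodehashCheck {
    error InvalidPolicyValidator(address validator);

    bytes32 internal immutable expectedValidatorCodehash;

    constructor(bytes32 _expectedValidatorCodehash) {
        expectedValidatorCodehash = _expectedValidatorCodehash;
    }

    function _checkValidator(address validator) internal view {
        if (validator.codehash != expectedValidatorCodehash) {
            revert InvalidPolicyValidator(validator);
        }
    }
}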

Assessed type

Invalid Validation

Lack of validation in constructor allows critical address manipulation attacks.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L29
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L29
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L49-L50
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L51

Vulnerability details

Impact

The constructor takes an address for the AddressProviderService but does not check/validate it. An attacker could provide a malicious address and gain control.

Proof of Concept

The constructor that takes the AddressProviderService address without validation is: #L29

constructor(address _addressProvider) AddressProviderService(_addressProvider) {}

The _addressProvider parameter is passed directly to the AddressProviderService constructor without any validation.

The AddressProviderService contract holds the key addresses used throughout the system, like the PolicyValidator contract address. By passing an unvalidated address into the constructor, an attacker could provide a malicious AddressProviderService that returns malicious contract addresses.

This would allow the attacker to gain control in a few ways:

  1. Provide a malicious PolicyValidator address.

The PolicyValidator contract address is retrieved from AddressProviderService using _getAuthorizedAddress. During isValidSignature, the ConsoleFallbackHandler calls out to the PolicyValidator to validate compliance.

If a malicious PolicyValidator is provided via AddressProviderService, it could approve any signature as valid, bypassing the intended policy checks.

This defeats the entire purpose of the ConsoleFallbackHandler policy validation. An attacker could arbitrarily execute transactions even if they violate the expected policy.

  2. Manipulate other critical contract addresses.

Besides PolicyValidator, AddressProviderService provides other important addresses like the executor plugin.

A malicious AddressProviderService could point the executor plugin to an attacker controlled contract. This would give them control over all approved transaction execution.

Or it could manipulate the address for the GnosisSafe contract itself. This would allow routing all requests and funds to the attacker's safe instead.

  3. Prevent legitimate contract usage

A malicious AddressProviderService could also simply return 0x0 or invalid addresses for critical contracts like PolicyValidator.

This would effectively disable ConsoleFallbackHandler functionality and break the entire system. No legitimate policy validation or execution could occur.

So, by controlling AddressProviderService an attacker has wide control over critical address configuration.
They can redirect to malicious contracts, disrupt functionality, or bypass policies completely.

Proper validation of the provider address in the constructor is therefore crucial for security.

References from the code to back up the security analysis:

Constructor allowing arbitrary AddressProviderService:

Line 29

constructor(address _addressProvider) AddressProviderService(_addressProvider) {}

No validation of _addressProvider.

AddressProviderService returning PolicyValidator address:

#Line 49-50

PolicyValidator policyValidator = 
                PolicyValidator(AddressProviderService._getAuthorizedAddress(_POLICY_VALIDATOR_HASH));

PolicyValidator called to validate compliance: #Line 51

require(policyValidator.isPolicySignatureValid(msg.sender, messageHash, _signature), "Policy not approved");

So if an attacker controlled AddressProviderService via the constructor, they could make _getAuthorizedAddress return a malicious PolicyValidator that just returns true from isPolicySignatureValid.

This would allow arbitrary signatures/transactions to be approved, bypassing policy checks.

Tools Used

Manual Review

Recommended Mitigation Steps

The contract could add validation on _addressProvider to ensure it is a known valid address, for example:

constructor(address _addressProvider) AddressProviderService(_addressProvider) {
  require(
    _addressProvider == <trusted_address>,
    "Invalid AddressProviderService"
  );
}

This would prevent an attacker from providing an arbitrary address and gaining control.

Assessed type

Access Control

The policyHashValid boolean controls whether the guard is enabled on the safe. But this could be manipulated by the caller to avoid enabling the guard. The guard enable logic should be unconditional.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/src/core/SafeDeployer.sol#L110-L166

Vulnerability details

Impact

Gnosis Safes can be deployed without the guard enabled, even though it should be required. This undermines the security of the safes. Not enabling the guard leaves the Gnosis Safe vulnerable to unauthorized withdrawals or other malicious actions. Without the guard enabled, there is no additional layer of protection on transactions requiring multiple approvals.

Proof of Concept

The policyHashValid boolean controls whether the guard is enabled on the safe. But this could be manipulated by the caller to avoid enabling the guard. The guard enable logic should be unconditional.

In the _setupConsoleAccount function, the guard enable logic is conditional based on _policyHashValid:

  function _setupConsoleAccount(address[] memory _owners, uint256 _threshold, bool _policyHashValid)
         private
         view
         returns (bytes memory)
 {
   if (_policyHashValid) {
     // Enable guard
   } 
 }

The deployConsoleAccount function takes _policyCommit as a parameter. This is checked against 0 to set _policyHashValid:

 function deployConsoleAccount(address[] calldata _owners, uint256 _threshold, bytes32 _policyCommit, bytes32 _salt)
   external
   returns (address _safe)
 {

   bool _policyHashValid = _policyCommit != bytes32(0);

   // ...

   _setupConsoleAccount(_owners, _threshold, _policyHashValid);

 }

So the caller can simply pass 0 for _policyCommit to avoid enabling the guard.

A detailed explanation:

The policyHashValid boolean allows the caller to avoid enabling the guard on the safe, which is a security risk. Here is a deeper explanation of how it works, followed by a proof of concept.

The deployConsoleAccount function takes a policyCommit parameter which is a commitment to the policy hash. It then checks if policyHashValid (_policyCommit != 0) before setting up the safe.

If policyHashValid is true, it enables the guard by calling:

 IGnosisSafe.setGuard(AddressProviderService._getAuthorizedAddress(_SAFE_MODERATOR_OVERRIDABLE_HASH))

This sets the SafeModerator contract as a guard on the Safe, restricting it from performing potentially harmful transactions.

However, if the caller passes 0 for the policyCommit, policyHashValid will be false and it will skip enabling the guard.

This allows the caller to deploy a Safe without any restrictions from the guard contract.

A proof of concept attack:

1. Attacker calls deployConsoleAccount with:
• Any valid owners array
• Valid threshold
• _policyCommit = 0x0
• Any salt
2. deployConsoleAccount skips enabling the guard due to policyHashValid being false
3. Attacker now has a newly deployed unrestricted Safe to use maliciously (see the sketch below)
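
To make step 1 concrete, here is a minimal sketch only; ISafeDeployer mirrors the deployConsoleAccount signature quoted above, while the helper contract name and salt value are illustrative assumptions:

// SPDX-License-Identifier: MIT
pragma solidity 0.8.19;

// Minimal interface mirroring SafeDeployer.deployConsoleAccount as quoted above.
interface ISafeDeployer {
    function deployConsoleAccount(address[] calldata _owners, uint256 _threshold, bytes32 _policyCommit, bytes32 _salt)
        external
        returns (address _safe);
}

contract DeployWithoutGuard {
    function run(ISafeDeployer deployer, address owner) external returns (address safe) {
        address[] memory owners = new address[](1);
        owners[0] = owner;

        // _policyCommit == bytes32(0) makes policyHashValid false, so the guard setup is skipped.
        safe = deployer.deployConsoleAccount(owners, 1, bytes32(0), keccak256("any salt"));
    }
}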

Tools Used

Manual

Recommended Mitigation Steps

The guard enable logic should be unconditional, regardless of the _policyHashValid value. An illustrative example:

 function _setupConsoleAccount(address[] memory _owners, uint256 _threshold, bool _policyHashValid)
         private
         view
         returns (bytes memory)
     {
         // Enable guard unconditionally
         txns[1] = Types.Executable({
             callType: Types.CallType.DELEGATECALL,
             target: AddressProviderService._getAuthorizedAddress(_SAFE_ENABLER_HASH),
             value: 0,
             data: abi.encodeCall(
                 IGnosisSafe.setGuard,
                 (AddressProviderService._getAuthorizedAddress(_SAFE_MODERATOR_OVERRIDABLE_HASH))
                 )
         });
     }

This ensures the guard is always enabled on Console accounts during deployment.

Assessed type

Other

Possibility of an infinite loop until out-of-gas in _createSafe()

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/src/core/SafeDeployer.sol#L219

Vulnerability details

Impact

There is a possibility of going into an infinite loop within the _createSafe function until out-of-gas.

Proof of Concept

The _createSafe function contains a do-while loop with the condition while (_safe == address(0)), which continues as long as the _safe address remains zero. If the IGnosisProxyFactory.createProxyWithNonce call repeatedly fails due to reasons other than nonce conflicts, the _safe address will never be assigned a non-zero value, resulting in an infinite loop until out-of-gas. A point-form explanation follows.

  1. It enters a do-while loop that continues until a Gnosis Safe contract is successfully created.
  2. Inside the loop, it tries to create a Gnosis Safe proxy using the IGnosisProxyFactory interface.
  3. If the creation is successful and a new safe is deployed, it sets the _safe variable to the address of the deployed safe and exits the loop.
  4. If an unexpected error occurs during the creation process that does not match the predefined reason _SAFE_CREATION_FAILURE_REASON and is not due to a nonce conflict, the _safe address won't be updated to a non-zero value.
  5. As mentioned in point 4, if the _safe address is not updated to a non-zero value, the condition while (_safe == address(0)) remains true, leading to an infinite loop that runs until the transaction exhausts its gas.

Tools Used

Manual Review

Recommended Mitigation Steps

Implement a maximum retry count to limit the number of attempts to create a Gnosis Safe contract, i.e. add an additional condition that breaks the loop when the _safe address is still zero after the maximum number of retries:

      } while (_safe == address(0) && retryCount <= maxRetries);

Assessed type

Loop

Malicious `AddressProviderService` can lead to bypass of PolicyValidator's signature checks.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L49-L51
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L49-L50
https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/src/core/PolicyValidator.sol#L54-L80

Vulnerability details

Impact

isValidSignature calls out to the PolicyValidator contract. If a malicious AddressProviderService address was provided in the ConsoleFallbackHandler constructor, an attacker could make _getAuthorizedAddress return any contract address under their control.

Proof of Concept

isValidSignature makes an external call out to the PolicyValidator contract which needs to be validated: #Line 49 - 51

PolicyValidator policyValidator =  
                PolicyValidator(AddressProviderService._getAuthorizedAddress(_POLICY_VALIDATOR_HASH));

require(policyValidator.isPolicySignatureValid(msg.sender, messageHash, _signature), "Policy not approved");

The PolicyValidator instance is retrieved via the AddressProviderService using _getAuthorizedAddress.

This means the PolicyValidator could be malicious if an invalid address was provided on AddressProviderService initialization.

Further details and context on the risks of a malicious PolicyValidator follow.

The ConsoleFallbackHandler relies on the PolicyValidator contract to check all signature validity against the configured account policies. This is the key mechanism that enforces transaction compliance in the system.

However, the PolicyValidator instance is retrieved from AddressProviderService: Lines 49-50

PolicyValidator policyValidator =  
                PolicyValidator(AddressProviderService._getAuthorizedAddress(_POLICY_VALIDATOR_HASH));

This means if a malicious AddressProviderService address was provided in the ConsoleFallbackHandler constructor, an attacker could make _getAuthorizedAddress return any contract address under their control.

Now suppose the attacker deploys a malicious PolicyValidator contract that has none of the expected EIP-712 policy checking logic and instead blindly returns true on every isPolicySignatureValid check:

function isPolicySignatureValid(address, bytes32, bytes memory) external view returns (bool) {
  return true;
}

With this malicious validator, the attacker can now bypass ALL signature policy checks enforced by ConsoleFallbackHandler.

Every transaction will be allowed through regardless of the account policy, since the PolicyValidator wrongly approves everything.

This completely defeats the security of ConsoleFallbackHandler and allows arbitrary access, draining funds etc.

So, a malicious PolicyValidator contract provided via AddressProviderService leads to a complete bypass of the account policy protections.

Tools Used

Manual Review

Recommended Mitigation Steps

To secure this, a few validations should be added:

  1. Validate that the PolicyValidator address conforms to the expected, real contract address, not an arbitrary user-supplied address. This could check against a constant:

require(policyValidator == EXPECTED_POLICY_VALIDATOR_ADDRESS, "Invalid validator");

  2. Verify the PolicyValidator contract code matches the expected implementation through EIP-1967 proxy patterns.

  3. Require that the PolicyValidator implements EIP-712 by checking for the EIP-712 revision value:

require(policyValidator.EIP712_REVISION() == EXPECTED_REVISION, "Invalid EIP712 revision");

This will provide assurance that the PolicyValidator correctly implements EIP-712 and matches the expected code. Without these validations, a malicious contract could bypass policy checks.
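
As a sketch of check 1 (all names here are illustrative assumptions, not the project's API), the expected validator address could be pinned at construction time and compared against whatever AddressProviderService resolves at call time:

// SPDX-License-Identifier: MIT
pragma solidity 0.8.19;

// Sketch only: pin the expected PolicyValidator address and reject any other resolved address.
abstract contract PinnedPolicyValidator {
    error InvalidValidator(address resolved);

    address internal immutable expectedPolicyValidator;

    constructor(address _expectedPolicyValidator) {
        expectedPolicyValidator = _expectedPolicyValidator;
    }

    function _checkResolvedValidator(address resolved) internal view {
        if (resolved != expectedPolicyValidator) revert InvalidValidator(resolved);
    }
}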

Assessed type

Access Control

Lack of `_signature` length validation could permit short, invalid signatures.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L39-L55
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L43-L45

Vulnerability details

Impact

No validation of the _signature length in isValidSignature. Could allow for short invalid signatures.

Proof of Concept

There is no validation of the _signature length in the isValidSignature method:

function isValidSignature(bytes memory _data, bytes memory _signature) public view override returns (bytes4) {

  // ...

  if (_signature.length == 0) {
    // ...signature validation logic  
  } else {
    // ...signature validation logic
  }

  // ...

}

The _signature length is checked against 0, but there is no upper bound check.

This could allow an attacker to provide a short invalid signature that bypasses the validation logic.

For example, a 1 byte signature could skip the ecrecover call and checkSignatures logic and still be considered valid.

To expand on how an attacker could exploit the lack of _signature length validation in isValidSignature:

When a signature is passed to isValidSignature for verification, it goes through two main validation checks:

  1. The ecrecover call to recover the signing address.

  2. The checkSignatures call to verify against the Safe's approved signatures.

A short _signature could potentially bypass both checks:

The ecrecover call requires a 65-byte ECDSA signature: r (32 bytes), s (32 bytes), and v (1 byte). Anything shorter would fail.

So a 1 byte signature would avoid this ecrecover computation entirely and skip that validation.

Similarly, checkSignatures relies on having a proper signature length and format. A short invalid signature could trigger array out of bounds or other unexpected behavior here.

This means a short _signature like 1 byte could skip both the ecrecover and checkSignatures validation logic that is essential for correctly verifying signatures.

As a result, an attacker could construct short invalid signatures that appear valid to isValidSignature simply by skipping the verification code. This could allow unauthorized transactions or lead to fund theft by spoofed signatures.

Adding a _signature length check prevents this attack vector and ensures the input conforms to expected parameters before passing to the validation logic.

Here is the code that demonstrates the lack of _signature length validation:

The _signature length check only checks for 0 length:

if (_signature.length == 0) {
  // signature validation logic
} else {
  // signature validation logic  
}

There is no explicit length requirement such as:

require(_signature.length == 65, "Invalid length"); 

The code shows:

  • Only a 0 length check

  • No explicit length validation beyond the zero check

  • Signature validation logic that assumes 65 byte length

This allows short invalid signatures to bypass the ecrecover and checkSignatures methods as explained previously.

Tools Used

Manual Review

Recommended Mitigation Steps

An explicit check should be added on _signature.length to ensure it meets the expected byte length, e.g.:

require(_signature.length == 65, "Invalid signature length"); 

Adding this length validation will prevent short invalid signatures from bypassing the verification logic in isValidSignature.

Assessed type

Invalid Validation

[M-01] Read-Only Reentrancy in the ConsoleFallbackHandler contract

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L39-L55
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L83-L87

Vulnerability details

Impact

A reentrancy vulnerability lets the attack contract call back into the victim contract several times before the victim contract's balance is updated,
allowing the attacker to withdraw e.g. 2 ether when they only deposited 1 ether.
In effect, duplicate withdrawals are counted against only one genuine withdrawal.

Proof of Concept

Vulnerable isValidSignature (part 1)

// Line 39-55
    function isValidSignature(bytes memory _data, bytes memory _signature) public view override returns (bytes4) {
        // Caller should be a Safe
        GnosisSafe safe = GnosisSafe(payable(msg.sender));
        bytes32 messageHash = getMessageHashForSafe(safe, _data);
        if (_signature.length == 0) {
            require(safe.signedMessages(messageHash) != 0, "Hash not approved");
        } else {
            /// @dev For validating signatures, `PolicyValidator` is invoked to ensure that
            /// the intent of `messageHash` signed by the owners of the safe, complies with its committed
            /// policy
            PolicyValidator policyValidator =
                PolicyValidator(AddressProviderService._getAuthorizedAddress(_POLICY_VALIDATOR_HASH));
            require(policyValidator.isPolicySignatureValid(msg.sender, messageHash, _signature), "Policy not approved");
            safe.checkSignatures(messageHash, _data, _signature);
        }
        return EIP1271_MAGIC_VALUE;
    }

Vulnerable isValidSignature (part 2)

// Line 83-87
    function isValidSignature(bytes32 _dataHash, bytes calldata _signature) external view returns (bytes4) {
        ISignatureValidator validator = ISignatureValidator(msg.sender);
        bytes4 value = validator.isValidSignature(abi.encode(_dataHash), _signature);
        return (value == EIP1271_MAGIC_VALUE) ? UPDATED_MAGIC_VALUE : bytes4(0);
    }

Exploit Reentrancy

/// SPDX-License-Identifier: BUSL-1.1

/// Copyright (C) 2023 Brahma.fi

pragma solidity 0.8.19;

import "./ConsoleFallbackHandler.sol";

contract tConsoleFallbackHandler {

   ConsoleFallbackHandler public x1;

   constructor(ConsoleFallbackHandler _x1) {

      x1 = _x1;

   }

   // Calls both isValidSignature overloads back-to-back. The bytes32 overload re-enters the
   // caller via ISignatureValidator(msg.sender), which is the read-only reentrancy path.
   // Note: the handler treats msg.sender as the Safe, so this is intended to run in a context
   // where the handler is reached as the Safe's fallback handler.
   function testReentrancy(GnosisSafe safe) external view {

      bytes memory dataFx = "0x01";
      bytes memory signatureFx = "0x01";

      bytes32 messageHash = x1.getMessageHashForSafe(safe, dataFx);

      // The original PoC assumed safe.signedMessages(messageHash) had already been set on the Safe.
      bytes32 dataHashFx = messageHash;

      x1.isValidSignature(dataFx, signatureFx);
      x1.isValidSignature(dataHashFx, signatureFx);

   }

   receive() external payable {
      payable(msg.sender).transfer(address(x1).balance);
   }

}

Tools Used

VS Code.

Recommended Mitigation Steps

All functions that are not internal and are making a call should have a reentrancy guard added to them.
Checks-Effects-Interactions should be applied to the functions.
Balance updates should be made at the beginning of the call.
The actual call should be made at the end of the function, so that the balance is already updated and reentrancy is not possible.

Assessed type

Reentrancy

Agreements & Disclosures

Agreements

If you are a C4 Certified Contributor by commenting or interacting with this repo prior to public release of the contest report, you agree that you have read the Certified Warden docs and agree to be bound by them.

To signal your agreement to these terms, add a 👍 emoji to this issue.

Code4rena staff reserves the right to disqualify anyone from this role and similar future opportunities who is unable to participate within the above guidelines.

Disclosures

Sponsors may elect to add team members and contractors to assist in sponsor review and triage. All sponsor representatives added to the repo should comment on this issue to identify themselves.

To ensure contest integrity, the following potential conflicts of interest should also be disclosed with a comment in this issue:

  1. any sponsor staff or sponsor contractors who are also participating as wardens
  2. any wardens hired to assist with sponsor review (and thus presenting sponsor viewpoint on findings)
  3. any wardens who have a relationship with a judge that would typically fall in the category of potential conflict of interest (family, employer, business partner, etc)
  4. any other case where someone might reasonably infer a possible conflict of interest.

Attacker contracts can modify Safe security without authentication, endangering its integrity.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeEnabler.sol#L43-L44
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeEnabler.sol#L66-L67

Vulnerability details

Impact

There is no authentication on the enableModule and setGuard functions. Any contract that delegatecalls into SafeEnabler can enable/set any module or guard. Consider requiring these to only be callable by the Safe itself.

Proof of Concept

The lack of authentication on enableModule and setGuard is a major issue.

The problematic functions are:

function enableModule(address module) public {
  // ...
}
function setGuard(address guard) public {
 // ...
} 

Specifically the public access modifier on these two functions.

This allows any contract to call these functions directly if they know the SafeEnabler contract address. So, they could enable or disable any modules/guards, since there is no check on the caller.

The enableModule and setGuard functions are intended to be privileged operations that only the Safe itself can call. But right now they are marked public with no access control:

function enableModule(address module) public {
  // ...
} 

function setGuard(address guard) public {
  // ...
}

This allows any other contract to call them directly if they know the SafeEnabler address.

A malicious contract could create transactions targeting these functions. Since they are public, those transactions would succeed and grant the attacker unauthorized control to modify the Safe's modules and guards.

Some examples of attacks this could enable:

  • Attacker calls enableModule to whitelist a malicious module that steals funds or executes unauthorized transactions.

  • Attacker removes critical modules like OwnersManager that provide access control for the Safe.

  • Attacker sets the guard to their own contract that approves all transactions.

  • Attacker disables the Console module to block the owners from using Safe features.

  • Attacker changes the guard from a hardware wallet to a contract they control to bypass hardware approvals.

Essentially any security policy for the Safe could be bypassed by directly calling these functions without authentication.

Here is proof:

The enableModule and setGuard functions are defined as:

function enableModule(address module) public {
  // ...
}

function setGuard(address guard) public {
  // ... 
}

Any contract can call these directly, like so:

contract Attacker {

  SafeEnabler constant se = SafeEnabler(0x123...);

  function attack() external {
    se.enableModule(0xATTACKER_ADDRESS); 
    se.setGuard(0xATTACKER_ADDRESS);
  }

}

When Attacker.attack() is executed, it will make direct public calls to the SafeEnabler to enable a malicious module and set the guard to the attacker's address.

Since these functions have no authentication and are public, the transactions will succeed. This gives the attacker unauthorized control without needing to go through the Safe.

This proves any contract can call enableModule and setGuard to modify the Safe's security settings without permission.

Tools Used

Manual Review

Recommended Mitigation Steps

  1. Use the onlyOwner modifier to restrict access:

function enableModule(address module) public onlyOwner {
  // ...
}

  2. Maintain an allowlist of contracts, like the Safe, that are authorized to call these (see the sketch below).

  3. Use a signature-based scheme where the caller has to provide a valid signature from the Safe owner.
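
A minimal sketch of option 2, assuming an admin-managed mapping is acceptable (contract, error, and mapping names are illustrative):

// SPDX-License-Identifier: MIT
pragma solidity 0.8.19;

// Sketch only: an allowlist of callers permitted to invoke the privileged setters.
abstract contract CallerAllowlist {
    error UnauthorizedCaller(address caller);

    mapping(address => bool) public authorizedCallers;

    modifier onlyAuthorized() {
        if (!authorizedCallers[msg.sender]) revert UnauthorizedCaller(msg.sender);
        _;
    }

    // How authorizedCallers is populated (e.g. by an owner or governance process) is a design choice.
}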

Assessed type

Access Control

QA Report

See the markdown file with the details of this report here.

Unchecked transaction length could overflow, disrupting `_packMultisendTxns` functionality.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/libraries/SafeHelper.sol#L103-L125
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/libraries/SafeHelper.sol#L125

Vulnerability details

Impact

Within _packMultisendTxns, the encoded transaction length is not checked against a maximum size, which could cause an overflow. This is a medium-severity issue: while it does not directly lead to loss of assets, an overflow condition could disrupt the expected functioning of _packMultisendTxns.

Proof of Concept

In _packMultisendTxns, the encoded transaction length is not checked after encoding each transaction:

// contracts/libraries/SafeHelper.sol

function _packMultisendTxns(Types.Executable[] memory _txns) internal pure returns (bytes memory packedTxns) {

  // Encode each transaction 
  bytes memory encodedTxn = abi.encodePacked(...)

  // Issue occurs here
  packedTxns = abi.encodePacked(packedTxns, encodedTxn); 

}

On line 125, the encodedTxn is appended to packedTxns without checking the length.

This could allow the packedTxns bytes array to grow arbitrarily large and overflow.

Vulnerability:

The length of each encoded transaction is not validated before being appended to the packedTxns bytes array. This could allow arbitrarily large transactions.

Impact:

  • Without a max length check, encoded transactions can grow unbounded.
  • Appending large transactions to packedTxns could cause an integer overflow.
  • This would corrupt the packed data passed back to the caller.
  • At best the overflow causes a revert, at worst it results in unintended behavior.
  • Failing to restrict encoded txn length undermines the integrity of the packed data.

Context:

_packMultisendTxns is designed to safely combine multiple transactions into a single encoded payload. Letting transaction encodings grow unrestricted defeats this purpose by allowing overflows that corrupt the final packed data. A max length check is needed to preserve the integrity of the encoded payloads.

Here is an example demonstrating the risk of unchecked encoded transaction lengths in _packMultisendTxns:

// Exploit contract (assumes Types and SafeHelper are imported from the audited codebase)
contract Exploit {

  function exploit() external pure returns (bytes memory packed) {
    // Craft an extra long transaction (size is illustrative; bounded only by memory/gas)
    Types.Executable memory txn;
    txn.data = new bytes(type(uint32).max); // Oversized data

    Types.Executable[] memory txns = new Types.Executable[](1);
    txns[0] = txn;

    // Pack transactions without any per-transaction length check
    packed = SafeHelper._packMultisendTxns(txns);
  }

}

  1. Creating an Executable struct with an oversized data payload
  2. Passing the extra long transaction in an array
  3. Calling _packMultisendTxns on the array
  4. The oversized txn will get encoded without length checks
  5. This could overflow packedTxns when appending

Calling exploit would pass the unchecked long transaction encoding, likely causing an overflow.

Tools Used

VS Code

Recommended Mitigation Steps

Check the encoded transaction length before appending it:

// contracts/libraries/SafeHelper.sol

function _packMultisendTxns(Types.Executable[] memory _txns) internal pure returns (bytes memory packedTxns) {

  // Encode each transaction
  bytes memory encodedTxn = abi.encodePacked(...)  

  // Check encoded txn length
  require(encodedTxn.length <= MAX_SIZE, "Encoded txn too large");
  
  // Append encoded txn
  packedTxns = abi.encodePacked(packedTxns, encodedTxn);

}

Adding this max size check before the append on line 125 would prevent packedTxns from growing too large.

Assessed type

Under/Overflow

Console Accounts can't deregister SubAccounts

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/main/contracts/src/core/registries/WalletRegistry.sol#L1-L76

Vulnerability details

Title

Console Accounts can't deregister SubAccounts

Impact

  • Compromised SubAccounts can't be deregistered by the Console Account; this could allow the compromised SubAccount to keep operating on behalf of the Console Account.

Proof of Concept

  • The existing implementation doesn't allow Console Accounts to deregister their SubAccounts. If a SubAccount gets compromised, the Console Account can't deregister it as a valid SubAccount, and the compromised SubAccount could abuse the permissions it has.
  • The WalletRegistry contract only has a function to register SubAccounts, but it doesn't have a function to deregister them:

function registerSubAccount(address _wallet, address _subAccount) external {
    if (msg.sender != AddressProviderService._getAuthorizedAddress(_SAFE_DEPLOYER_HASH)) revert InvalidSender();
    if (subAccountToWallet[_subAccount] != address(0)) revert AlreadyRegistered();
    subAccountToWallet[_subAccount] = _wallet;
    walletToSubAccountList[_wallet].push(_subAccount);
    emit RegisterSubAccount(_wallet, _subAccount);
}

Tools Used

Manual Audit

Recommended Mitigation Steps

  • Add a function in the Wallet Registry that allows the Console Accounts to deregister any of their SubAccounts as valid SubAccounts (see the sketch below).
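
A minimal sketch of such a function; the storage names mirror the WalletRegistry code quoted above, while the contract wrapper, event, and swap-and-pop cleanup are illustrative assumptions rather than the project's API:

// SPDX-License-Identifier: MIT
pragma solidity 0.8.19;

// Sketch only: storage names mirror WalletRegistry; the event and error wiring is illustrative.
abstract contract SubAccountDeregistration {
    error InvalidSender();

    event DeregisterSubAccount(address indexed wallet, address indexed subAccount);

    mapping(address => address) public subAccountToWallet;
    mapping(address => address[]) public walletToSubAccountList;

    function deregisterSubAccount(address _subAccount) external {
        // Only the console account (wallet) that owns the sub-account may deregister it.
        if (subAccountToWallet[_subAccount] != msg.sender) revert InvalidSender();

        delete subAccountToWallet[_subAccount];

        // Remove the sub-account from the wallet's list (swap-and-pop).
        address[] storage list = walletToSubAccountList[msg.sender];
        uint256 len = list.length;
        for (uint256 i; i < len; ++i) {
            if (list[i] == _subAccount) {
                list[i] = list[len - 1];
                list.pop();
                break;
            }
        }

        emit DeregisterSubAccount(msg.sender, _subAccount);
    }
}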

Assessed type

Context

Malicious contracts can bypass SafeEnabler restrictions, endangering Safe security.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeEnabler.sol#L81-L83
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeEnabler.sol#L82
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeEnabler.sol#L43-L44
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeEnabler.sol#L66-L67
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeEnabler.sol#L81-L83

Vulnerability details

Impact

The _onlyDelegateCall modifier should check that msg.sender is the Safe contract address, not just that address(this) != _self. Otherwise another contract could delegatecall in and bypass the check.

Proof of Concept

The issue is in the _onlyDelegateCall modifier:

function _onlyDelegateCall() private view {
  if (address(this) == _self) revert OnlyDelegateCall();
}

Specifically on this line: #Line 82

if (address(this) == _self) revert OnlyDelegateCall();

This is checking that the address of the current contract (address(this)) is not equal to _self, which is defined as the address of the SafeEnabler contract.

It should instead check:

if (msg.sender != SAFE_ADDRESS) {
  revert(); 
}

This ensures that msg.sender is the Safe's address, rather than only checking that address(this) differs from _self, and prevents another contract from delegatecalling into the SafeEnabler and bypassing the modifier check.

Here is a more detailed analysis of the potential impact:

We can see the _onlyDelegateCall modifier is intended to restrict access to the enableModule and setGuard functions, so that only the Safe contract can call them via delegatecall. However, the current check validates:

if (address(this) == _self) revert OnlyDelegateCall(); 

This compares the address of the current contract (address(this)) to _self, which is defined as the SafeEnabler contract address.

The problem is that it does not validate msg.sender to ensure it is the Safe contract. So another malicious contract could delegatecall into the SafeEnabler and pass this check, because address(this) would point to the SafeEnabler.

That would allow the malicious contract to call enableModule and setGuard directly, bypassing the Safe's authorization. It could then enable or disable any modules or set any guard address from its own contract logic.

This completely violates the intended security of the SafeEnabler. The enableModule and setGuard functions are supposed to only be callable by the Safe itself. But this bug allows any other contract to change the Safe's modules and guards via delegatecall.

The impact is quite serious:

  • Attackers could build smart contracts that remove security modules like OwnersManager and disable important guards. This could allow them to steal funds or take full control of Safes.

  • Critical modules like the Console or PolicyManager could be disabled or replaced, undermining core Safe functionality and security policies.

  • Malicious modules could be enabled, backdooring the Safe and gaining access to approve transactions.

We can see in SafeEnabler.sol that enableModule and setGuard are marked with the _onlyDelegateCall modifier:

function enableModule(address module) public onlyDelegateCall {
  // ...
}
function setGuard(address guard) public onlyDelegateCall {
  // ...
}

And _onlyDelegateCall is defined as:

function _onlyDelegateCall() private view {
  if (address(this) == _self) revert OnlyDelegateCall();
}

Another contract can bypass this. First, imagine deploying an attacking contract like:

contract Attacker {

  // SafeEnabler contract
  SafeEnabler constant se = SafeEnabler(0x123...); 
  
  function attack() external {
    se.enableModule(0xATTACKER_ADDRESS); 
  }

}

When Attacker.attack() is called, it will delegatecall into the SafeEnabler and call enableModule.

In this delegatecall, address(this) will point to SafeEnabler. And SafeEnabler._self is also SafeEnabler. So the modifier will pass.

But msg.sender in enableModule will be the Attacker contract, not the Safe. So the Attacker can enable its own module, bypassing the Safe's authority.

Tools Used

Manual Review

Recommended Mitigation Steps

The _onlyDelegateCall modifier needs to be changed to:

if (msg.sender != SAFE_ADDRESS) {
  revert Unauthorized();
}

This would ensure msg.sender is the Safe's address, not just validating address(this).

Assessed type

Access Control

Critical modules (Main Console, SafeModerator) need extra protection against unauthorized changes.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeEnabler.sol#L43-L56
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/SafeEnabler.sol#L66-L75

Vulnerability details

Impact

The Main Console provides the core user interface and often holds administrative privileges. The SafeModerator enforces transaction approvals from guard addresses.

For critical/invariant settings like the Main Console and SafeModerator, should explicitly prevent those from being removed or changed without additional authentication.

Proof of Concept

The key lines where this needs to be addressed are in the enableModule and setGuard functions:

function enableModule(address module) public {
  // add check here to prevent disabling Main Console
}
function setGuard(address guard) public {
  // add check here to prevent changing SafeModerator
}

The Main Console and SafeModerator contracts are integral to the security model and functionality of the Safe.

The Main Console provides the core user interface and often holds administrative privileges. The SafeModerator enforces transaction approvals from guard addresses.

Right now, there are no special protections in place for these contracts in SafeEnabler:

function enableModule(address module) public {
  // No check for Main Console
} 

function setGuard(address guard) public {
  // No check for SafeModerator
}

This means any authenticated caller can potentially disable or replace these critical modules.

Attackers could exploit this to completely undermine the Safe's security:

  • Disable Main Console to lock users out of the interface
  • Set an arbitrary guard that steals funds or approves all transactions
  • Replace Main Console with a malicious version that steals private keys
  • Remove SafeModerator to bypass approvals and unilaterally confirm transactions

Essentially the core administration and security policies of the Safe could be compromised.

An attacker can call these freely without authentication:

contract Attacker {
  function attack() external {
    SafeEnabler safeEnabler = SafeEnabler(0x123...);
    
    // Disable Main Console
    safeEnabler.enableModule(MAIN_CONSOLE_ADDRESS);  

    // Set malicious guard
    safeEnabler.setGuard(0xATTACKER_ADDRESS); 
  }
}

Since there are no checks in place, these privileged settings can be changed by any caller.

Tools Used

Manual Review

Recommended Mitigation Steps

To prevent unauthorized changes to critical modules like the Main Console and SafeModerator, additional logic needs to be added.

Some ways this could be done:

  1. Maintain a list of "invariant" modules and guards and check against it:

mapping(address => bool) public invariantModules;

function enableModule(address module) public {
  require(!invariantModules[module], "Invariant module");
}

  2. Check against specific hardcoded addresses:

address constant MAIN_CONSOLE = 0xabcd...

function enableModule(address module) public {
  require(module != MAIN_CONSOLE, "Cannot modify main console");
}

  3. Use a more advanced AccessControl scheme where only the owner or DAO votes can modify invariants.

This will prevent unauthorized tampering with modules/guards that are meant to be permanent.

OR

The SafeEnabler must add access control so only trusted roles can modify these invariant modules/guards.

For example:

  • Restrict to owner only
  • Require timelock and multi-sig from admins
  • Implement a DAO voting process for any changes

This will help ensure critical components are highly protected against unauthorized changes or tampering.

Assessed type

Access Control

Unbounded Iterations

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/main/contracts/src/core/SafeDeployer.sol#L229-L224

Vulnerability details

Impact

The issue of not limiting the number of iterations in the code can lead to several security and performance implications:

  • Overloading the System: An attacker could potentially create a large number of Gnosis Safe instances in a single transaction, leading to system overload and gas exhaustion.
  • Denial of Service (DoS) Attacks: Creating an excessive number of Safe instances could result in a Denial of Service (DoS) attack, rendering the system unavailable to other users.

Proof of Concept

https://github.com/code-423n4/2023-10-brahma/blob/main/contracts/src/core/SafeDeployer.sol#L229-L224

// Generate nonce based on owners and user provided salt
uint256 nonce = _genNonce(ownersHash, _salt);
do {
    try IGnosisProxyFactory(gnosisProxyFactory).createProxyWithNonce(gnosisSafeSingleton, _initializer, nonce)
    returns (address _deployedSafe) {
        _safe = _deployedSafe;
    } catch Error(string memory reason) {
        // KEK
        if (keccak256(bytes(reason)) != _SAFE_CREATION_FAILURE_REASON) {
            // A safe is already deployed with the same salt, retry with bumped nonce
            revert SafeProxyCreationFailed();
        }
        emit SafeProxyCreationFailure(gnosisSafeSingleton, nonce, _initializer);
        nonce = _genNonce(ownersHash, _salt);
    } catch {
        revert SafeProxyCreationFailed();
    }
} while (_safe == address(0));

Tools Used

Manual Review

Recommended Mitigation Steps

  • Limit the Number of Iterations: Add a check to limit the number of iterations in the code. This will prevent attackers from creating an excessive number of Safe instances in a single transaction.
  • Ensure Transaction Integrity: Define a maximum limit for the number of iterations and check whether this limit has been exceeded before proceeding with the creation of Safe instances.

Example:

function _createSafe(address[] calldata _owners, bytes memory _initializer, bytes32 _salt)
    private
    returns (address _safe)
{
    address gnosisProxyFactory = AddressProviderService._getAuthorizedAddress(_GNOSIS_PROXY_FACTORY_HASH);
    address gnosisSafeSingleton = AddressProviderService._getAuthorizedAddress(_GNOSIS_SINGLETON_HASH);
    bytes32 ownersHash = keccak256(abi.encode(_owners));

    // Generate nonce based on owners and user provided salt
    uint256 nonce = _genNonce(ownersHash, _salt);

    // Limit the number of retries to prevent excessive gas usage
    uint256 maxRetries = 3; // set the desired maximum number of retries
    uint256 retries = 0;

    while (retries < maxRetries) {
        try IGnosisProxyFactory(gnosisProxyFactory).createProxyWithNonce(gnosisSafeSingleton, _initializer, nonce)
            returns (address _deployedSafe)
        {
            _safe = _deployedSafe;
            break; // Successfully created a safe, exit the loop
        } catch Error(string memory reason) {
            // Revert on any failure reason other than the known "safe already deployed" collision
            if (keccak256(bytes(reason)) != _SAFE_CREATION_FAILURE_REASON) {
                revert SafeProxyCreationFailed();
            }
            emit SafeProxyCreationFailure(gnosisSafeSingleton, nonce, _initializer);

            // Retry with a bumped nonce
            nonce = _genNonce(ownersHash, _salt);
            retries++;
        } catch {
            revert SafeProxyCreationFailed();
        }
    }

    // Safe creation failed after the maximum number of retries
    if (_safe == address(0)) {
        revert SafeProxyCreationFailed();
    }
}

Assessed type

DoS

QA Report

See the markdown file with the details of this report here.

Analysis

See the markdown file with the details of this report here.

Expiry check may return false due to different chains' timestamps

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/PolicyValidator.sol#L116-L118

Vulnerability details

Impact

The block timestamp is different on different chains:

if (expiryEpoch < uint32(block.timestamp)) {
    revert TxnExpired(expiryEpoch);
}

So this check may be bypassed across chains. For example, consider a situation where a signature has already expired on Eth-Mainnet but is not yet expired on Polygon.

Tools Used

Manual Review

Recommended Mitigation Steps

Make sure the signature expiry behaves as intended on every chain the protocol is deployed to, and ensure that a signature which has already expired on one chain cannot lead to dangerous operations on another, for example by binding each signature to a specific chain. A hedged sketch of chain-binding follows.
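
A minimal sketch of chain-binding, assuming hypothetical names and assuming the signing scheme does not already commit to the chain id (e.g. via an EIP-712 domain separator):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Illustration only, not the PolicyValidator's actual API: the expiry check is
// kept as-is, and block.chainid is mixed into the digest so a signature created
// for one chain cannot be reused on another chain where it has not yet expired.
contract ChainBoundExpirySketch {
    error TxnExpired(uint32 expiryEpoch);

    function _validationDigest(bytes32 structHash, uint32 expiryEpoch) internal view returns (bytes32) {
        // Same expiry check as the original code
        if (expiryEpoch < uint32(block.timestamp)) {
            revert TxnExpired(expiryEpoch);
        }
        // Binding the digest to block.chainid ties the signature to this chain
        return keccak256(abi.encode(structHash, expiryEpoch, block.chainid));
    }
}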

Assessed type

Other

`simulate` method lacks authorization, exposing potential unauthorized contract execution.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L104-L147
https://github.com/code-423n4/2023-10-brahma/blob/c217699448ffd7ec0253472bf0d156e52d45ca71/contracts/src/core/ConsoleFallbackHandler.sol#L98-L103

Vulnerability details

Impact

The simulate method exposes its functionality without proper authorization checks, allowing any external caller to simulate execution against arbitrary target contracts.

Proof of Concept

The simulate method is defined as:

function simulate(address targetContract, bytes calldata calldataPayload)
  external
  returns (bytes memory response)
{

  // Logic to execute arbitrary contract code
  // and return the bytes response

}

The issue is that there are no access controls on who can call this method: it is marked external, so any account can call it.

This exposes the ability to simulate contract execution on any targetContract with arbitrary calldataPayload without any authentication.

Attackers could use this to extract information from contracts they should not have access to, or potentially manipulate state in unsafe ways.

Attackers could leverage this to:

  • Extract sensitive data from contracts, like private variables or logic

  • Manipulate state in contracts they don't own e.g. updating mappings

  • Exploit vulnerabilities by simulating complex execution paths

  • Steal funds or assets by simulating privileged operations

Overall, allowing arbitrary users to simulate contract execution gives them expanded read/write capabilities they may not be authorized for.

The comment indicating it can execute arbitrary code:

/**
 * @dev Performs a delegetecall on a targetContract in the context of self. 
 * Internally reverts execution to avoid side effects (making it static). Catches revert and returns encoded result as bytes.
 * @param targetContract Address of the contract containing the code to execute.
 * @param calldataPayload Calldata that should be sent to the target contract (encoded method name and arguments).
*/

Tools Used

Manual Review

Recommended Mitigation Steps

simulate should add an access control modifier like onlyOwner:

function simulate(address targetContract, bytes calldata calldataPayload)
  external
  onlyOwner
  returns (bytes memory response)
{

  // Logic 
  
}

This would restrict the usage to authorized parties only.
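
As a sketch only (the single authorized caller here is an assumption for illustration; the project may instead derive authorization from its own registry or from the calling Safe), the guard could look like:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Minimal sketch of the suggested guard, not the actual ConsoleFallbackHandler.
contract SimulateGuardSketch {
    error Unauthorized();

    address public immutable authorizedCaller;

    constructor(address _authorizedCaller) {
        authorizedCaller = _authorizedCaller;
    }

    modifier onlyAuthorized() {
        if (msg.sender != authorizedCaller) revert Unauthorized();
        _;
    }

    function simulate(address targetContract, bytes calldata calldataPayload)
        external
        onlyAuthorized
        returns (bytes memory response)
    {
        // ... existing simulation logic would run here, unchanged ...
    }
}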

Assessed type

Access Control

Protocol is not `EIP712` compliant: incorrect typehash for `Validation` and `Transaction` structures

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/main/contracts/src/libraries/TypeHashHelper.sol#L45-L50
https://github.com/code-423n4/2023-10-brahma/blob/main/contracts/src/libraries/TypeHashHelper.sol#L52-L57

Vulnerability details

Summary

When implementing EIP712, among other things, a typehash must be defined for every data structure that is part of the signed message.

The structure typehash is defined as:

typeHash = keccak256(encodeType(typeOf(s)))

  • where encodeType is the type of a struct that is encoded as:

name ‖ "(" ‖ member₁ ‖ "," ‖ member₂ ‖ "," ‖ … ‖ memberₙ ")"

  • and each member is written as:

type ‖ " " ‖ name.
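
For example, applying these rules to a toy struct (purely illustrative, not one of the project's structs):

// struct Mail { address to; string contents; }
//
// encodeType(Mail) = "Mail(address to,string contents)"
// typeHash(Mail)   = keccak256("Mail(address to,string contents)")
bytes32 constant MAIL_TYPEHASH = keccak256("Mail(address to,string contents)");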

The project uses 2 structures in the signed data, Transaction and Validation, declared in TypeHashHelper:

    /**
     * @notice Structural representation of transaction details
     * @param operation type of operation
     * @param to address to send tx to
     * @param account address of safe
     * @param executor address of executor if executed via executor plugin, address(0) if executed via execTransaction
     * @param value txn value
     * @param nonce txn nonce
     * @param data txn callData
     */
    struct Transaction {
        uint8 operation;
        address to;
        address account;
        address executor;
        uint256 value;
        uint256 nonce;
        bytes data;
    }


    /**
     * @notice Type of validation struct to hash
     * @param expiryEpoch max time till validity of the signature
     * @param transactionStructHash txn digest generated using TypeHashHelper._buildTransactionStructHash()
     * @param policyHash policy commit hash of the safe account
     */
    struct Validation {
        uint32 expiryEpoch;
        bytes32 transactionStructHash;
        bytes32 policyHash;
    }

However, the precalculated typehashes are computed for different structures (ExecutionParams and ValidationParams), whose names, member ordering, and, in the validation case, member types do not match the structs declared above:

    /**
     * @notice EIP712 typehash for transaction data
     * @dev keccak256("ExecutionParams(address to,uint256 value,bytes data,uint8 operation,address account,address executor,uint256 nonce)");
     */
    bytes32 public constant TRANSACTION_PARAMS_TYPEHASH =
        0xfd4628b53a91b366f1977138e2eda53b93c8f5cc74bda8440f108d7da1e99290;
    /**
     * @notice EIP712 typehash for validation data
     * @dev keccak256("ValidationParams(ExecutionParams executionParams,bytes32 policyHash,uint32 expiryEpoch)ExecutionParams(address to,uint256 value,bytes data,uint8 operation,address account,address executor,uint256 nonce)")
     */
    bytes32 public constant VALIDATION_PARAMS_TYPEHASH =
        0x0c7f653e0f641e41fbb4ed1440c7d0b08b8d2a19e1c35cfc98de2d47519e15b1;

Proof of Concept

The issue went undetected in the initial development phase and can be verified because the tests still check the constants against these same mismatched type strings:

    function testValidateConstants() public {
        assertEq(
            TypeHashHelper.TRANSACTION_PARAMS_TYPEHASH,
            keccak256(
                "ExecutionParams(address to,uint256 value,bytes data,uint8 operation,address account,address executor,uint256 nonce)"
            )
        );
        assertEq(
            TypeHashHelper.VALIDATION_PARAMS_TYPEHASH,
            keccak256(
                "ValidationParams(ExecutionParams executionParams,bytes32 policyHash,uint32 expiryEpoch)ExecutionParams(address to,uint256 value,bytes data,uint8 operation,address account,address executor,uint256 nonce)"
            )
        );
    }

The specific test can be verified by running:

forge test --fork-url "https://eth-mainnet.g.alchemy.com/v2/<ALCHEMY_API_KEY>" -vvv --ffi --match-test testValidateConstants

Impact

The protocol is not EIP712 compliant, which will result in issues for integrators: off-chain signers that derive typehashes from the declared structures will produce struct hashes and signatures that fail on-chain verification.

Tools Used

Manual review and an online keccak256 calculator, used to verify that the hardcoded hashes do not correspond to the declared structures.

Recommendations

Modify the typehash constants (and the corresponding tests) so the type strings match the structures that are actually signed, or rename/reorder the structures to match the existing strings.
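
A hedged sketch, assuming the intended fix is to keep the declared Transaction and Validation structs as the canonical signed types (the struct-hash builders, e.g. TypeHashHelper._buildTransactionStructHash, must then encode members in exactly this order, with the dynamic bytes member hashed):

    /**
     * @notice EIP712 typehash for transaction data
     * @dev keccak256("Transaction(uint8 operation,address to,address account,address executor,uint256 value,uint256 nonce,bytes data)")
     */
    bytes32 public constant TRANSACTION_PARAMS_TYPEHASH =
        keccak256("Transaction(uint8 operation,address to,address account,address executor,uint256 value,uint256 nonce,bytes data)");

    /**
     * @notice EIP712 typehash for validation data
     * @dev keccak256("Validation(uint32 expiryEpoch,bytes32 transactionStructHash,bytes32 policyHash)")
     */
    bytes32 public constant VALIDATION_PARAMS_TYPEHASH =
        keccak256("Validation(uint32 expiryEpoch,bytes32 transactionStructHash,bytes32 policyHash)");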

Assessed type

Other

The console account address is passed in as a parameter. This could be manipulated to enable a malicious account as a module. The console account address should be derived from msg.sender.

Lines of code

https://github.com/code-423n4/2023-10-brahma/blob/a6424230052fc47c4215200c19a8eef9b07dfccc/contracts/src/core/SafeDeployer.sol#L177-L182

Vulnerability details

Impact

Passing _consoleAccount in directly allows an arbitrary address to be enabled as a module on the newly deployed sub-account, leading to potential theft or hijacking of the sub-account.

Proof of Concept

Specifically, in the _setupSubAccount() function, the _consoleAccount parameter is passed in directly when encoding the call to enableModule():

 txns[0] = Types.Executable({
   callType: Types.CallType.DELEGATECALL,
   target: safeEnabler, 
   value: 0,
   data: abi.encodeCall(IGnosisSafe.enableModule, (_consoleAccount)) 
 });

This allows the caller to specify any address as the console account, rather than deriving it from msg.sender.

Here's a deep dive on how it works:

The _setupSubAccount function is used to initialize a new sub-account safe. It takes the console account address as a parameter named _consoleAccount.

In the function logic, it encodes a call to IGnosisSafe.enableModule passing the _consoleAccount parameter as the module to enable:

 txns[0] = Types.Executable({
   target: safeEnabler,
   data: abi.encodeCall(
     IGnosisSafe.enableModule, 
     (_consoleAccount)  
   )
 });

When this setup data is used to deploy the new sub-account safe, it will execute this encoded call during initialization.

This has the effect of enabling whatever address was passed as _consoleAccount as a module on the new sub-account safe.

A malicious attacker could exploit this by:

  1. Deploying their own malicious console account contract with logic to steal funds etc.
  2. Calling the deploySubAccount function, passing the address of their malicious contract as the _consoleAccount parameter.
  3. When the sub-account safe is initialized, it will enable the attacker's malicious contract as a module.
  4. Now the attacker has full control over the new sub-account safe through their malicious console logic. They can call functions to steal funds etc.

To prove this concept, an attacker could:

  • Deploy a malicious console contract with a function stealFunds() that drains funds from the safe
  • Call deploySubAccount, passing the malicious contract address
  • When the new sub-account is deployed, call stealFunds() on the malicious contract to drain funds

This illustrates how the attack could work in practice.
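
For illustration, a hedged sketch of such a malicious console contract (all names here are hypothetical; only execTransactionFromModule is the standard Gnosis Safe module entry point, with the operation parameter being the uint8 encoding of Enum.Operation):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

interface IGnosisSafeModuleManager {
    // Standard Gnosis Safe module entry point (Enum.Operation encoded as uint8)
    function execTransactionFromModule(address to, uint256 value, bytes calldata data, uint8 operation)
        external
        returns (bool success);
}

// Hypothetical attacker contract passed as _consoleAccount to deploySubAccount
contract MaliciousConsole {
    address public immutable attacker;

    constructor() {
        attacker = msg.sender;
    }

    // Once this contract is enabled as a module on the sub-account safe,
    // the attacker can route arbitrary calls through it, e.g. pulling ETH.
    function stealFunds(address subAccountSafe, uint256 amount) external {
        require(msg.sender == attacker, "not attacker");
        IGnosisSafeModuleManager(subAccountSafe).execTransactionFromModule(attacker, amount, "", 0); // 0 = CALL
    }
}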

Tools Used

Manual

Recommended Mitigation Steps

The _consoleAccount parameter should be derived from msg.sender instead of being passed in directly. An illustrative example:

 address _consoleAccount = msg.sender;

 txns[0] = Types.Executable({
   callType: Types.CallType.DELEGATECALL,
   target: safeEnabler,
   value: 0,
   data: abi.encodeCall(IGnosisSafe.enableModule, (_consoleAccount))  
 });

This ensures only the console account that called deploySubAccount() can be enabled as a module, preventing the vulnerability.

Assessed type

Other
