
ssh2-streams's Introduction

Description

SSH2 and SFTP(v3) client/server protocol streams for node.js.


Requirements

Install

npm install ssh2-streams

API

require('ssh2-streams').SSH2Stream returns an SSH2Stream constructor.

require('ssh2-streams').SFTPStream returns an SFTPStream constructor.

require('ssh2-streams').utils returns an object of useful utility functions.

require('ssh2-streams').constants returns an object containing useful SSH protocol constants.

SSH2Stream events

Client/Server events

  • header(< object >headerInfo) - Emitted when the protocol header is seen. headerInfo contains:

    • greeting - string - (Client-only) An optional greeting message presented by the server.

    • identRaw - string - The raw identification string sent by the remote party.

    • versions - object - Contains various information parsed from identRaw:

      • protocol - string - The protocol version (always 1.99 or 2.0) supported by the remote party.

      • software - string - The software name used by the remote party.

    • comments - string - Any additional text that comes after the software name.

  • GLOBAL_REQUEST(< string >reqName, < boolean >wantReply, < mixed >reqData)

  • CHANNEL_DATA:<channel>(< Buffer >data)

  • CHANNEL_EXTENDED_DATA:<channel>(< integer >type, < Buffer >data)

  • CHANNEL_WINDOW_ADJUST:<channel>(< integer >bytesToAdd)

  • CHANNEL_SUCCESS:<channel>()

  • CHANNEL_FAILURE:<channel>()

  • CHANNEL_EOF:<channel>()

  • CHANNEL_CLOSE:<channel>()

  • CHANNEL_OPEN_CONFIRMATION:<channel>(< object >channelInfo) - channelInfo contains:

    • recipient - integer - The local channel number.

    • sender - integer - The remote party's channel number.

    • window - integer - The initial window size for the channel.

    • packetSize - integer - The maximum packet size for the channel.

  • CHANNEL_OPEN_FAILURE:<channel>(< object >failInfo) - failInfo contains:

    • recipient - integer - The local channel number.

    • reasonCode - integer - The reason code of the failure.

    • reason - string - A text representation of the reasonCode.

    • description - string - An optional description of the failure.

  • DISCONNECT(< string >reason, < integer >reasonCode, < string >description)

  • DEBUG(< string >message)

  • NEWKEYS()

  • REQUEST_SUCCESS([< Buffer >resData])

  • REQUEST_FAILURE()
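The identRaw string in the header event follows the RFC 4253 identification format, SSH-protoversion-softwareversion SP comments. A minimal sketch of how such a string decomposes into the versions/comments fields described above (parseIdent is a hypothetical helper, not part of this library):

```javascript
// Hypothetical helper: decompose an RFC 4253 identification string
// ("SSH-protoversion-softwareversion SP comments") into the same
// shape as headerInfo.versions and headerInfo.comments.
function parseIdent(identRaw) {
  var m = /^SSH-(\d+\.\d+)-(\S+)(?:\s+(.*))?$/.exec(identRaw);
  if (m === null) return null;
  return {
    versions: { protocol: m[1], software: m[2] },
    comments: m[3] || ''
  };
}

console.log(parseIdent('SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.1'));
// { versions: { protocol: '2.0', software: 'OpenSSH_8.9p1' },
//   comments: 'Ubuntu-3ubuntu0.1' }
```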

Client-only events

  • fingerprint(< Buffer >hostKey, < function >callback) - This event allows you to verify a host's key. Calling callback with true continues the handshake; calling it with false triggers a disconnection. The default behavior is to auto-allow any host key if there are no handlers for this event.

  • SERVICE_ACCEPT(< string >serviceName)

  • USERAUTH_PASSWD_CHANGEREQ(< string >message)

  • USERAUTH_INFO_REQUEST(< string >name, < string >instructions, < string >lang, < array >prompts)

  • USERAUTH_PK_OK()

  • USERAUTH_SUCCESS()

  • USERAUTH_FAILURE(< array >methodsContinue, < boolean >partialSuccess)

  • USERAUTH_BANNER(< string >message)

  • CHANNEL_OPEN(< object >channelInfo) - channelInfo contains:

    • type - string - The channel type (e.g. x11, forwarded-tcpip).

    • sender - integer - The remote party's channel number.

    • window - integer - The initial window size for the channel.

    • packetSize - integer - The maximum packet size for the channel.

    • data - object - The properties available depend on type:

      • x11:

        • srcIP - string - Source IP address of X11 connection request.

        • srcPort - string - Source port of X11 connection request.

      • forwarded-tcpip:

        • srcIP - string - Source IP address of incoming connection.

        • srcPort - string - Source port of incoming connection.

        • destIP - string - Destination IP address of incoming connection.

        • destPort - string - Destination port of incoming connection.

      • forwarded-streamlocal@openssh.com:

        • socketPath - string - Source socket path of incoming connection.

      • auth-agent@openssh.com has no extra data.

  • CHANNEL_REQUEST:<channel>(< object >reqInfo) - reqInfo properties depend on reqInfo.request:

    • exit-status:

      • code - integer - The exit status code of the remote process.
    • exit-signal:

      • signal - string - The signal name.

      • coredump - boolean - Was the exit the result of a core dump?

      • description - string - An optional error message.

Server-only events

  • SERVICE_REQUEST(< string >serviceName)

  • USERAUTH_REQUEST(< string >username, < string >serviceName, < string >authMethod, < mixed >authMethodData) - authMethodData depends on authMethod:

    • For password, it's a string containing the password.

    • For publickey, it's an object containing:

      • keyAlgo - string - The public key algorithm.

      • key - Buffer - The public key data.

      • signature - mixed - If set, it is a Buffer containing the signature to be verified.

      • blob - mixed - If set, it is a Buffer containing the data to sign. The resulting signature is what is compared to signature.

    • For hostbased, it's an object including the properties from publickey but also:

      • localHostname - string - The client's hostname to be verified.

      • localUsername - string - The client's (local) username to be verified.

  • USERAUTH_INFO_RESPONSE(< array >responses)

  • GLOBAL_REQUEST(< string >reqName, < boolean >wantReply, < mixed >reqData) - reqData depends on reqName:

    • For tcpip-forward/cancel-tcpip-forward, it's an object containing:

      • bindAddr - string - The IP address to start/stop binding to.

      • bindPort - string - The port to start/stop binding to.

    • For streamlocal-forward@openssh.com/cancel-streamlocal-forward@openssh.com, it's an object containing:

      • socketPath - string - The socket path to start/stop listening on.

    • For no-more-sessions@openssh.com, there is no reqData.

    • For any other requests, it's a Buffer containing raw request-specific data if there is any extra data.

  • CHANNEL_OPEN(< object >channelInfo) - channelInfo contains:

    • type - string - The channel type (e.g. session, direct-tcpip).

    • sender - integer - The remote party's channel number.

    • window - integer - The initial window size for the channel.

    • packetSize - integer - The maximum packet size for the channel.

    • data - object - The properties available depend on type:

      • direct-tcpip:

        • srcIP - string - Source IP address of outgoing connection.

        • srcPort - string - Source port of outgoing connection.

        • destIP - string - Destination IP address of outgoing connection.

        • destPort - string - Destination port of outgoing connection.

      • direct-streamlocal@openssh.com:

        • socketPath - string - Destination socket path of outgoing connection.
      • session has no extra data.

  • CHANNEL_REQUEST:<channel>(< object >reqInfo) - reqInfo properties depend on reqInfo.request:

    • pty-req:

      • wantReply - boolean - The client is requesting a response to this request.

      • term - string - The terminal type name.

      • cols - integer - The number of columns.

      • rows - integer - The number of rows.

      • width - integer - The width in pixels.

      • height - integer - The height in pixels.

      • modes - object - The terminal modes.

    • window-change:

      • cols - integer - The number of columns.

      • rows - integer - The number of rows.

      • width - integer - The width in pixels.

      • height - integer - The height in pixels.

    • x11-req:

      • wantReply - boolean - The client is requesting a response to this request.

      • single - boolean - Whether only a single X11 connection should be allowed.

      • protocol - string - The X11 authentication protocol to be used.

      • cookie - string - The hex-encoded X11 authentication cookie.

      • screen - integer - The screen number for incoming X11 connections.

    • env:

      • wantReply - boolean - The client is requesting a response to this request.

      • key - string - The environment variable name.

      • val - string - The environment variable value.

    • shell:

      • wantReply - boolean - The client is requesting a response to this request.
    • exec:

      • wantReply - boolean - The client is requesting a response to this request.

      • command - string - The command to be executed.

    • subsystem:

      • wantReply - boolean - The client is requesting a response to this request.

      • subsystem - string - The name of the subsystem.

    • signal:

      • signal - string - The signal name (prefixed with SIG).
    • xon-xoff:

      • clientControl - boolean - Client can/can't perform flow control (control-S/control-Q processing).
    • auth-agent-req@openssh.com has no reqInfo.

SSH2Stream properties

  • bytesSent - integer - The number of bytes sent since the last keying. This metric can be useful in determining when to call rekey().

  • bytesReceived - integer - The number of bytes received since the last keying. This metric can be useful in determining when to call rekey().
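As a sketch of how these counters might drive re-keying (the 1 GB threshold is a common rule of thumb, not a library default, and maybeRekey is a hypothetical helper):

```javascript
// Sketch: trigger rekey() once roughly 1 GB has crossed the wire in
// either direction. Threshold and helper name are assumptions.
var GB = 1024 * 1024 * 1024;

function maybeRekey(stream) {
  if (stream.bytesSent >= GB || stream.bytesReceived >= GB)
    stream.rekey();
}

// Demo against a mock object standing in for an SSH2Stream:
var mock = {
  bytesSent: GB, bytesReceived: 0, rekeyed: false,
  rekey: function() { this.rekeyed = true; }
};
maybeRekey(mock);
console.log(mock.rekeyed); // true
```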

SSH2Stream methods

  • (constructor)(< object >config) - Creates and returns a new SSH2Stream instance. SSH2Stream instances are Duplex streams. config can contain:

    • server - boolean - Set to true to create an instance in server mode. Default: false

    • hostKeys - object - If in server mode, an object keyed on host key format (see supported serverHostKey values in the algorithms option below) with values being (decrypted) Buffers or strings that contain PEM-encoded (OpenSSH format) host private key(s). Default: (none)

    • greeting - string - If in server mode, an optional message to send to the user immediately upon connection, before the handshake. Note: Most clients usually ignore this. Default: (none)

    • banner - string - If in server mode, an optional message to send to the user once, right before authentication begins. Default: (none)

    • ident - string - A custom software name/version identifier. Default: 'ssh2js' + moduleVersion + 'srv' (server mode) or 'ssh2js' + moduleVersion (client mode)

    • maxPacketSize - integer - The maximum packet size that will be accepted. It should be 35000 bytes or larger to be compatible with other SSH2 implementations. Default: 35000

    • highWaterMark - integer - This is the highWaterMark to use for the stream. Default: 32 * 1024

    • algorithms - object - This option allows you to explicitly override the default transport layer algorithms used for the connection. Each value must be an array of valid algorithms for that category. The order of the algorithms in the arrays is important, with the most favorable being first. Valid keys:

      • kex - array - Key exchange algorithms.

        • Default values:

          1. curve25519-sha256 (node v13.9.0 or newer)
          2. curve25519-sha256@libssh.org (node v13.9.0 or newer)
          3. ecdh-sha2-nistp256 (node v0.11.14 or newer)
          4. ecdh-sha2-nistp384 (node v0.11.14 or newer)
          5. ecdh-sha2-nistp521 (node v0.11.14 or newer)
          6. diffie-hellman-group-exchange-sha256 (node v0.11.12 or newer)
          7. diffie-hellman-group14-sha256
          8. diffie-hellman-group16-sha512
          9. diffie-hellman-group18-sha512
          10. diffie-hellman-group14-sha1
        • Supported values:

          • curve25519-sha256 (node v13.9.0 or newer)
          • curve25519-sha256@libssh.org (node v13.9.0 or newer)
          • ecdh-sha2-nistp256 (node v0.11.14 or newer)
          • ecdh-sha2-nistp384 (node v0.11.14 or newer)
          • ecdh-sha2-nistp521 (node v0.11.14 or newer)
          • diffie-hellman-group-exchange-sha1 (node v0.11.12 or newer)
          • diffie-hellman-group-exchange-sha256 (node v0.11.12 or newer)
          • diffie-hellman-group1-sha1
          • diffie-hellman-group14-sha1
          • diffie-hellman-group14-sha256
          • diffie-hellman-group16-sha512
          • diffie-hellman-group18-sha512
      • cipher - array - Ciphers.

        • Default values:

          1. aes128-ctr
          2. aes192-ctr
          3. aes256-ctr
          4. aes128-gcm (node v0.11.12 or newer)
          5. aes128-gcm@openssh.com (node v0.11.12 or newer)
          6. aes256-gcm (node v0.11.12 or newer)
          7. aes256-gcm@openssh.com (node v0.11.12 or newer)
        • Supported values:

          • aes128-ctr
          • aes192-ctr
          • aes256-ctr
          • aes128-gcm (node v0.11.12 or newer)
          • aes128-gcm@openssh.com (node v0.11.12 or newer)
          • aes256-gcm (node v0.11.12 or newer)
          • aes256-gcm@openssh.com (node v0.11.12 or newer)
          • aes256-cbc
          • aes192-cbc
          • aes128-cbc
          • blowfish-cbc
          • 3des-cbc
          • arcfour256
          • arcfour128
          • cast128-cbc
          • arcfour
      • serverHostKey - array - Server host key formats. In server mode, this list must agree with the host private keys set in the hostKeys config setting.

        • Default values:

          1. ssh-ed25519 (node v12.0.0 or newer)
          2. ecdsa-sha2-nistp256 (node v5.2.0 or newer)
          3. ecdsa-sha2-nistp384 (node v5.2.0 or newer)
          4. ecdsa-sha2-nistp521 (node v5.2.0 or newer)
          5. ssh-rsa
        • Supported values:

          • ssh-ed25519 (node v12.0.0 or newer)
          • ssh-rsa
          • ecdsa-sha2-nistp256 (node v5.2.0 or newer)
          • ecdsa-sha2-nistp384 (node v5.2.0 or newer)
          • ecdsa-sha2-nistp521 (node v5.2.0 or newer)
          • ssh-dss
      • hmac - array - (H)MAC algorithms.

        • Default values:

          1. hmac-sha2-256
          2. hmac-sha2-512
          3. hmac-sha1
        • Supported values:

          • hmac-sha2-256
          • hmac-sha2-512
          • hmac-sha1
          • hmac-md5
          • hmac-sha2-256-96
          • hmac-sha2-512-96
          • hmac-ripemd160
          • hmac-sha1-96
          • hmac-md5-96
      • compress - array - Compression algorithms.

    • debug - function - Set this to a function that receives a single string argument to get detailed (local) debug information. Default: (none)
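Putting some of the options above together, a constructor config might look like this (a sketch; the algorithm names come from the supported lists above, and this particular selection is arbitrary, not a recommendation):

```javascript
// Example config object for the SSH2Stream constructor (sketch).
var config = {
  server: false,               // client mode
  highWaterMark: 32 * 1024,    // the documented default
  algorithms: {
    kex: ['curve25519-sha256', 'ecdh-sha2-nistp256'],
    cipher: ['aes128-ctr', 'aes256-ctr'],
    serverHostKey: ['ssh-ed25519', 'ssh-rsa'],
    hmac: ['hmac-sha2-256', 'hmac-sha1']
  },
  debug: function(msg) { console.error(msg); }
};
```

This object would then be passed as new SSH2Stream(config).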

Client/Server methods

  • ping() - boolean - Writes a dummy GLOBAL_REQUEST packet (specifically "keepalive@openssh.com") that requests a reply. Returns false if you should wait for the continue event before sending any more traffic.

  • disconnect([< integer >reasonCode]) - boolean - Writes a disconnect packet and closes the stream. Returns false if you should wait for the continue event before sending any more traffic.

  • rekey() - boolean - Starts the re-keying process. Incoming/Outgoing packets are buffered until the re-keying process has finished. Returns false to indicate that no more packets should be written until the NEWKEYS event is seen.

  • requestSuccess([< Buffer >data]) - boolean - Writes a request success packet. Returns false if you should wait for the continue event before sending any more traffic.

  • requestFailure() - boolean - Writes a request failure packet. Returns false if you should wait for the continue event before sending any more traffic.

  • channelSuccess() - boolean - Writes a channel success packet. Returns false if you should wait for the continue event before sending any more traffic.

  • channelFailure() - boolean - Writes a channel failure packet. Returns false if you should wait for the continue event before sending any more traffic.

  • channelEOF(< integer >channel) - boolean - Writes a channel EOF packet for the given channel. Returns false if you should wait for the continue event before sending any more traffic.

  • channelClose(< integer >channel) - boolean - Writes a channel close packet for the given channel. Returns false if you should wait for the continue event before sending any more traffic.

  • channelWindowAdjust(< integer >channel, < integer >amount) - boolean - Writes a channel window adjust packet for the given channel where amount is the number of bytes to add to the channel window. Returns false if you should wait for the continue event before sending any more traffic.

  • channelData(< integer >channel, < mixed >data) - boolean - Writes a channel data packet for the given channel where data is a Buffer or string. Returns false if you should wait for the continue event before sending any more traffic.

  • channelExtData(< integer >channel, < mixed >data, < integer >type) - boolean - Writes a channel extended data packet for the given channel where data is a Buffer or string. Returns false if you should wait for the continue event before sending any more traffic.

  • channelOpenConfirm(< integer >remoteChannel, < integer >localChannel, < integer >initWindow, < integer >maxPacket) - boolean - Writes a channel open confirmation packet. Returns false if you should wait for the continue event before sending any more traffic.

  • channelOpenFail(< integer >remoteChannel, < integer >reasonCode[, < string >description]) - boolean - Writes a channel open failure packet. Returns false if you should wait for the continue event before sending any more traffic.

Client-only methods

  • service(< string >serviceName) - boolean - Writes a service request packet for serviceName. Returns false if you should wait for the continue event before sending any more traffic.

  • tcpipForward(< string >bindAddr, < integer >bindPort[, < boolean >wantReply]) - boolean - Writes a tcpip forward global request packet. wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic.

  • cancelTcpipForward(< string >bindAddr, < integer >bindPort[, < boolean >wantReply]) - boolean - Writes a cancel tcpip forward global request packet. wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic.

  • authPassword(< string >username, < string >password) - boolean - Writes a password userauth request packet. Returns false if you should wait for the continue event before sending any more traffic.

  • authPK(< string >username, < object >pubKey[, < function >cbSign]) - boolean - Writes a publickey userauth request packet. pubKey is the object returned from using utils.parseKey() on a private or public key. If cbSign is not present, a pubkey check userauth packet is written. Otherwise cbSign is called with (blob, callback), where blob is the data to sign with the private key and the resulting signature Buffer is passed to callback as the first argument. Returns false if you should wait for the continue event before sending any more traffic.

  • authHostbased(< string >username, < object >pubKey, < string >localHostname, < string >localUsername, < function >cbSign) - boolean - Writes a hostbased userauth request packet. pubKey is the object returned from using utils.parseKey() on a private or public key. cbSign is called with (blob, callback), where blob is the data to sign with the private key and the resulting signature Buffer is passed to callback as the first argument. Returns false if you should wait for the continue event before sending any more traffic.

  • authKeyboard(< string >username) - boolean - Writes a keyboard-interactive userauth request packet. Returns false if you should wait for the continue event before sending any more traffic.

  • authNone(< string >username) - boolean - Writes a "none" userauth request packet. Returns false if you should wait for the continue event before sending any more traffic.

  • authInfoRes(< array >responses) - boolean - Writes a userauth info response packet. responses is an array of zero or more strings corresponding to responses to prompts previously sent by the server. Returns false if you should wait for the continue event before sending any more traffic.

  • directTcpip(< integer >channel, < integer >initWindow, < integer >maxPacket, < object >config) - boolean - Writes a direct tcpip channel open packet. config must contain srcIP, srcPort, dstIP, and dstPort. Returns false if you should wait for the continue event before sending any more traffic.

  • session(< integer >channel, < integer >initWindow, < integer >maxPacket) - boolean - Writes a session channel open packet. Returns false if you should wait for the continue event before sending any more traffic.

  • openssh_agentForward(< integer >channel[, < boolean >wantReply]) - boolean - Writes an auth-agent-req@openssh.com channel request packet. wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic.

  • windowChange(< integer >channel, < integer >rows, < integer >cols, < integer >height, < integer >width) - boolean - Writes a window change channel request packet. Returns false if you should wait for the continue event before sending any more traffic.

  • pty(< integer >channel, < integer >rows, < integer >cols, < integer >height, < integer >width, < string >terminalType, < mixed >terminalModes[, < boolean >wantReply]) - boolean - Writes a pty channel request packet. If terminalType is falsey, vt100 is used. terminalModes can be the raw bytes, an object of the terminal modes to set, or a falsey value for no modes. wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic.

  • env(< integer >channel, < string >key, < mixed >value[, < boolean >wantReply]) - boolean - Writes an env channel request packet. value can be a string or Buffer. wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic.

  • shell(< integer >channel[, < boolean >wantReply]) - boolean - Writes a shell channel request packet. wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic.

  • exec(< integer >channel, < string >command[, < boolean >wantReply]) - boolean - Writes an exec channel request packet. wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic.

  • signal(< integer >channel, < string >signalName) - boolean - Writes a signal channel request packet. Returns false if you should wait for the continue event before sending any more traffic.

  • x11Forward(< integer >channel, < object >config[, < boolean >wantReply]) - boolean - Writes an X11 forward channel request packet. wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic. config can contain:

    • single - boolean - true if only a single connection should be forwarded.

    • protocol - string - The name of the X11 authentication method used (e.g. MIT-MAGIC-COOKIE-1).

    • cookie - string - The X11 authentication cookie encoded in hexadecimal.

    • screen - integer - The screen number to forward X11 connections for.

  • subsystem(< integer >channel, < string >name[, < boolean >wantReply]) - boolean - Writes a subsystem channel request packet. name is the name of the subsystem (e.g. sftp or netconf). wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic.

  • openssh_noMoreSessions([< boolean >wantReply]) - boolean - Writes a no-more-sessions@openssh.com request packet. wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic.

  • openssh_streamLocalForward(< string >socketPath[, < boolean >wantReply]) - boolean - Writes a streamlocal-forward@openssh.com request packet. wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic.

  • openssh_cancelStreamLocalForward(< string >socketPath[, < boolean >wantReply]) - boolean - Writes a cancel-streamlocal-forward@openssh.com request packet. wantReply defaults to true. Returns false if you should wait for the continue event before sending any more traffic.

  • openssh_directStreamLocal(< integer >channel, < integer >initWindow, < integer >maxPacket, < object >config) - boolean - Writes a direct-streamlocal@openssh.com channel open packet. config must contain socketPath. Returns false if you should wait for the continue event before sending any more traffic.

Server-only methods

  • serviceAccept(< string >serviceName) - boolean - Writes a service accept packet. Returns false if you should wait for the continue event before sending any more traffic.

  • authFailure([< array >authMethods[, < boolean >partialSuccess]]) - boolean - Writes a userauth failure packet. authMethods is an array of authentication methods that can continue. Returns false if you should wait for the continue event before sending any more traffic.

  • authSuccess() - boolean - Writes a userauth success packet. Returns false if you should wait for the continue event before sending any more traffic.

  • authPKOK(< string >keyAlgorithm, < Buffer >keyData) - boolean - Writes a userauth PK OK packet. Returns false if you should wait for the continue event before sending any more traffic.

  • authInfoReq(< string >name, < string >instructions, < array >prompts) - boolean - Writes a userauth info request packet. prompts is an array of { prompt: 'Prompt text', echo: true } objects (prompt being the prompt text and echo indicating whether the client's response to the prompt should be echoed to their display). Returns false if you should wait for the continue event before sending any more traffic.

  • forwardedTcpip(< integer >channel, < integer >initWindow, < integer >maxPacket, < object >info) - boolean - Writes a forwarded tcpip channel open packet. info must contain boundAddr, boundPort, remoteAddr, and remotePort. Returns false if you should wait for the continue event before sending any more traffic.

  • x11(< integer >channel, < integer >initWindow, < integer >maxPacket, < object >info) - boolean - Writes an X11 channel open packet. info must contain originAddr and originPort. Returns false if you should wait for the continue event before sending any more traffic.

  • openssh_authAgent(< integer >channel, < integer >initWindow, < integer >maxPacket) - boolean - Writes an auth-agent@openssh.com channel open packet. Returns false if you should wait for the continue event before sending any more traffic.

  • openssh_forwardedStreamLocal(< integer >channel, < integer >initWindow, < integer >maxPacket, < object >info) - boolean - Writes a forwarded-streamlocal@openssh.com channel open packet. info must contain socketPath. Returns false if you should wait for the continue event before sending any more traffic.

  • exitStatus(< integer >channel, < integer >exitCode) - boolean - Writes an exit status channel request packet. Returns false if you should wait for the continue event before sending any more traffic.

  • exitSignal(< integer >channel, < string >signalName, < boolean >coreDumped, < string >errorMessage) - boolean - Writes an exit signal channel request packet. Returns false if you should wait for the continue event before sending any more traffic.
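To illustrate the keyboard-interactive flow from the server side: the prompts array given to authInfoReq() pairs index-for-index with the responses array later delivered by USERAUTH_INFO_RESPONSE. A sketch (checkResponses is a hypothetical validation helper, not library API):

```javascript
// Sketch of a server-side keyboard-interactive exchange. These
// prompts would be passed to authInfoReq(); the client's answers
// arrive later via USERAUTH_INFO_RESPONSE, index for index.
var prompts = [
  { prompt: 'Password: ', echo: false },
  { prompt: 'One-time code: ', echo: true }
];

// Hypothetical validation once the responses arrive:
function checkResponses(responses, expected) {
  return responses.length === expected.length &&
         responses.every(function(r, i) { return r === expected[i]; });
}

console.log(checkResponses(['hunter2', '123456'],
                           ['hunter2', '123456'])); // true
```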

Utility methods

  • parseKey(< mixed >keyData[, < string >passphrase]) - mixed - Parses a private/public key in OpenSSH, RFC4716, or PPK format. For encrypted private keys, the key will be decrypted with the given passphrase. The returned value will be an array of objects (currently in the case of modern OpenSSH keys) or an object with these properties and methods:

    • type - string - The full key type (e.g. 'ssh-rsa')

    • comment - string - The comment for the key

    • getPrivatePEM() - string - This returns the PEM version of a private key

    • getPublicPEM() - string - This returns the PEM version of a public key (for either public key or derived from a private key)

    • getPublicSSH() - string - This returns the SSH version of a public key (for either public key or derived from a private key)

    • sign(< mixed >data) - mixed - This signs the given data using this key and returns a Buffer containing the signature on success. On failure, an Error will be returned. data can be anything accepted by node's sign.update().

    • verify(< mixed >data, < Buffer >signature) - mixed - This verifies a signature of the given data using this key and returns true if the signature could be verified. On failure, either false will be returned or an Error will be returned upon a more critical failure. data can be anything accepted by node's verify.update().

ssh2-streams's People

Contributors

131, andrewjjenkins, bompus, fredericosilva, h4ndshake, hrimhari, joeyhub, kjvalencik, leedm777, liximomo, makeusabrew, medikoo, mscdex, ofagbemi, pss10c, robey, scout119, tol1, traviskaufman, vinniecent, wdavidw


ssh2-streams's Issues

SFTP file upload fails

My SFTP upload is failing. The connection seems OK and it starts writing the file, but the data gets chunked up and it takes a long time (30+ seconds) between chunks. The AWS Lambda function running it times out after 5 minutes.

Oddly, there is one log line in there that indicates the writeStream was closed, but then it also seems to be continuing.

See the log here: https://pastebin.com/raw/uEvgMYQp

Pertinent code:

var fs = require('fs');
var ssh2 = require('ssh2');

var Client = ssh2.Client;
var conn = new Client();
conn.on('ready', function() {
    conn.sftp(function(err, sftp) {
        var readStream = fs.createReadStream(localfile);
        var writeStream = sftp.createWriteStream(SFTP_PATH + filename);

        writeStream.on('close', function() {
            console.log('- file transferred successfully');
        });

        writeStream.on('end', function() {
            console.log('sftp connection closed');
            conn.end();
        });

        readStream.pipe(writeStream);
    });
}).connect({
    host: SFTP_HOST,
    port: 22,
    username: SFTP_USER,
    password: SFTP_PASS,
    debug: console.log,
    algorithms: { cipher: [ 'aes256-cbc' ] }
});

Am I doing something wrong here? I found a couple of third-party examples of writing files via SFTP using node ssh2, but it doesn't seem to be documented directly in this repo.

createWriteStream encoding

I tried writing an image to the remote server, both as a base64 string and as a binary string. I set the options to {encoding: 'base64'} and {encoding: 'binary'}; both did nothing, and the file was written as plain text containing the base64 string. How do I properly write non-text files?

Problem with fastGet on a server whose fstat response sets the size flag but reports a size of 0

We had to download files from a server that has a broken fstat: in the fstat response it sets the size flag but reports a size of 0, even though the file is not empty.
fastXfer (and thus fastGet) falls back to stat if fstat returns an error, but since there is no error and fstat reports 0, it just goes through the normal logic, which does not handle empty files (that is another issue, because some users may care about downloading empty files).
So, to work around this problem, we added the following code in sftp.js at line 473:
if (attrs.size === 0) { cb('file size is equal to 0'); } else { cb(undefined, attrs); }
The fallback logic is implemented in different places; I only patched the fastXfer method. I think it would be better to have a doYourBestStat method centralizing the fallback logic.
I also think the library should offer a configuration option to skip fstat altogether when a specific server is known not to implement it correctly.

do you want me to submit a PR?
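The proposed doYourBestStat could be sketched roughly as below; the function name and the zero-size heuristic come from this report, not from the library, and sftp/handle/path are the usual ssh2-streams SFTP arguments:

```javascript
// Best-effort stat: try fstat on the open handle first, and fall back
// to a path-based stat() when fstat errors out or reports an
// implausible zero size (the broken-server case described above).
function doYourBestStat(sftp, handle, path, cb) {
  sftp.fstat(handle, function (err, attrs) {
    if (!err && attrs.size !== 0)
      return cb(undefined, attrs);
    // fstat failed or claims an empty file: double-check with stat()
    sftp.stat(path, cb);
  });
}
```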

utils.parseKey("algo key comment") includes the comment in .public

That is, the comment after the key is included in the .public Buffer, but I suspect it shouldn't be.

I ran into this when I tried https://github.com/mscdex/ssh2#server-examples and found that the server did not accept my public key (a typical id_rsa.pub file with a comment after the key).

# iojs
> utils = require('../ssh2-streams/lib/utils')
{ iv_inc: [Function],
  isStreamCipher: [Function],
  isGCM: [Function],
  readInt: [Function],
  readString: [Function: readString],
  parseKey: [Function],
  genPublicKey: [Function: genPublicKey],
  convertPPKPrivate: [Function: convertPPKPrivate],
  verifyPPKMAC: [Function: verifyPPKMAC],
  decryptKey: [Function] }
> utils.parseKey('ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAQvV0lqHPSW+zuD+sT0MBlj4lmdmsBNG8Xhglle/RW9VSPBMuHse9lHxDrpIMpZ0tKXJvdaEqfy3RuSFkDAG7xP+UUl4V3W67qeNPAjLiR459fojef4FiUAjq3sVoEs9tGEI4ZpuAkizdJd7RJ2Qmyw0HDmM+722c6q32IL7cpQG9aHavjOsczFIqjl1Y5/xEdmEcreOa877KKheCFmCyCLZi22UKRKwOD/mLV2uWfAf5KOmgQtOgG4LbYwSOLtlcB2qzf32RY5y/vDQin76SGkNT39jPQn2XSPAVWThh7X6BK4mWc+0B7iCWokL94UnuFUl7z0Uj87QRnm/XKJ7V at@lindev').public.length
285
> utils.parseKey('ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAQvV0lqHPSW+zuD+sT0MBlj4lmdmsBNG8Xhglle/RW9VSPBMuHse9lHxDrpIMpZ0tKXJvdaEqfy3RuSFkDAG7xP+UUl4V3W67qeNPAjLiR459fojef4FiUAjq3sVoEs9tGEI4ZpuAkizdJd7RJ2Qmyw0HDmM+722c6q32IL7cpQG9aHavjOsczFIqjl1Y5/xEdmEcreOa877KKheCFmCyCLZi22UKRKwOD/mLV2uWfAf5KOmgQtOgG4LbYwSOLtlcB2qzf32RY5y/vDQin76SGkNT39jPQn2XSPAVWThh7X6BK4mWc+0B7iCWokL94UnuFUl7z0Uj87QRnm/XKJ7V').public.length
279
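Until parseKey is fixed, one workaround is to strip the comment before parsing. A .pub / authorized_keys line is '<algorithm> <base64-key> [comment]', so keeping only the first two whitespace-separated fields is enough (hypothetical helper, not library API):

```javascript
// Drop the trailing comment from a public key line so it cannot leak
// into the data handed to utils.parseKey().
function stripKeyComment(line) {
  return line.trim().split(/\s+/).slice(0, 2).join(' ');
}
```

utils.parseKey(stripKeyComment(fs.readFileSync('id_rsa.pub', 'utf8'))) then yields the same .public regardless of the comment.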

[Error: SFTP session ended early]

I keep getting this error. I am actually using gulp-ssh, but the problem turned out to come from one of the files in this repo.

[Error: SFTP session ended early]
_stream_writable.js:313
    cb(er);
    ^
TypeError: object is not a function
    at onwriteError (_stream_writable.js:313:5)
    at onwrite (_stream_writable.js:335:5)
    at WritableState.onwrite (_stream_writable.js:105:5)
    at /Users/nico/Work/nico/node_modules/gulp-ssh/node_modules/ssh2/node_modules/ssh2-streams/lib/sftp.js:2753:14
    at state.requests.(anonymous function).cb (/Users/nico/Work/nico/node_modules/gulp-ssh/node_modules/ssh2/node_modules/ssh2-streams/lib/sftp.js:961:15)
    at SFTPStream._cleanup (/Users/nico/Work/nico/node_modules/gulp-ssh/node_modules/ssh2/node_modules/ssh2-streams/lib/sftp.js:191:38)
    at SFTPStream.end (/Users/nico/Work/nico/node_modules/gulp-ssh/node_modules/ssh2/node_modules/ssh2-streams/lib/sftp.js:160:8)
    at SFTPWrapper.end (/Users/nico/Work/nico/node_modules/gulp-ssh/node_modules/ssh2/lib/SFTPWrapper.js:29:23)
    at end (/Users/nico/Work/nico/node_modules/gulp-ssh/index.js:253:18)
    at DestroyableTransform._flush (/Users/nico/Work/nico/node_modules/gulp-ssh/index.js:286:5)

After playing around with it for a few hours I found that if I comment out

SFTPStream.prototype._cleanup = function(callback) {
  var state = this._state;

  state.pktBuf = undefined; // give GC something to do

  var requests = state.requests,
      keys = Object.keys(requests),
      len = keys.length;
  if (len) {
    var err = new Error('SFTP session ended early');
    for (var i = 0, cb; i < len; ++i)
      (cb = requests[keys[i]].cb) && cb(err);
  }

  if (this.readable)
    this.push(null);
  if (callback !== false) {
    this.debug('DEBUG[SFTP]: Parser: Malformed packet');
    callback && callback(new Error('Malformed packet'));
  }
};

like this:

SFTPStream.prototype._cleanup = function(callback) {
  var state = this._state;

  state.pktBuf = undefined; // give GC something to do

  var requests = state.requests,
      keys = Object.keys(requests),
      len = keys.length;
  if (len) {
    var err = new Error('SFTP session ended early');
    // for (var i = 0, cb; i < len; ++i)
      // (cb = requests[keys[i]].cb) && cb(err);
  }

  if (this.readable)
    this.push(null);
  if (callback !== false) {
    this.debug('DEBUG[SFTP]: Parser: Malformed packet');
    callback && callback(new Error('Malformed packet'));
  }
};

It works.
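For what it's worth, the TypeError ('object is not a function') suggests one of the queued requests stored a cb that is not callable, so commenting out the flush loop only hides the bad entry and silently drops the 'SFTP session ended early' notifications. A more targeted guard could look like the following standalone model of the loop (a sketch, not the upstream code):

```javascript
// Flush pending request callbacks with an error, but only invoke
// entries whose cb is actually a function, instead of assuming every
// queued request stored one.
function flushRequests(requests, err) {
  Object.keys(requests).forEach(function (key) {
    var cb = requests[key].cb;
    if (typeof cb === 'function')
      cb(err);
  });
}
```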

SSH Client and Server crash if stream identifier ends with space

Initially thought it was an ssh2 module issue, but tracked it to this repo. Copied the issue info from there:

I played around a bit with setting the ident of the ssh server:

  • with no value set: no issues; the default server software name/version identifier is displayed.
  • with any non-empty string: that string is displayed to the client as the software version.
  • with an empty string '': the default value is displayed.
  • with a string ending in a space, such as 'a ': the default ssh client works fine and displays an empty string as the remote software version, but when the ssh2 node module is used as the client, both the server and the client crash.

Outputs for ident set as 'a '

with default linux ssh client:

$ ssh -p 2201  foo@localhost -v
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to localhost [127.0.0.1] port 2201.
debug1: Connection established.
debug1: identity file /home/mido/.ssh/id_rsa type 1
debug1: identity file /home/mido/.ssh/id_rsa-cert type -1
debug1: identity file /home/mido/.ssh/id_dsa type -1
debug1: identity file /home/mido/.ssh/id_dsa-cert type -1
debug1: identity file /home/mido/.ssh/id_ecdsa type -1
debug1: identity file /home/mido/.ssh/id_ecdsa-cert type -1
debug1: identity file /home/mido/.ssh/id_ed25519 type -1
debug1: identity file /home/mido/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8
debug1: Remote protocol version 2.0, remote software version  
debug1: no match:  
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-sha1 none
debug1: kex: client->server aes128-ctr hmac-sha1 none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
...

But when connected with ssh2 node module as client:

$ node client.js 
events.js:182
      throw er; // Unhandled 'error' event
      ^

Error: Handshake failed: signature verification failed
    at onKEXDH_REPLY (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:2903:15)
    at SSH2Stream.<anonymous> (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:215:45)
    at emitOne (events.js:115:13)
    at SSH2Stream.emit (events.js:210:7)
    at parse_KEXDH_REPLY (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:4209:8)
    at parse_KEX (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:4171:14)
    at parsePacket (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:4006:12)
    at SSH2Stream._transform (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:667:13)
    at SSH2Stream.Transform._read (_stream_transform.js:190:10)
    at SSH2Stream._read (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:251:15)

And server crash output:

$ node server.js 
Client connected!
events.js:182
      throw er; // Unhandled 'error' event
      ^

Error: KEY_EXCHANGE_FAILED
    at onDISCONNECT (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:2197:15)
    at SSH2Stream.<anonymous> (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:203:5)
    at emitMany (events.js:146:13)
    at SSH2Stream.emit (events.js:223:7)
    at parsePacket (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:3743:10)
    at SSH2Stream._transform (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:667:13)
    at SSH2Stream.Transform._read (_stream_transform.js:190:10)
    at SSH2Stream._read (/mnt/genji/middleEarth/ssh-test/node_modules/ssh2-streams/lib/ssh.js:251:15)
    at SSH2Stream.Transform._write (_stream_transform.js:178:12)
    at doWrite (_stream_writable.js:371:12)

Client code:

var Client = require('ssh2').Client
 
var conn = new Client()
conn.on('ready', () => {
  console.log('Client :: ready')
  conn.end();
}).connect({
  host: 'localhost',
  port: 2201,
  username: 'foo',
  password: 'bar'
})

Server code:

var ssh2 = require('ssh2')
var utils = ssh2.utils
var fs = require('fs')
var pubKey = utils.genPublicKey(utils.parseKey(fs.readFileSync('testSSH.pub')))
 
new ssh2.Server({
  ident: ' ',
  hostKeys: [fs.readFileSync('testSSH')]
}, function(client) {
  console.log('Client connected!'); 
  client
    .on('end', () =>  console.log('Client disconnected'))
    .on('authentication', ctx => ctx.accept())
    .on('ready', function() {
      console.log('Client authenticated!');
      client.on('session', accept => accept())
    })
}).listen(2201, '127.0.0.1')
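One plausible reading of the crash (an observation, not a confirmed diagnosis): RFC 4253 defines the identification line as SSH-protoversion-softwareversion, optionally followed by a space and comments, and both sides feed the exact identification strings into the key-exchange hash. If the two implementations disagree about where the software version ends when it is empty or ends in a space, the hashes differ and signature verification fails, matching the traces above. Until the parser tolerates this, trimming the ident before handing it to the server avoids the crash (a workaround sketch, not the upstream fix):

```javascript
// Strip trailing whitespace from a server ident so the identification
// line stays a well-formed "SSH-2.0-<softwareversion>" string.
function sanitizeIdent(ident) {
  return String(ident).replace(/\s+$/, '');
}

// Hypothetical usage:
// new ssh2.Server({ ident: sanitizeIdent(userSuppliedIdent), hostKeys: [...] }, ...)
```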

Obscure error messages with incorrect private key passphrases

Given an encrypted private key and an incorrect passphrase, obscure error messages are thrown from the node-asn1 library:

Long, non-empty, incorrect passphrase

InvalidAsn1Error: encoding too long
    at newInvalidAsn1Error (/Users/k/p/lagoon/cli/node_modules/asn1/lib/ber/errors.js:7:13)
    at Reader.readLength (/Users/k/p/lagoon/cli/node_modules/asn1/lib/ber/reader.js:102:13)
    at Reader.readSequence (/Users/k/p/lagoon/cli/node_modules/asn1/lib/ber/reader.js:135:16)
    at genPublicKey (/Users/k/p/lagoon/cli/node_modules/ssh2-streams/lib/utils.js:429:19)
    at Client.connect (/Users/k/p/lagoon/cli/node_modules/ssh2/lib/client.js:239:29)
    at /Users/k/p/lagoon/cli/src/util/ssh.js:42:16
    at Promise (<anonymous>)
    at sshConnect$ (/Users/k/p/lagoon/cli/src/util/ssh.js:31:10)
    at tryCatch (/Users/k/p/lagoon/cli/node_modules/regenerator-runtime/runtime.js:65:40)
    at Generator.invoke [as _invoke] (/Users/k/p/lagoon/cli/node_modules/regenerator-runtime/runtime.js:303:22)

Empty, incorrect passphrase

InvalidAsn1Error: Expected 0x2: got 0xa3
    at newInvalidAsn1Error (/Users/k/p/cli/node_modules/asn1/lib/ber/errors.js:7:13)
    at Reader._readTag (/Users/k/p/cli/node_modules/asn1/lib/ber/reader.js:229:11)
    at Reader.readInt (/Users/k/p/cli/node_modules/asn1/lib/ber/reader.js:145:15)
    at genPublicKey (/Users/k/p/cli/node_modules/ssh2-streams/lib/utils.js:437:19)
    at Client.connect (/Users/k/p/cli/node_modules/ssh2/lib/client.js:239:29)
    at /Users/k/p/cli/src/util/ssh.js:41:16
    at Promise (<anonymous>)
    at sshConnect$ (/Users/k/p/cli/src/util/ssh.js:30:10)
    at tryCatch (/Users/k/p/cli/node_modules/regenerator-runtime/runtime.js:65:40)
    at Generator.invoke [as _invoke] (/Users/k/p/cli/node_modules/regenerator-runtime/runtime.js:303:22)

This appears to be as a result of asnReader.readSequence() and asnReader.readInt() throwing here and here and not being caught in lib/utils.js.

Would it make sense to wrap these (and maybe other calls to node-asn1 methods) in try / catch blocks?
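A try/catch wrapper seems reasonable. As a sketch of the shape it could take (the wrapper and its error message are made up for illustration, not ssh2-streams API):

```javascript
// Run a key-parsing function and convert low-level ASN.1 exceptions
// into an actionable error, since a mis-decrypted key usually means
// the passphrase was wrong.
function parseKeySafe(parse, keyData) {
  try {
    return parse(keyData);
  } catch (err) {
    throw new Error('Failed to parse key (wrong passphrase?): ' + err.message);
  }
}
```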

sftp send file bug?

Hi,

I have a problem uploading big files over SFTP: the sftp.writeFile callback takes far too long to fire. I debugged sftp.js and saw that when self.writeData calls itself recursively, the last callback is cb(undefined, len); but len is not the total length in this case, because it was clamped by if (overflow) len = state.maxDataLen;

I found a solution that works for me; please look at this patch:

Index: node_modules/ssh2/node_modules/ssh2-streams/lib/sftp.js
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
--- node_modules/ssh2/node_modules/ssh2-streams/lib/sftp.js (date 1449276186000)
+++ node_modules/ssh2/node_modules/ssh2-streams/lib/sftp.js (date 1449280838000)
@@ -939,7 +939,7 @@
                        origPosition + len,
                        cb);
       } else
-        cb && cb(undefined, len);
+        cb && cb(undefined, off + len)
     }
   };
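The accounting that the patch fixes can be modeled in isolation: each recursive pass may be clamped to state.maxDataLen, so the final callback must report the cumulative total (off + len) rather than just the length of the last chunk. A stripped-down sketch with no actual I/O (the names mirror the patch, but this is not the library code):

```javascript
// Walk a buffer in chunks of at most maxDataLen bytes, recursing with
// an advanced offset, and report the cumulative byte count at the end.
function writeAll(buffer, maxDataLen, off, cb) {
  var len = Math.min(buffer.length - off, maxDataLen);
  var overflow = off + len < buffer.length;
  if (overflow)
    return writeAll(buffer, maxDataLen, off + len, cb);
  cb(undefined, off + len); // cumulative total: the patched behavior
}
```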

adding new cipher algorithms

I'm trying to connect to an SSH server with the 'ssh2' module, but the server's ciphers do not match any of the ciphers in ssh2-streams. Here are the ssh session logs:


+LiveParser:DEBUG: Outgoing: Writing DISCONNECT (KEY_EXCHANGE_FAILED)

+LiveParser:DEBUG: No matching Client->Server cipher

+LiveParser:DEBUG: (remote) Client->Server ciphers: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour,aes192-cbc,aes256-cbc

+LiveParser:DEBUG: (local) Client->Server ciphers: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm,aes128-gcm@openssh.com,aes256-gcm,aes256-gcm@openssh.com

What should I do to add new cipher algorithms to ssh2-streams?
How could I enable those ciphers in my app?
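ssh2's connect() does take an algorithms option that overrides the default lists, and ssh2-streams implements several CBC ciphers that are merely not offered by default, so adding the server's ciphers to the offer should let negotiation succeed. (A cipher that ssh2-streams does not implement at all cannot be enabled this way.) A sketch, with host and credentials as placeholders:

```javascript
// Build a connect() config whose cipher offer includes the CBC modes
// advertised by the remote server in the log above.
function buildConnectConfig(host, username, password) {
  return {
    host: host,
    port: 22,
    username: username,
    password: password,
    algorithms: {
      cipher: ['aes128-ctr', 'aes128-cbc', 'aes192-cbc', 'aes256-cbc', '3des-cbc']
    }
  };
}

// Hypothetical usage:
// new (require('ssh2').Client)().connect(buildConnectConfig('example.com', 'user', 'pass'));
```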

API documentation references ssh2 instead of ssh2-streams

The API section of the README says:

require('ssh2').SSH2Stream returns an SSH2Stream constructor.
require('ssh2').SFTPStream returns an SFTPStream constructor.
require('ssh2').utils returns an object of useful utility functions.
require('ssh2').constants returns an object containing useful SSH protocol constants.

When it should say:

require('ssh2-streams')

Inconsistent error "Invalid HMAC"

I have a server that opens 3 connections to 3 different clients and runs a command. Randomly the server will throw the following error:

ERROR Error: Invalid HMAC
  at SSH2Stream._transform (.../node_modules/ssh2-streams/lib/ssh.js:543:25)
  at SSH2Stream.Transform._read (_stream_transform.js:167:10)
  at SSH2Stream._read (.../node_modules/ssh2-streams/lib/ssh.js:236:15)
  at SSH2Stream.Transform._write (_stream_transform.js:155:12)
  at doWrite (_stream_writable.js:307:12)
  at writeOrBuffer (_stream_writable.js:293:5)
  at SSH2Stream.Writable.write (_stream_writable.js:220:11)
  at Socket.ondata (_stream_readable.js:556:20)
  at emitOne (events.js:96:13)
  at Socket.emit (events.js:188:7)
  at readableAddChunk (_stream_readable.js:177:18)
  at Socket.Readable.push (_stream_readable.js:135:10)
  at TCP.onread (net.js:542:20)

It doesn't happen on every run, nor on all servers at the same time (sometimes 0, 1, 2, or all 3 fail). The command is long-running (5-10 minutes) with a lot of output (thousands of lines), and logging shows the failure occurs at a random point in the command's output each time.

Server:

# node --version
v6.4.0
# ssh -V
OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8, OpenSSL 1.0.1f 6 Jan 2014
# npm ls ssh2-streams
...
└─┬ [email protected]
  └── [email protected]

Clients:

# ssh -V
OpenSSH_5.3p1, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
# ssh -V
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
# ssh -V
OpenSSH_6.6.1p1, OpenSSL 1.0.1e-fips 11 Feb 2013

isDirectory() and related functions don't work in node >= 6.3.0

Calling isDirectory(), or any related function, always returns false.

The issue is due to a change in node 6.3.0 that reorganized the constants returned by process.binding('constants').

Before node 6.3.0:

constants { .... S_IFMT: 61440, S_IFREG: 32768, S_IFDIR: 16384, S_IFCHR: 8192, S_IFBLK: 24576, S_IFIFO: 4096, S_IFLNK: 40960, ... }

In node >= 6.3.0, the same values are nested under an fs sub-object:

constants { ... fs: { O_RDONLY: 0, O_WRONLY: 1, O_RDWR: 2, S_IFMT: 61440, S_IFREG: 32768, S_IFDIR: 16384, S_IFCHR: 8192, S_IFBLK: 24576, S_IFIFO: 4096, S_IFLNK: 40960, S_IFSOCK: 49152, ... } ... }

Here is the offending code in ssh2-streams/lib/sftp.js:

var constants = process.binding('constants');

I verified that changing it to a require, as below, resolves the issue:

var constants = require('constants');

Invalid Packet Length when connecting with Filezilla

Hello,

I'm currently working on implementing an SFTP server using this project: https://github.com/BriteVerify/node-sftp-server

Whenever I connect with the command line sftp client everything works, but when I try to connect with Filezilla, I get this:

events.js:160
      throw er; // Unhandled 'error' event
      ^

Error: Bad packet length
    at SSH2Stream._transform (/Users/kevin/Development/projects/open-source/node-sftp-server/node_modules/ssh2/node_modules/ssh2-streams/lib/ssh.js:406:25)
    at SSH2Stream.Transform._read (_stream_transform.js:167:10)
    at SSH2Stream._read (/Users/kevin/Development/projects/open-source/node-sftp-server/node_modules/ssh2/node_modules/ssh2-streams/lib/ssh.js:212:15)
    at SSH2Stream.Transform._write (_stream_transform.js:155:12)
    at doWrite (_stream_writable.js:328:12)
    at writeOrBuffer (_stream_writable.js:314:5)
    at SSH2Stream.Writable.write (_stream_writable.js:241:11)
    at Socket.ondata (_stream_readable.js:555:20)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:188:7)
    at readableAddChunk (_stream_readable.js:176:18)
    at Socket.Readable.push (_stream_readable.js:134:10)
    at TCP.onread (net.js:548:20)

It's entirely possible that Filezilla is sending some dodgy data; however, it would be good to dig into the detail of what's going on and see whether it's really something they're doing wrong or an error in the SSH2 stream.

I'm currently using:
OS: Mac OS X 10.12.1
Node: v6.8.0
Filezilla: 3.22.1
Authentication: Public Key
Algo: ssh-rsa

There was originally a bug filed with the downstream package but it's looking like it isn't something they're going to be able to solve because it's coming from the SSH2 stream.

Is there any other info I could give that would help debug this?

When downloading a large file over sftp, the 65536th byte is always dropped.

At a high level, this fix addresses an issue where, when downloading a large file (greater than 65535 bytes), the 65536th byte of each buffer is dropped, meaning zips etc. cannot be opened.

-- I have a fix for this (pull #42) that I've confirmed with my own project. Please take a look as quickly as possible.

The highest number should be 65535, not 65536. The reason is that we are dealing with unsigned integers, and 65535 in binary is 0b1111111111111111 (the full 16-bit integer with all 1s), while 65536 is one more: 0b10000000000000000.

When downloading large files, the buffer THINKs it stores 65536 bytes but actually only stores 65535, and the 65536th byte is lost for each buffer.

Correcting the value to 64*1024-1 (65535) makes sure that the highest number of bytes matches what the buffer can actually store.

#42

Question about createReadStream on legacy server giving disconnect packet

I've got the following code. It connects to a legacy server and readdir's. That part works. But when I try to createReadStream from a file, I get the error shown below.

  new Promise((resolve, reject) => {
    const conn = new ssh2.Client();
    let params = {
      host: process.env.SFTP_HOST,
      port: process.env.SFTP_PORT,
      username: process.env.SFTP_USER,
      password: process.env.SFTP_PASS,
      algorithms: {
        kex: ["diffie-hellman-group1-sha1"],
        cipher: ["aes128-cbc"],
        hmac: ["hmac-sha1"],
        serverHostKey: ["ssh-rsa"],
        compress: ["none"]
      }
    };

    conn.on('ready', function() {
      debug('SFTP Client :: ready');
      resolve(conn);
    })
    .on('error', (err) => {
      console.error(err);
      reject(err);
    })
    .connect(params);
  });

That works just fine for this legacy server. Then I do a readdir and get a list of files. This part also works. BUT then I do:

          debug(`Downloading File As: ${name}`);
          return new Promise((resolve, reject) => {
            sshFs.createReadStream(
              ssh,
              path.join(REMOTE_DIRECTORY, file),
              (err, readStream) => {
                if (err) {
                  return reject(err);
                }

                resolve(readStream);
              }
            )
          })

And that gives me my error: Unable to start subsystem: sftp

Here is a complete log with debug enabled:

  emaf Downloading File As: FATT_P0BCDNAF_20170309_052629.txt +1ms
DEBUG: Outgoing: Writing CHANNEL_OPEN (1, session)
DEBUG: Outgoing: Writing CHANNEL_EOF (0)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: Decrypting
DEBUG: Parser: pktLen:28,padLen:10,remainLen:16
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: Decrypting
DEBUG: Parser: HMAC size:20
DEBUG: Parser: IN_PACKETDATAVERIFY
DEBUG: Parser: Verifying MAC
DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC)
DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_OPEN_CONFIRMATION
TYPE IS 91
DEBUG: Outgoing: Writing CHANNEL_REQUEST (1, subsystem: sftp)
DEBUG: Parser: IN_PACKETBEFORE (expecting 16)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: Decrypting
DEBUG: Parser: pktLen:44,padLen:18,remainLen:32
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: Decrypting
DEBUG: Parser: HMAC size:20
DEBUG: Parser: IN_PACKETDATAVERIFY
DEBUG: Parser: Verifying MAC
DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC)
TYPE IS 98
DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_REQUEST (0, exit-status)
DEBUG: Parser: IN_PACKETBEFORE (expecting 16)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: Decrypting
DEBUG: Parser: pktLen:12,padLen:6,remainLen:0
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: HMAC size:20
DEBUG: Parser: IN_PACKETDATAVERIFY
DEBUG: Parser: Verifying MAC
DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC)
TYPE IS 97
DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_CLOSE (0)
DEBUG: Outgoing: Writing CHANNEL_CLOSE (0)
DEBUG: Parser: IN_PACKETBEFORE (expecting 16)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: Decrypting
DEBUG: Parser: pktLen:60,padLen:12,remainLen:48
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: Decrypting
DEBUG: Parser: HMAC size:20
DEBUG: Parser: IN_PACKETDATAVERIFY
DEBUG: Parser: Verifying MAC
DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC)
TYPE IS 1
======
PACKET: <Buffer 01 00 00 00 02 00 00 00 22 46 61 69 6c 65 64 20 74 6f 20 72 65 61 64 20 62 69 6e 61 72 79 20 70 61 63 6b 65 74 20 64 61 74 61 21 00 00 00 00>
DEBUG: Parser: IN_PACKETDATAAFTER, packet: DISCONNECT (PROTOCOL_ERROR)
{ Error: Failed to read binary packet data!
    at onDISCONNECT (/Projects/emaf/node_modules/ssh2-streams/lib/ssh.js:2197:15)
    at SSH2Stream.<anonymous> (/Projects/emaf/node_modules/ssh2-streams/lib/ssh.js:203:5)
    at emitMany (events.js:127:13)
    at SSH2Stream.emit (events.js:201:7)
    at parsePacket (/Projects/emaf/node_modules/ssh2-streams/lib/ssh.js:3745:10)
    at SSH2Stream._transform (/Projects/emaf/node_modules/ssh2-streams/lib/ssh.js:667:13)
    at SSH2Stream.Transform._read (_stream_transform.js:167:10)
    at SSH2Stream._read (/Projects/emaf/node_modules/ssh2-streams/lib/ssh.js:251:15)
    at SSH2Stream.Transform._write (_stream_transform.js:155:12)
    at doWrite (_stream_writable.js:307:12) code: 2, level: 'protocol' }
DEBUG: Outgoing: Writing CHANNEL_CLOSE (1)
  emaf Error: Error: Unable to start subsystem: sftp +179ms
^C

As you can see I added logs for the packet and type.

The issue is that this code works just fine on non-legacy SFTP servers. As you can see, I have supplied the older algorithms, which lets it connect and readdir on this legacy server, yet createReadStream errors. Is it possible that createReadStream needs more configuration?

I opened this issue but was redirected here.

Thanks!!

Error: Permission denied while uploading a file to a server

Hi,

I have a script that upload a file to an sftp server. It was working with [email protected] but it stopped working with [email protected].

This is the error:

events.js:141
      throw er; // Unhandled 'error' event
      ^

Error: Permission denied
    at SFTPStream._transform (/Users/luca.a.mugnaini/am/node_modules/ssh2/node_modules/ssh2-streams/lib/sftp.js:405:27)
    at SFTPStream.Transform._read (_stream_transform.js:167:10)
    at SFTPStream._read (/Users/luca.a.mugnaini/am/node_modules/ssh2/node_modules/ssh2-streams/lib/sftp.js:181:15)
    at SFTPStream.Transform._write (_stream_transform.js:155:12)
    at doWrite (_stream_writable.js:300:12)
    at writeOrBuffer (_stream_writable.js:286:5)
    at SFTPStream.Writable.write (_stream_writable.js:214:11)
    at Channel.ondata (_stream_readable.js:542:20)
    at emitOne (events.js:77:13)
    at Channel.emit (events.js:169:7)

I use a Mac with node v4.6.0, but I verified that the same problem also occurs on Windows.

Let me know if there is any other info that I can provide.

This is the debug log:

DEBUG: Local ident: 'SSH-2.0-ssh2js0.1.15'
DEBUG: Client: Trying xxxxxxx ...
DEBUG: Client: Connected
DEBUG: Parser: IN_INIT
DEBUG: Parser: IN_GREETING
DEBUG: Parser: IN_HEADER
DEBUG: Remote ident: 'SSH-2.0-mod_sftp/0.9.8'
DEBUG: Outgoing: Writing KEXINIT
DEBUG: Parser: IN_PACKETBEFORE (expecting 8)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: pktLen:636,padLen:9,remainLen:632
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: IN_PACKETDATAAFTER, packet: KEXINIT
DEBUG: Comparing KEXINITs ...
DEBUG: (local) KEX algorithms: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
DEBUG: (remote) KEX algorithms: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1,rsa1024-sha1
DEBUG: KEX algorithm: diffie-hellman-group-exchange-sha256
DEBUG: (local) Host key formats: ssh-rsa
DEBUG: (remote) Host key formats: ssh-rsa,ssh-dss
DEBUG: Host key format: ssh-rsa
DEBUG: (local) Client->Server ciphers: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm,aes128-gcm@openssh.com,aes256-gcm,aes256-gcm@openssh.com
DEBUG: (remote) Client->Server ciphers: aes256-ctr,aes192-ctr,aes128-ctr,aes256-cbc,aes192-cbc,aes128-cbc,blowfish-ctr,blowfish-cbc,cast128-cbc,arcfour256,arcfour128,3des-ctr,3des-cbc
DEBUG: Client->Server Cipher: aes128-ctr
DEBUG: (local) Server->Client ciphers: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm,aes128-gcm@openssh.com,aes256-gcm,aes256-gcm@openssh.com
DEBUG: (remote) Server->Client ciphers: aes256-ctr,aes192-ctr,aes128-ctr,aes256-cbc,aes192-cbc,aes128-cbc,blowfish-ctr,blowfish-cbc,cast128-cbc,arcfour256,arcfour128,3des-ctr,3des-cbc
DEBUG: Server->Client Cipher: aes128-ctr
DEBUG: (local) Client->Server HMAC algorithms: hmac-sha2-256,hmac-sha2-512,hmac-sha1
DEBUG: (remote) Client->Server HMAC algorithms: hmac-sha1,hmac-sha1-96,hmac-md5,hmac-md5-96,hmac-ripemd160
DEBUG: Client->Server HMAC algorithm: hmac-sha1
DEBUG: (local) Server->Client HMAC algorithms: hmac-sha2-256,hmac-sha2-512,hmac-sha1
DEBUG: (remote) Server->Client HMAC algorithms: hmac-sha1,hmac-sha1-96,hmac-md5,hmac-md5-96,hmac-ripemd160
DEBUG: Server->Client HMAC algorithm: hmac-sha1
DEBUG: (local) Client->Server compression algorithms: none,zlib@openssh.com,zlib
DEBUG: (remote) Client->Server compression algorithms: none
DEBUG: Client->Server compression algorithm: none
DEBUG: (local) Server->Client compression algorithms: none,zlib@openssh.com,zlib
DEBUG: (remote) Server->Client compression algorithms: none
DEBUG: Server->Client compression algorithm: none
DEBUG: Outgoing: Writing KEXDH_GEX_REQUEST
DEBUG: Parser: IN_PACKETBEFORE (expecting 8)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: pktLen:276,padLen:8,remainLen:272
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: IN_PACKETDATAAFTER, packet: KEXDH_GEX_GROUP
DEBUG: Outgoing: Writing KEXDH_GEX_INIT
DEBUG: Parser: IN_PACKETBEFORE (expecting 8)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: pktLen:828,padLen:10,remainLen:824
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: IN_PACKETDATAAFTER, packet: KEXDH_GEX_REPLY
DEBUG: Checking host key format
DEBUG: Checking signature format
DEBUG: Verifying host fingerprint
DEBUG: Host accepted by default (no verification)
DEBUG: Verifying signature
DEBUG: Outgoing: Writing NEWKEYS
DEBUG: Parser: IN_PACKETBEFORE (expecting 8)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: pktLen:12,padLen:10,remainLen:8
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: IN_PACKETDATAAFTER, packet: NEWKEYS
DEBUG: Outgoing: Writing SERVICE_REQUEST (ssh-userauth)
DEBUG: Parser: IN_PACKETBEFORE (expecting 16)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: Decrypting
DEBUG: Parser: pktLen:28,padLen:10,remainLen:16
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: Decrypting
DEBUG: Parser: HMAC size:20
DEBUG: Parser: IN_PACKETDATAVERIFY
DEBUG: Parser: Verifying MAC
DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC)
DEBUG: Parser: IN_PACKETDATAAFTER, packet: SERVICE_ACCEPT
DEBUG: Outgoing: Writing USERAUTH_REQUEST (password)
DEBUG: Parser: IN_PACKETBEFORE (expecting 16)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: Decrypting
DEBUG: Parser: pktLen:12,padLen:10,remainLen:0
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: HMAC size:20
DEBUG: Parser: IN_PACKETDATAVERIFY
DEBUG: Parser: Verifying MAC
DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC)
DEBUG: Parser: IN_PACKETDATAAFTER, packet: USERAUTH_SUCCESS
DEBUG: Outgoing: Writing CHANNEL_OPEN (0, session)
DEBUG: Parser: IN_PACKETBEFORE (expecting 16)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: Decrypting
DEBUG: Parser: pktLen:28,padLen:10,remainLen:16
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: Decrypting
DEBUG: Parser: HMAC size:20
DEBUG: Parser: IN_PACKETDATAVERIFY
DEBUG: Parser: Verifying MAC
DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC)
DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_OPEN_CONFIRMATION
DEBUG: Outgoing: Writing CHANNEL_REQUEST (0, subsystem: sftp)
DEBUG: Parser: IN_PACKETBEFORE (expecting 16)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: Decrypting
DEBUG: Parser: pktLen:12,padLen:6,remainLen:0
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: HMAC size:20
DEBUG: Parser: IN_PACKETDATAVERIFY
DEBUG: Parser: Verifying MAC
DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC)
DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_SUCCESS (0)
DEBUG: Parser: IN_PACKETBEFORE (expecting 16)
DEBUG: Outgoing: Writing CHANNEL_DATA (0)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: Decrypting
DEBUG: Parser: pktLen:140,padLen:12,remainLen:128
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: Decrypting
DEBUG: Parser: HMAC size:20
DEBUG: Parser: IN_PACKETDATAVERIFY
DEBUG: Parser: Verifying MAC
DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC)
DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (0)
DEBUG[SFTP]: Outgoing: Writing OPEN
DEBUG: Outgoing: Writing CHANNEL_DATA (0)
DEBUG: Parser: IN_PACKETBEFORE (expecting 16)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: Decrypting
DEBUG: Parser: pktLen:60,padLen:7,remainLen:48
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: Decrypting
DEBUG: Parser: HMAC size:20
DEBUG: Parser: IN_PACKETDATAVERIFY
DEBUG: Parser: Verifying MAC
DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC)
DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (0)
DEBUG[SFTP]: Parser: Response: STATUS

Invalid returned stats?

Hi,

When I do an sftp.stat on a path, the returned stats object has no isDirectory() method (nor any of the related methods mentioned in the docs). Is there something I'm missing?
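As a workaround while the helper methods are missing, the POSIX file-type bits in attrs.mode can be checked directly: mask with S_IFMT (61440) and compare against S_IFDIR (16384). A manual sketch, not the library's API:

```javascript
// True when the SFTP ATTRS mode bits describe a directory.
function isDirectoryAttrs(attrs) {
  return (attrs.mode & 61440) === 16384; // S_IFMT / S_IFDIR
}
```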

Library incorrectly assumes 'stat' result to contain a value for 'size'

The SFTP protocol allows a server to choose what information to return to a 'stat' command.

When talking to an SSH Tectia Server 5.5 for IBM z/OS and attempting to get a dataset, the server does not return the 'size'. This causes the operation to fail here:

buffer.js:275
  throw new TypeError(kFromErrorMsg);
  ^

TypeError: First argument must be a string, Buffer, ArrayBuffer, Array, or array-like object.
    at fromObject (buffer.js:275:9)
    at Function.Buffer.from (buffer.js:107:10)
    at new Buffer (buffer.js:86:17)
    at tryStat (/Users/fcarasso/Dev/cdr-merger/node_modules/ssh2-streams/lib/sftp.js:1246:16)
    at SFTPStream._transform (/Users/fcarasso/Dev/cdr-merger/node_modules/ssh2-streams/lib/sftp.js:496:15)
    at SFTPStream.Transform._read (_stream_transform.js:167:10)
(...)

I could quickly work around the issue by replacing line 1238:

size = st.size;

with

size = st.size || 0;
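More generally, size is an optional field in the SFTP ATTRS structure, so any consumer can read it defensively. A small sketch generalizing the workaround above (the helper name is made up):

```javascript
// Return a usable byte count from SFTP ATTRS, defaulting to 0 when the
// server omitted the optional size field.
function attrSize(attrs) {
  return (attrs && typeof attrs.size === 'number') ? attrs.size : 0;
}
```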

SFTP Error: java.lang.NullPointerException

I'm using sftp.fastPut, and with files over a few KB I routinely get the following exception thrown:

Error: java.lang.NullPointerException
    at SFTPStream._transform (/Users/adam/Projects/Emarsys/nodeETL/node_modules/ssh2-streams/lib/sftp.js:405:27)
    at SFTPStream.Transform._read (_stream_transform.js:167:10)
    at SFTPStream._read (/Users/adam/Projects/Emarsys/nodeETL/node_modules/ssh2-streams/lib/sftp.js:181:15)
    at SFTPStream.Transform._write (_stream_transform.js:155:12)
    at doWrite (_stream_writable.js:332:12)
    at clearBuffer (_stream_writable.js:439:7)
    at onwrite (_stream_writable.js:371:7)
    at afterTransform (_stream_transform.js:79:3)
    at TransformState.afterTransform (_stream_transform.js:54:12)
    at SFTPStream._transform (/Users/adam/Projects/Emarsys/nodeETL/node_modules/ssh2-streams/lib/sftp.js:745:3)
    at SFTPStream.Transform._read (_stream_transform.js:167:10)
    at SFTPStream._read (/Users/adam/Projects/Emarsys/nodeETL/node_modules/ssh2-streams/lib/sftp.js:181:15)
    at SFTPStream.Readable.read (_stream_readable.js:348:10)
    at flow (_stream_readable.js:761:34)
    at Channel.<anonymous> (_stream_readable.js:623:7)
    at emitNone (events.js:86:13) code: 4, lang: '' }

I can't figure out what about the file or code is causing the exception, as the stack trace is not very helpful. Here's the relevant bit of code:

return new Promise(function(resolve,reject) {
    conn.on('ready', function(){
        console.log('connected to upload server');
        conn.sftp(function(err,sftp) {
            if(err) return reject(err);
            console.log('sftp started on upload server');

            sftp.fastPut(get,put,function(err) {
                sftp.end();
                conn.end();
                if(err) return reject(err);
                console.log('upload callback');
                console.timeEnd('promise');
                resolve({
                    local: get,
                    remote: put
                });
            })
        });
    }).connect(self.credentials);
});

Any ideas or suggestions on what might be causing the issue? It seems to occur on the third _write iteration, as two chunks are written and the third causes the exception.

Uncaught range error

OS: Windows 10 Pro version 1607
Atom's Node version: 6.3.0
Atom's version (not sure if relevant): 1.12.1
Getting this stack trace by using the sync local <- remote feature of https://github.com/mgrenier/remote-ftp version 0.9.4 on a large directory

buffer.js:8 Uncaught RangeError: 
Array buffer allocation failed
FastBuffer @ buffer.js:8
createUnsafeBuffer @ buffer.js:33
allocate @ buffer.js:176
Buffer.allocUnsafe @ buffer.js:136
Buffer @ buffer.js:73
fastXfer @ C:\Users\zaid\.atom\packages\remote-ftp\node_modules\ssh2-streams\lib\sftp.js:1003
SFTPStream.fastGet @ C:\Users\zaid\.atom\packages\remote-ftp\node_modules\ssh2-streams\lib\sftp.js:1128
SFTPWrapper.fastGet @ C:\Users\zaid\.atom\packages\remote-ftp\node_modules\ssh2\lib\SFTPWrapper.js:51
(anonymous function) @ C:\Users\zaid\.atom\packages\remote-ftp\lib\connectors\sftp.js:228
SFTPStream._transform @ C:\Users\zaid\.atom\packages\remote-ftp\node_modules\ssh2-streams\lib\sftp.js:496
Transform._read @ _stream_transform.js:167
SFTPStream._read @ C:\Users\zaid\.atom\packages\remote-ftp\node_modules\ssh2-streams\lib\sftp.js:181
Transform._write @ _stream_transform.js:155
doWrite @ _stream_writable.js:307
writeOrBuffer @ _stream_writable.js:293
Writable.write @ _stream_writable.js:220
ondata @ _stream_readable.js:556
emitOne @ events.js:96
emit @ events.js:188
readableAddChunk @ _stream_readable.js:177
Readable.push @ _stream_readable.js:135
(anonymous function) @ C:\Users\zaid\.atom\packages\remote-ftp\node_modules\ssh2\lib\Channel.js:166
emitOne @ events.js:96
emit @ events.js:188
parsePacket @ C:\Users\zaid\.atom\packages\remote-ftp\node_modules\ssh2-streams\lib\ssh.js:3400
SSH2Stream._transform @ C:\Users\zaid\.atom\packages\remote-ftp\node_modules\ssh2-streams\lib\ssh.js:665
Transform._read @ _stream_transform.js:167
SSH2Stream._read @ C:\Users\zaid\.atom\packages\remote-ftp\node_modules\ssh2-streams\lib\ssh.js:249
Transform._write @ _stream_transform.js:155
doWrite @ _stream_writable.js:307
writeOrBuffer @ _stream_writable.js:293
Writable.write @ _stream_writable.js:220
ondata @ _stream_readable.js:556
emitOne @ events.js:96
emit @ events.js:188
readableAddChunk @ _stream_readable.js:177
Readable.push @ _stream_readable.js:135
onread @ net.js:542

Where can I get the status of command output when using shell mode?

I'm developing an SSH client based on ssh2 in the browser with node-webkit (a Node.js/Chromium runtime), but I find it hard to deal with the output from stream.on('data', callback) because I can't get any status for it.
When I press the up arrow, I reconstruct the command from the output myself, which is fine; but when I run a command like "sudo apt-get update", I can't tell when it's done. The output also prints as " [Waiting for headers] 100% [Waiting for headers] 100% [Waiting for headers] 100%" because it arrives in multiple chunks.
Any suggestions?
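One common workaround sketch (not an ssh2 API, just shell plumbing): append an echo of a unique sentinel after each command and watch the shell output for it, so you can tell when the command has finished.

```javascript
// Sketch: run a command on an interactive shell stream and invoke
// `done` with its output once a unique sentinel is echoed back.
function runCommand(stream, command, done) {
  const sentinel = '__CMD_DONE_' + Date.now() + '__';
  let buffered = '';
  function onData(chunk) {
    buffered += chunk.toString('utf8');
    const idx = buffered.indexOf(sentinel);
    if (idx !== -1) {
      stream.removeListener('data', onData);
      done(buffered.slice(0, idx)); // everything before the sentinel
    }
  }
  stream.on('data', onData);
  stream.write(command + ' ; echo ' + sentinel + '\n');
}
```

This won't survive commands that swallow stdin or clear the screen, but it is enough to detect completion of things like "sudo apt-get update".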

How to set Type in Attrs()?

If I'm reading section 7 of the SFTP spec correctly (and I may not be... I'm very new to SFTP internals), shouldn't the Attrs structure always include a type byte? I don't see it in the docs and I don't see it in the code.

Specifically, I'm implementing a server, and I'm trying to implement REALPATH & STAT so that a client can execute 'cd /server/path' successfully. When I try that from my client (Mac OS X - /usr/bin/sftp), I see the REALPATH and STAT events on the server, but the client reports, "Can't change directory: "/server/path" is not a directory". I'm guessing that's because the Attrs structure returned by STAT doesn't include the SSH_FILEXFER_TYPE_DIRECTORY value in the type byte.
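If it helps: in SFTPv3, the protocol version this library implements, the ATTRS structure has no separate type byte (the type-byte layout in section 7 belongs to later drafts). In v3 the file type travels in the permissions bits, so a directory's attrs might be built like this (a sketch with illustrative values):

```javascript
// SFTPv3 ATTRS sketch: the directory flag is the S_IFDIR bit of the
// numeric mode/permissions field, not a dedicated type byte.
const S_IFDIR = 0o040000;

function makeDirAttrs(perms) {
  const now = Math.floor(Date.now() / 1000);
  return {
    mode: S_IFDIR | perms, // e.g. 0o700 -> drwx------
    size: 0,
    atime: now,
    mtime: now,
  };
}
```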

Error: Bad packet length thrown from SSH2Stream._transform

Hey Brian,

I'm running into this error occasionally when forwarding data through an accept stream returned from the ssh2 module's client.on("tcp connection", ...) call. You'll notice that some of the debug below comes from my module, but hopefully it doesn't obscure anything. The scenario below is the result of uploading a ~935Kb file (I use ssh2 to set up a reverse proxy) - you can see the initial POST kicks things off, and every REMOTE --> LOCAL line is emitted when the reading stream receives data:

  finch:core:tunnel Issuing request: POST / +1ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKET +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: pktLen:2924,padLen:18,remainLen:2912 +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATA +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: HMAC size:16 +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY +1ms
  finch:core:tunnel DEBUG: Parser: Verifying MAC +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (2) +0ms
  finch:core:tunnel REMOTE --> LOCAL +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKET +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: pktLen:2924,padLen:18,remainLen:2912 +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATA +1ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: HMAC size:16 +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY +0ms
  finch:core:tunnel DEBUG: Parser: Verifying MAC +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (2) +0ms
  finch:core:tunnel REMOTE --> LOCAL +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKET +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: pktLen:2924,padLen:18,remainLen:2912 +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATA +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: HMAC size:16 +1ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY +0ms
  finch:core:tunnel DEBUG: Parser: Verifying MAC +1ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (2) +1ms
  finch:core:tunnel REMOTE --> LOCAL +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKET +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: pktLen:1468,padLen:10,remainLen:1456 +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATA +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: HMAC size:16 +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY +0ms
  finch:core:tunnel DEBUG: Parser: Verifying MAC +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (2) +0ms
  finch:core:tunnel REMOTE --> LOCAL +1ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKET +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: pktLen:2924,padLen:18,remainLen:2912 +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATA +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: HMAC size:16 +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY +0ms
  finch:core:tunnel DEBUG: Parser: Verifying MAC +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (2) +1ms
  finch:core:tunnel REMOTE --> LOCAL +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKET +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: pktLen:2924,padLen:18,remainLen:2912 +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATA +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Parser: HMAC size:16 +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY +0ms
  finch:core:tunnel DEBUG: Parser: Verifying MAC +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (2) +1ms
  finch:core:tunnel REMOTE --> LOCAL +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms
  finch:core:tunnel DEBUG: Parser: IN_PACKET +0ms
  finch:core:tunnel DEBUG: Parser: Decrypting +0ms
  finch:core:tunnel DEBUG: Outgoing: Writing DISCONNECT (PROTOCOL_ERROR) +0ms
  finch:core:tunnel DEBUG: Parser: Bad packet length (65584) +0ms

Error: Bad packet length
  at SSH2Stream._transform (/home/nick/code/node/finch/client/core/node_modules/ssh2/node_modules/ssh2-streams/lib/ssh.js:412:25)
  at SSH2Stream.Transform._read [as __read] (_stream_transform.js:179:10)
  at SSH2Stream._read (/home/nick/code/node/finch/client/core/node_modules/ssh2/node_modules/ssh2-streams/lib/ssh.js:213:15)
  at SSH2Stream.Transform._write (_stream_transform.js:167:12)
  at doWrite (_stream_writable.js:226:10)
  at writeOrBuffer (_stream_writable.js:216:5)
  at SSH2Stream.Writable.write (_stream_writable.js:183:11)
  at write (_stream_readable.js:582:24)
  at flow (_stream_readable.js:591:7)
  at Socket.pipeOnReadable (_stream_readable.js:623:5)
  at Socket.EventEmitter.emit (events.js:92:17)
  at emitReadable_ (_stream_readable.js:407:10)
  at emitReadable (_stream_readable.js:403:5)
  at readableAddChunk (_stream_readable.js:165:9)
  at Socket.Readable.push (_stream_readable.js:127:10)
  at TCP.onread (net.js:528:21)

I can supply a source code snippet if it helps but all that's happening is the accept stream is being piped into a local net.Socket, and that socket is being piped back into the accept stream. The code in my wrapper module hasn't changed much for a year or so, and indeed while debugging things today I swapped back to ssh2 0.3.3 (i.e. pre rewrite / ssh2-streams split) and the same scenario and file works reliably every time.

I must admit the inner workings of the ssh2-streams module are a bit beyond my limited knowledge of the protocol :). What I have found is that each stream's maxPacketSize is (effectively) hard-coded to 35000 bytes; although it is an optional constructor param of each stream, users of the ssh2 module can't configure the value which is passed in:

https://github.com/mscdex/ssh2-streams/blob/master/lib/ssh.js#L104

Initially I noticed that the bad packet was suspiciously close to the maximum TCP packet size of 64KB (albeit with some padding which I put down to SSH2 protocol padding), so I manually doubled MAX_PACKET_SIZE to 70000. That alone wasn't enough due to:

https://github.com/mscdex/ssh2-streams/blob/master/lib/ssh.js#L3159-L3162

Doubling that figure in tandem with a larger max packet size solved the issue. Just to humour myself I tried MAX_CHAN_DATA_LEN again with a value of (32768*2)-1 and sure enough tripped the limit again.

As you can tell, I don't know enough to pose the correct solution but wanted to outline the problem with as much information as I could. For what it's worth, and although I can't repeat it again now, I also saw the issue occur with a much larger final packet of around 750Kb after several 16Kb packets. I realise without debug output that's anecdotal, and has only happened maybe twice out of 100 or so attempts, but it did happen (honest). I manually tallied the total packet sizes in that instance and they matched up to the size of the file being uploaded, so I'm fairly confident what I saw was correct. After a bit more digging I do believe that's perfectly possible, since a data event isn't necessarily coupled 1:1 with an underlying TCP packet.

One thing which might be relevant (or not): the call to TransformStream.push with the big packet returns false, i.e. the stream is starting to be buffered at this point. I know one can continue to write even in such a situation, but just trying to supply as much information as possible.

Anything else you need or want me to debug more specifically, please do just let me know.

All the best, and thanks again for your great work :)

Nick

fastPut step 'total_transferred' always the same as 'total'

Hello. Looking at the docs for sftp client fastPut:

  • fastPut(< string >localPath, < string >remotePath[, < object >options], < function >callback) - (void) - Uploads a file from localPath to remotePath using parallel reads for faster throughput. options can have the following properties:

    concurrency - integer - Number of concurrent reads Default: 25

    chunkSize - integer - Size of each read in bytes Default: 32768

    step - function(< integer >total_transferred, < integer >chunk, < integer >total) - Called every time a part of a file was transferred

callback has 1 parameter: < Error >err.

  1. I'm guessing it should be 'writes' where it says 'reads'.
  2. I'm trying it as follows:
sftp.fastPut(localPath, remotePath, {
    step: function (totalTx, chunk, total) {
        console.log('uploadFile.step totalTx', totalTx, 'chunk', chunk, 'total', total);
    }
}, function (err) {
    if (err) throw err;
});

and getting the following output for a file of 106614 bytes:

uploadFile.step totalTx 106614 chunk 32768 total 106614
uploadFile.step totalTx 106614 chunk 32768 total 106614
uploadFile.step totalTx 106614 chunk 32768 total 106614
uploadFile.step totalTx 106614 chunk 8310 total 106614

The "totalTx" ("total_transferred") appears to be the same value as the "total"; is this correct? Is there no running total of "bytes transferred so far"? I can easily calculate it myself but then I'm wondering why there are two "total" parameters.
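Calculating the running total yourself might look like this (a sketch; it assumes `chunk` is the number of bytes moved by that particular write):

```javascript
// Sketch: derive a true running total inside `step` by summing the
// per-call chunk sizes, since `total_transferred` appears unreliable.
function makeProgressStep(onProgress) {
  let transferred = 0;
  return function step(totalTx, chunk, total) {
    transferred += chunk;
    onProgress(transferred, total);
  };
}

// usage: sftp.fastPut(local, remote, { step: makeProgressStep(log) }, cb)
```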

Thanks!

Cannot upload bigger file size or larger number of files

Stacktrace,

events.js:141
      throw er; // Unhandled 'error' event
      ^
 Error: Failure
    at SFTPStream._transform (F:\gulp-ssh\node_modules\ssh2\node_modules\ssh2-streams\lib\sftp.js:384:27)
    at SFTPStream.Transform._read (_stream_transform.js:167:10)
    at SFTPStream._read (F:\node_modules\gulp-ssh\node_modules\ssh2\node_modules\ssh2-streams\lib\sftp.js:170:15)
    at SFTPStream.Transform._write (_stream_transform.js:155:12)
    at doWrite (_stream_writable.js:300:12)
    at writeOrBuffer (_stream_writable.js:286:5)
    at SFTPStream.Writable.write (_stream_writable.js:214:11)
    at Channel.ondata (_stream_readable.js:542:20)
    at emitOne (events.js:77:13)
    at Channel.emit (events.js:169:7)

Related to this issue: teambition/gulp-ssh#51
Summary:

  1. When trying to upload more than 100 files, I get this error.
  2. When I try to upload a single file larger than 30 MB, I get this error.

Handshake failed: no matching host key format

Hi There,

I am hoping you can help me trace down this error coming from this lib. We have a service using this library to connect over SFTP, and when we moved the Docker container to another host, one of the SFTP accounts started failing and generating these errors. I can sftp directly from that node to the account in question, but it fails when using this library.

I am a bit unsure what would be causing this, so any direction you can point me in is much appreciated.

Error: Handshake failed: no matching host key format
    at check_KEXINIT (/var/www/node_modules/ssh2-streams/lib/ssh.js:2268:15)
    at check (/var/www/node_modules/ssh2-streams/lib/ssh.js:2179:9)
    at onKEXINIT (/var/www/node_modules/ssh2-streams/lib/ssh.js:2176:5)
    at SSH2Stream.<anonymous> (/var/www/node_modules/ssh2-streams/lib/ssh.js:204:39)
    at emitOne (events.js:77:13)
    at SSH2Stream.emit (events.js:169:7)
    at parse_KEXINIT (/var/www/node_modules/ssh2-streams/lib/ssh.js:4028:8)
    at parsePacket (/var/www/node_modules/ssh2-streams/lib/ssh.js:3927:12)
    at SSH2Stream._transform (/var/www/node_modules/ssh2-streams/lib/ssh.js:652:13)
    at SSH2Stream.Transform._read (_stream_transform.js:167:10)
    at SSH2Stream._read (/var/www/node_modules/ssh2-streams/lib/ssh.js:236:15)
    at SSH2Stream.Transform._write (_stream_transform.js:155:12)
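In case it helps narrow things down: ssh2's connect config accepts an `algorithms` object, so explicitly listing serverHostKey formats (together with a `debug` callback) can reveal which host key formats each side offers during KEXINIT. A sketch with placeholder host and credentials:

```javascript
// Passed to ssh2's client.connect(); host/credentials are placeholders.
const connectConfig = {
  host: 'sftp.example.com',
  port: 22,
  username: 'user',
  password: 'secret',
  algorithms: {
    // Explicitly offered host key formats, to compare against the
    // server's "(remote) Host key formats" debug line.
    serverHostKey: ['ssh-rsa', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384'],
  },
  debug: (msg) => console.log(msg), // logs the KEXINIT comparison
};
```

With `debug` enabled, the "(local)" and "(remote) Host key formats" lines should show exactly which side has no overlap.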

Corrupt json file

Attempting to download a JSON file from a Cerberus FTP server (version 5). Upon inspection, it appears that a quote is being dropped before the second key. I was able to pull the file with no issues on my linux install using sftp command and validate it as valid JSON.

While I understand this might be related to the server's protocol, it is in fact outside of my control. Any help would be greatly appreciated.

"stat":0.0905846153846,top_ftrxavg_wkc13":0.37284615384'

Attempting to JSON.parse the raw downloaded string results in the following error:

SyntaxError: Unexpected token t in JSON at position 65535

Is there any way to interrupt a transfer cleanly?

Hi, is there any way to stop a transfer in progress? I'm using sftp.fastPut with a progress dialog that contains a "stop" button. I can't find a way to cleanly terminate the put other than calling connection.end(), but I'm not sure whether that counts as a clean termination. I was considering adding a stop mechanism in the step callback so it would at least close the handles properly and invoke the error callback. Is there a better way that I'm missing?

Thanks.

installation complete or not?

$ npm install ssh2-streams
[email protected] /Users/sebastien.cheung/spark2acs
├── UNMET PEER DEPENDENCY [email protected]
└── [email protected]

npm WARN [email protected] requires a peer of react@^0.14.0 but none was installed.

Does it mean it is installed or not?

which ssh2-streams returns empty?

Because when doing a MeteorJS mupx setup:

$ mupx setup

Meteor Up: Production Quality Meteor Deployments

Configuration file : mup.json
Settings file : settings.json

Checkout Kadira!
It's the best way to monitor performance of your app.
Visit: https://kadira.io/mup

Started TaskList: Setup (linux)
[46.101.86.130] - Installing Docker
events.js:141
throw er; // Unhandled 'error' event
^

Error: All configured authentication methods failed
at tryNextAuth (/usr/local/lib/node_modules/mupx/node_modules/ssh2/lib/client.js:290:17)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/local/lib/node_modules/mupx/node_modules/ssh2/lib/client.js:469:5)
at emitTwo (events.js:87:13)
at SSH2Stream.emit (events.js:172:7)
at parsePacket (/usr/local/lib/node_modules/mupx/node_modules/ssh2-streams/lib/ssh.js:3647:10)
at SSH2Stream._transform (/usr/local/lib/node_modules/mupx/node_modules/ssh2-streams/lib/ssh.js:551:13)
at SSH2Stream.Transform._read (_stream_transform.js:167:10)
at SSH2Stream._read (/usr/local/lib/node_modules/mupx/node_modules/ssh2-streams/lib/ssh.js:212:15)
at SSH2Stream.Transform._write (_stream_transform.js:155:12)
at doWrite (_stream_writable.js:300:12)

mkdirp method

Hi, I was wondering whether there are any plans for a mkdirp method, or if there is already a way to get that functionality via the mkdir method (I'm using your ssh2 module)?
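Pending a built-in method, a mkdirp can be sketched on top of sftp.mkdir by creating each path segment in turn. The error-code handling here is an assumption: many servers report SSH_FX_FAILURE (code 4) for an already-existing directory, and a stricter version would stat() each segment first.

```javascript
// Sketch: mkdir -p built on sftp.mkdir, creating segments in order
// and ignoring "already exists" style failures.
function mkdirp(sftp, remotePath, cb) {
  const parts = remotePath.split('/').filter(Boolean);
  let current = remotePath.startsWith('/') ? '/' : '';
  (function next(i) {
    if (i >= parts.length) return cb(null);
    current += (current && !current.endsWith('/') ? '/' : '') + parts[i];
    sftp.mkdir(current, (err) => {
      // assumed: code 4 (FAILURE) is what many servers return when
      // the directory already exists
      if (err && err.code !== 4) return cb(err);
      next(i + 1);
    });
  })(0);
}
```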

Client hang after debug message "DEBUG: Parser: Decrypting"

The client hangs right after the debug message "DEBUG: Parser: Decrypting" appears, until the idle timeout fires and it disconnects. The only way I could track this down was to run the program under strace until it hung; this is the output:

write(9, "DEBUG: Parser: pktLen:4668,padLe"..., 52DEBUG: Parser: pktLen:4668,padLen:11,remainLen:4656
) = 52
epoll_wait(5, {{EPOLLIN, {u32=12, u64=12}}}, 1024, 0) = 1
read(12, "!\v\343\351\263q\r.\375\230>X\250\204O\310\322O\f\330\260\302C)\207\233\177j\275+\231\255"..., 65536) = 2240
epoll_wait(5, 

A valid run looks like this:

write(9, "DEBUG: Parser: pktLen:4620,padLe"..., 52DEBUG: Parser: pktLen:4620,padLen:11,remainLen:4608
) = 52
epoll_wait(5, {{EPOLLIN, {u32=12, u64=12}}}, 1024, 0) = 1
read(12, "\321\7\35Q\227\363\35R\267\235\207\210\322\tA]\233\234\314\315\213\343\216,M\17\365\367\215\23\33\217"..., 65536) = 1680
epoll_wait(5, {{EPOLLIN, {u32=12, u64=12}}}, 1024, -1) = 1
read(12, "As`\362\23\325n\301!|fQ\267\233\236&m\2\255\177\5\237cXGK\320\2\16\235\266\7"..., 65536) = 560
epoll_wait(5, {{EPOLLIN, {u32=12, u64=12}}}, 1024, -1) = 1
read(12, "\350\245\201hd\212\32\273\374)\356\1E3\276>\320q\210.\305\376\344\301\267j>\220\306\301\251\241"..., 65536) = 1120
epoll_wait(5, {{EPOLLIN, {u32=12, u64=12}}}, 1024, -1) = 1
read(12, "u\302\234\251q\37\nw.\376\247\332\336\343\304\251UY\327\323\360\n\213\334L\326\22\16\244\333\336\301"..., 65536) = 160
write(9, "DEBUG: Parser: IN_PACKETDATA\n", 29DEBUG: Parser: IN_PACKETDATA
) = 29
write(9, "DEBUG: Parser: Decrypting\n", 26DEBUG: Parser: Decrypting
) = 26
write(9, "DEBUG: Parser: HMAC size:16\n", 28DEBUG: Parser: HMAC size:16
) = 28
write(9, "DEBUG: Parser: IN_PACKETDATAVERI"..., 35DEBUG: Parser: IN_PACKETDATAVERIFY
) = 35
write(9, "DEBUG: Parser: Verifying MAC\n", 29DEBUG: Parser: Verifying MAC
) = 29

Any assistance on this issue is greatly appreciated; let me know if there is anything I can do to help.

keyParser headers

The OSX keychain exports RSA keys with the header -----BEGIN RSA PUBLIC KEY-----, regardless of the export format (SSH, SSH2, OpenSSL). The private key begins with -----BEGIN RSA PRIVATE KEY-----.

The keyParser doesn't anticipate this PEM armour.

Do you have any suggestions to integrate with this type of OSX export?

Responding to a READDIR request - folders always being listed as files

I'm responding to a READDIR request with the following data:

[
    {
        "filename": "stuff.txt",
        "longname": "-rw------- 1 foo foo 20 Sep 8 14:39 stuff.txt",
        "attrs": {
            "mode": "0600",
            "size": 20,
            "atime": 1504900534,
            "mtime": 1504899577
        }
    },
    {
        "filename": "things",
        "longname": "drw------- 2 foo foo 68 Sep 8 14:48 things",
        "attrs": {
            "mode": "0600",
            "size": 68,
            "atime": 1504901150,
            "mtime": 1504900080
        }
    }
]

things is a directory, but my SFTP client (Transmit) thinks it is a file. Am I doing something wrong?
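A likely culprit (an educated guess, not a confirmed diagnosis): in SFTPv3 the file type is carried in the numeric mode bits, and a string like "0600" carries no type information at all. Encoding the same entries with numeric modes that include the POSIX type bits might look like:

```javascript
// Sketch: numeric modes with POSIX type bits set, so clients can
// tell directories from regular files in a READDIR response.
const S_IFDIR = 0o040000; // directory
const S_IFREG = 0o100000; // regular file

const entries = [
  {
    filename: 'stuff.txt',
    longname: '-rw------- 1 foo foo 20 Sep 8 14:39 stuff.txt',
    attrs: { mode: S_IFREG | 0o600, size: 20,
             atime: 1504900534, mtime: 1504899577 },
  },
  {
    filename: 'things',
    longname: 'drw------- 2 foo foo 68 Sep 8 14:48 things',
    attrs: { mode: S_IFDIR | 0o600, size: 68,
             atime: 1504901150, mtime: 1504900080 },
  },
];
```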

How to conn setMaxListeners?

Hello,
I'm developing SFTP-related functionality using the ssh2 module.
I've run into a problem using it, so I'm creating this issue.

This is the error log:

MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 
11 ready listeners added. Use emitter.setMaxListeners() to increase 
  | process.on | @ | internal/process/warning.js:21
  | emitOne | @ | events.js:96
  | emit | @ | events.js:191
  | process.nextTick | @ | internal/process/warning.js:47
  | _combinedTickCallback | @ | internal/process/next_tick.js:73
  | _tickCallback | @ | internal/process/next_tick.js:104


Here is the code that triggers the problem:

// other code...
import { Client } from 'ssh2';

const conn = new Client();
const config = {
  host: database.sftpURL,
  port: database.sftpPORT,
  username: database.sftpUsername,
  password: database.sftpPassword,
};

// connect conn and return sftp object using Promise Patterns
function connectSFTP() {
  return new Promise((resolve, reject) => {
    conn.on('ready', () => {
      conn.sftp((err, sftp) => {
        if (err) reject({ error: err, message: 'Blah Blah' });
        resolve(sftp);
      });
    }).connect(config);
  })
}

// calls connectSFTP()
function methodName(cb, errorCb) {
  connectSFTP()
  .then(sftp => Promise.resolve(otherMethod(sftp)))
  .then((data) => {
    conn.end();
    cb(data);
  })
  .catch((err) => {
    conn.end();
    errorCb(err)
  });
}

function otherMethod(sftp) {
  return new Promise((resolve, reject) => {
    sftp.readdir('.', (err, files) => {
      if (err) reject({ error: err, message: 'Blah Blah' });
      resolve(files);
    });
  });
}

I'm using code similar to the above.
The problem is that Node registers at most ten listeners by default, so the warning occurs on the 11th call.

To solve this I set conn.setMaxListeners(0), but the problem persists.

How can I solve this problem?
Please help!
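A sketch of one likely fix (an assumption based on the snippet above, not a confirmed diagnosis): every call to connectSFTP() adds another 'ready' listener to the same shared Client, so creating a fresh client per connection and attaching listeners with .once() keeps the count at one. `makeClient` is a hypothetical factory, e.g. `() => new (require('ssh2').Client)()`.

```javascript
// Sketch: one Client per connection; .once() listeners never
// accumulate, so the MaxListeners warning cannot trigger.
function connectSFTP(makeClient, config) {
  return new Promise((resolve, reject) => {
    const conn = makeClient(); // fresh instance per call
    conn.once('ready', () => {
      conn.sftp((err, sftp) => {
        if (err) return reject(err);
        resolve({ conn, sftp });
      });
    });
    conn.once('error', reject);
    conn.connect(config);
  });
}
```

The caller would then conn.end() the returned connection when finished, instead of reusing a module-level one.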

Support for recursive copy with put?

sftp's put command supports the -r option, which will recursively copy a directory to the destination. After digging through the code, I don't see a way to do that with this lib, but I could be overlooking something. Is it supported? If not, are there plans to support it?

scp is not an option for me. I've got to upload files to a server that is secured in such a way that sftp is my only option.

How to use fastput

Hey, I'm having a hard time implementing fastPut; it would be great if the docs included an example.
So far this is what I've tried:

const ssh2Client = require("ssh2").Client;
let sftpConnection = new ssh2Client();
let sftpOptions = {
    host: ip,
    username: 'ubuntu',
    path: sourcePath,
    remoteDir: destinationPath,
    privateKey: fs.readFileSync('./somekeyfile.pem')
};
sftpConnection.on('ready', function() {
    sftpConnection.sftp(function(err, sftp) {
        if (err) throw err;
        sftp.fastPut(sftpOptions.path, sftpOptions.remoteDir, function(err) {
            if (err) {
                console.log("err in fastput");
                console.log(err);
                return;
            }
            console.log("completed -----------------------------------------");
        });
    });
}).connect(sftpOptions);

Off-by-one bug handling bad_pkt leads to server error upon receiving two SSH_FXP_EXTENDED requests in sequence

Unit test repro:

  { run: function() {
      setup(this);

      var self = this;
      var what = this.what;
      var client = this.client;

      client._state.extensions['[email protected]'] = ['2'];

      this.onReady = function() {
        var pathOne_ = '/foo/baz';
        client.ext_openssh_statvfs(pathOne_, function(err, fsInfo) {
          assert(++self.state.responses === 1,
                 makeMsg(what, 'Saw too many responses'));
          assert(err && err.code === STATUS_CODE.OP_UNSUPPORTED,
                 makeMsg(what, 'Expected OP_UNSUPPORTED, got: ' + err));

          var pathTwo_ = '/baz/foo';
          client.ext_openssh_statvfs(pathOne_, function(err, fsInfo) {
            assert(++self.state.responses === 2,
                   makeMsg(what, 'Saw too many responses'));
            assert(err && err.code === STATUS_CODE.OP_UNSUPPORTED,
                   makeMsg(what, 'Expected OP_UNSUPPORTED, got: ' + err));
            next();
          });
        });
      };
    },
    what: 'multiple extended operations in sequence fail as expected'
  },

Results in:

AssertionError: [test-sftp/multiple extended operations fail as expected]: Unexpected server error: Error: Packet length (2046820351) exceeds max length (34000)

Error: com.jcraft.jsch.JSchException: Auth fail at onDISCONNECT

First - thank you for the substantial amount of work you've put in on this library. It is tremendously useful.

I've put together a server for the sole purpose of accepting SFTP connections:

https://github.com/tkambler/sftp-server

I'm still getting familiar with the various stream classes, and I've yet to determine the source of the following error that I'm frequently seeing show up in my logs. If you could point me in the right direction, I would really appreciate it.

Error: com.jcraft.jsch.JSchException: Auth fail
    at onDISCONNECT (/home/ubuntu/sftp/node_modules/ssh2-streams/lib/ssh.js:2201:15)
    at SSH2Stream.<anonymous> (/home/ubuntu/sftp/node_modules/ssh2-streams/lib/ssh.js:203:5)
    at emitMany (events.js:127:13)
    at SSH2Stream.emit (events.js:201:7)
    at parsePacket (/home/ubuntu/sftp/node_modules/ssh2-streams/lib/ssh.js:3747:10)
    at SSH2Stream._transform (/home/ubuntu/sftp/node_modules/ssh2-streams/lib/ssh.js:668:13)
    at SSH2Stream.Transform._read (_stream_transform.js:167:10)
    at SSH2Stream._read (/home/ubuntu/sftp/node_modules/ssh2-streams/lib/ssh.js:251:15)
    at SSH2Stream.Transform._write (_stream_transform.js:155:12)
    at doWrite (_stream_writable.js:334:12)
    at writeOrBuffer (_stream_writable.js:320:5)
    at SSH2Stream.Writable.write (_stream_writable.js:247:11)
    at Socket.ondata (_stream_readable.js:555:20)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:188:7)
    at readableAddChunk (_stream_readable.js:176:18)
    at Socket.Readable.push (_stream_readable.js:134:10)
    at TCP.onread (net.js:548:20)

SFTP ReadStream doesn't handle newlines

Hi,

I'm reading a remote file with:

var data = '';
sftp.createReadStream(path.join(csvReturnPath, getFileName(new Date(), true)), {encoding: 'utf8'})
.on('readable', function() {
  // this.read() returns null once the buffer is drained, so guard
  // against appending the string "null"
  var chunk;
  while ((chunk = this.read()) !== null) {
    data += chunk;
  }
})
.on('end', function() {
  console.log(data);
});

My remote file has multiple lines, but all I get in data is one big string without any \n. For comparison, fs.createReadStream gives me a nice multiline string.

Is there an option to handle this, or is this a bug?

Thanks

Seeing a signature verification failure connecting to a Solaris sshd server with 7.1p2

While I was able to successfully connect to Solaris running sshd 7.1p1, I seem to be having problems connecting using ssh2 to 7.1p2, with a failure inside ssh2-streams.

Is there some way I could figure out whether the problem is on the client or the server side, since I don't have any problems connecting to this system with any other ssh client?

I can't see anything obvious in the debug output:

DEBUG: Local ident: 'SSH-2.0-ssh2js0.1.1'
DEBUG: Client: Trying remote-host on port 22 ...
DEBUG: Client: Connected
DEBUG: Parser: IN_INIT
DEBUG: Parser: IN_GREETING
DEBUG: Parser: IN_HEADER
DEBUG: Remote ident: 'SSH-2.0-OpenSSH_7.1'
DEBUG: Outgoing: Writing KEXINIT
DEBUG: Parser: IN_PACKETBEFORE (expecting 8)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: pktLen:948,padLen:9,remainLen:944
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: IN_PACKETDATAAFTER, packet: KEXINIT
DEBUG: Comparing KEXINITs ...
DEBUG: (local) KEX algorithms: diffie-hellman-group1-sha1,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
DEBUG: (remote) KEX algorithms: gss-gex-sha1-toWM5Slw5Ew8Mqkay+al2g==,gss-group1-sha1-toWM5Slw5Ew8Mqkay+al2g==,gss-group14-sha1-toWM5Slw5Ew8Mqkay+al2g==,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
DEBUG: KEX algorithm: diffie-hellman-group-exchange-sha256
DEBUG: (local) Host key formats: ssh-rsa,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
DEBUG: (remote) Host key formats: ssh-rsa
DEBUG: Host key format: ssh-rsa
DEBUG: (local) Client->Server ciphers: aes128-cbc,3des-cbc,blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm,aes128-gcm@openssh.com,aes256-gcm,aes256-gcm@openssh.com
DEBUG: (remote) Client->Server ciphers: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
DEBUG: Client->Server Cipher: aes128-ctr
DEBUG: (local) Server->Client ciphers: aes128-cbc,3des-cbc,blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm,aes128-gcm@openssh.com,aes256-gcm,aes256-gcm@openssh.com
DEBUG: (remote) Server->Client ciphers: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
DEBUG: Server->Client Cipher: aes128-ctr
DEBUG: (local) Client->Server HMAC algorithms: hmac-sha2-256,hmac-sha2-512,hmac-sha1
DEBUG: (remote) Client->Server HMAC algorithms: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
DEBUG: Client->Server HMAC algorithm: hmac-sha2-256
DEBUG: (local) Server->Client HMAC algorithms: hmac-sha2-256,hmac-sha2-512,hmac-sha1
DEBUG: (remote) Server->Client HMAC algorithms: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
DEBUG: Server->Client HMAC algorithm: hmac-sha2-256
DEBUG: (local) Client->Server compression algorithms: none,zlib@openssh.com,zlib
DEBUG: (remote) Client->Server compression algorithms: none,zlib@openssh.com
DEBUG: Client->Server compression algorithm: none
DEBUG: (local) Server->Client compression algorithms: none,zlib@openssh.com,zlib
DEBUG: (remote) Server->Client compression algorithms: none,zlib@openssh.com
DEBUG: Server->Client compression algorithm: none
DEBUG: Outgoing: Writing KEXDH_GEX_REQUEST
DEBUG: Parser: IN_PACKETBEFORE (expecting 8)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: pktLen:404,padLen:8,remainLen:400
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: IN_PACKETDATAAFTER, packet: KEXDH_GEX_GROUP
DEBUG: Outgoing: Writing KEXDH_GEX_INIT
DEBUG: Parser: IN_PACKETBEFORE (expecting 8)
DEBUG: Parser: IN_PACKET
DEBUG: Parser: pktLen:956,padLen:9,remainLen:952
DEBUG: Parser: IN_PACKETDATA
DEBUG: Parser: IN_PACKETDATAAFTER, packet: KEXDH_GEX_REPLY
DEBUG: Checking host key format
DEBUG: Checking signature format
DEBUG: Verifying host fingerprint
DEBUG: Host accepted by default (no verification)
DEBUG: Verifying signature
DEBUG: Signature could not be verified
DEBUG: Outgoing: Writing DISCONNECT (KEY_EXCHANGE_FAILED)

Any advice on figuring this out would be great - unfortunately I cannot provide you
with access to the remote machine.

Standalone SFTP over stream?

If I already have a full-duplex stream between two parties (without SSH), can the SFTPStream module listen/operate over that stream? That is, just the SFTP protocol implementation over a stream, without the SSH dependency.

`sftp.createWriteStream` always yields a file with 0664 permissions

Hi there,

When using the sftp.createWriteStream method, the file written to the server always ends up with 0664 permissions, regardless of what is set in the mode property of the options argument. This also occurs if no options argument is provided.

Thank you for your work on this excellent library.
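If anyone needs a workaround until this is fixed, one option is to set the mode explicitly after the upload. This is a sketch under two assumptions: the sftp object exposes chmod(path, mode, cb), and the write stream emits 'close' once the remote handle is closed; writeWithMode is a hypothetical helper name:

```javascript
// Hypothetical helper: write a remote file, then force its permissions.
function writeWithMode(sftp, remotePath, mode, done) {
  const ws = sftp.createWriteStream(remotePath);
  ws.on('error', done);
  // Once the remote file handle is closed, chmod it to the desired mode.
  ws.on('close', () => sftp.chmod(remotePath, mode, done));
  return ws;
}

// Usage: writeWithMode(sftp, '/tmp/out.txt', 0o600, cb).end(buffer);
```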

SFTP client fastPut error

I can successfully connect to the server and get a directory listing, but when attempting to upload a file with the following code, I get a failure with the stack trace below:

ftp.on('ready', function () {

    console.log('connection success');
    ftp.sftp(function (err, sftp) {
        if (err)
            throw err;

        sftp.fastPut(currentFile, '/', function (err) {
            if (err) throw err;
            console.log('done');
        });
    });
});

Error: Failure
    at SFTPStream._transform (C:\Users\sam.bengtson\Documents\NodeProjects\nodejs\tsm-usg-batch\node_modules\ssh2\node_modules\ssh2-streams\lib\sftp.js:354:27)
    at SFTPStream.Transform._read [as __read] (_stream_transform.js)
    at SFTPStream._read (C:\Users\sam.bengtson\Documents\NodeProjects\nodejs\tsm-usg-batch\node_modules\ssh2\node_modules\ssh2-streams\lib\sftp.js:160:15)
    at SFTPStream.Transform._write (_stream_transform.js:167:12)
    at doWrite (_stream_writable.js:301:12)
    at writeOrBuffer (_stream_writable.js:288:5)
    at SFTPStream.Writable.write (_stream_writable.js:217:11)
    at Channel.ondata (_stream_readable.js:540:20)
    at Channel.emit (events.js:107:17)
    at readableAddChunk (_stream_readable.js:163:16)

Any thoughts?

SFTP version negotiation

I think the version should be negotiated instead of terminating with an error when the client version is not exactly "3". With WinSCP I currently have to explicitly select version 3 as preferred instead of its default of 6.
As I understand it, the server should respond to the client's INIT packet with its own preferred version, at least when the client's version is greater than or equal to the highest version the server supports.
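The proposal amounts to the server answering the client's INIT with the lower of the two versions. A minimal sketch (SERVER_VERSION and negotiateVersion are hypothetical names, not ssh2-streams API):

```javascript
const SERVER_VERSION = 3; // highest SFTP version this server implements

// On SSH_FXP_INIT, reply with min(client, server) rather than
// rejecting every client whose version is not exactly 3.
function negotiateVersion(clientVersion) {
  return Math.min(clientVersion, SERVER_VERSION);
}

console.log(negotiateVersion(6)); // → 3 (e.g. WinSCP defaulting to 6)
console.log(negotiateVersion(3)); // → 3
```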
