
fastdfs-client-java's People

Contributors

dependabot[bot], happyfish100, json20080301, mengpengfei, niloay6, rui8832, shootercheng, sunqiangwei1988

fastdfs-client-java's Issues

Configuring FastDFS so an application on the public network can reach a FastDFS server on an internal network

After integrating fastdfs-client, the application is deployed on Alibaba Cloud while the FastDFS server sits on an internal network. fdfs.tracker-list is configured with the public IP, yet the connection on port 23000 still picks up the internal IP.
com.github.tobato.fastdfs.proto.AbstractFdfsCommand.send(AbstractFdfsCommand.java:74) - sending request.. header is ProtoHead [contentLength=0, cmd=101, status=0]
[17:16:41:037] [DEBUG] - com.github.tobato.fastdfs.proto.AbstractFdfsCommand.send(AbstractFdfsCommand.java:75) - request parameters: []
[17:16:41:050] [DEBUG] - com.github.tobato.fastdfs.proto.AbstractFdfsCommand.receive(AbstractFdfsCommand.java:99) - server returned header ProtoHead [contentLength=40, cmd=100, status=0]
[17:16:41:053] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:201) - dump class=com.github.tobato.fastdfs.domain.StorageNode
[17:16:41:053] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:202) - ----------------------------------------
[17:16:41:054] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:204) - FieldMataData [field=groupName, index=0, max=16, size=16, offsize=0]
[17:16:41:054] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:204) - FieldMataData [field=ip, index=1, max=15, size=15, offsize=16]
[17:16:41:054] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:204) - FieldMataData [field=port, index=2, max=0, size=8, offsize=31]
[17:16:41:054] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:204) - FieldMataData [field=storeIndex, index=3, max=0, size=1, offsize=39]
[17:16:41:055] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.FdfsParamMapper.mapByIndex(FdfsParamMapper.java:94) - setting value: FieldMataData [field=groupName, index=0, max=16, size=16, offsize=0]group1
[17:16:41:087] [DEBUG] - org.apache.commons.beanutils.ConvertUtilsBean.convert(ConvertUtilsBean.java:418) - Convert string 'group1' to class 'java.lang.String'
[17:16:41:089] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.FdfsParamMapper.mapByIndex(FdfsParamMapper.java:94) - setting value: FieldMataData [field=ip, index=1, max=15, size=15, offsize=16]192.168.99.223
[17:16:41:089] [DEBUG] - org.apache.commons.beanutils.ConvertUtilsBean.convert(ConvertUtilsBean.java:418) - Convert string '192.168.99.223' to class 'java.lang.String'
[17:16:41:089] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.FdfsParamMapper.mapByIndex(FdfsParamMapper.java:94) - setting value: FieldMataData [field=port, index=2, max=0, size=8, offsize=31]23000
[17:16:41:090] [DEBUG] - org.apache.commons.beanutils.ConvertUtilsBean.convert(ConvertUtilsBean.java:418) - Convert string '23000' to class 'int'
[17:16:41:090] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.FdfsParamMapper.mapByIndex(FdfsParamMapper.java:94) - setting value: FieldMataData [field=storeIndex, index=3, max=0, size=1, offsize=39]0
[17:16:41:090] [DEBUG] - org.apache.commons.beanutils.ConvertUtilsBean.convert(ConvertUtilsBean.java:418) - Convert string '0' to class 'byte'
[17:16:41:092] [DEBUG] - com.github.tobato.fastdfs.conn.DefaultConnection.(DefaultConnection.java:48) - connect to /192.168.99.223:23000 soTimeout=3000 connectTimeout=6000
[17:16:42:582] [DEBUG] - com.alibaba.dubbo.remoting.exchange.support.header.HeartbeatHandler.received(HeartbeatHandler.java:74) - [DUBBO] Received heartbeat from remote channel /113.118.199.215:62212, cause: The channel has no data-transmission exceeds a heartbeat period: 60000ms, dubbo version: 2.6.2, current host: 47.89.31.99
[17:16:44:568] [DEBUG] - org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:742) - Got ping response for sessionid: 0x1668201744503d6 after 13ms
[17:16:47:094] [ERROR] - com.bessky.erp.logistics.api.tengjia.GetTengJiaCall.submitOrder(GetTengJiaCall.java:168) - unable to obtain a server connection resource: can't create connection to/192.168.99.223:23000
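
For context (a sketch, not a fix): the tracker address comes from the client configuration, but the storage address in these logs comes from the tracker's reply, which reports the IP the storage server registered with. Setting fdfs.tracker-list to a public IP therefore only affects the first hop. The sketch below uses the org.csource client from this repository (the tobato client in the logs wraps the same protocol); the config path is an example.

import org.csource.fastdfs.*;

public class WhereDoesTheStorageIpComeFrom {
    public static void main(String[] args) throws Exception {
        ClientGlobal.init("fdfs_client.conf"); // example config path
        TrackerClient trackerClient = new TrackerClient();
        // First hop: uses the tracker address from the client config.
        TrackerServer trackerServer = trackerClient.getConnection();
        // Second hop: the tracker replies with the storage's registered
        // (here: internal) address, regardless of what the client configured.
        StorageServer storageServer = trackerClient.getStoreStorage(trackerServer);
        System.out.println(storageServer.getInetSocketAddress()); // e.g. /192.168.99.223:23000
    }
}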

Connection timeout problem

StorageServer storageServer = trackerClient.getStoreStorage(trackerServer);
When execution reaches the line above, a timeout error is thrown:
java.net.SocketTimeoutException: connect timed out

I checked the FastDFS server IP and port; both are correct, and both the trackerClient and trackerServer objects are obtained successfully.
The full code is as follows:
ClientGlobal.init(CONFIG_FILENAME);
trackerClient = new TrackerClient();
trackerServer = trackerClient.getConnection();
if (trackerServer == null) {
    throw new IllegalStateException("getConnection return null");
}

StorageServer storageServer = trackerClient.getStoreStorage(trackerServer); // <-- times out here
if (storageServer == null) {
    throw new IllegalStateException("getStoreStorage return null");
}
storageClient = new StorageClient1(trackerServer, storageServer);
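
Since trackerClient and trackerServer being non-null does not by itself prove the tracker port is reachable (depending on the client version the socket may be opened lazily, so the TCP connect can surface only at getStoreStorage()), a plain-JDK probe helps separate a network problem from a client problem. Host and ports below are examples.

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        probe("192.168.1.10", 22122); // tracker port (example address)
        probe("192.168.1.10", 23000); // storage port (example address)
    }

    static void probe(String host, int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 3000); // 3s connect timeout
            System.out.println(host + ":" + port + " reachable");
        } catch (Exception e) {
            System.out.println(host + ":" + port + " NOT reachable: " + e);
        }
    }
}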

Uploading files as a stream

I know this request goes somewhat against the grain of FastDFS, but I still hope streaming upload of large files can be supported.
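
For what it's worth, the client already has a streaming path: the UploadCallback-based upload_file overload together with org.csource.fastdfs.UploadStream reads from an InputStream chunk by chunk instead of requiring the whole file as a byte[]. A sketch (paths and config are examples; check that your client version includes these classes):

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import org.csource.fastdfs.*;

public class StreamUpload {
    public static void main(String[] args) throws Exception {
        ClientGlobal.init("fdfs_client.conf");             // example config path
        TrackerServer trackerServer = new TrackerClient().getConnection();
        StorageClient storageClient = new StorageClient(trackerServer, null);

        File file = new File("big.dat");                   // example file
        try (InputStream in = new FileInputStream(file)) {
            // UploadStream pumps the stream to the storage server in chunks.
            String[] result = storageClient.upload_file(
                    null, file.length(), new UploadStream(in, file.length()), "dat", null);
            System.out.println(result[0] + "/" + result[1]); // group name + remote file name
        }
    }
}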

Java client: OutOfMemoryError when downloading files

Downloading a single file larger than a few hundred MB throws an OutOfMemoryError. Is this a bug, or are larger files simply not supported for download?

Exception in thread "pool-1-thread-1" java.lang.OutOfMemoryError: Java heap space
    at org.csource.fastdfs.ProtoCommon.recvPackage(ProtoCommon.java:256)
    at org.csource.fastdfs.StorageClient.download_file(StorageClient.java:1412)
    at org.csource.fastdfs.StorageClient.download_file(StorageClient.java:1387)
    at com.common.impl.FDFSBaseDaoImpl.download(FDFSBaseDaoImpl.java:136)
    at com.service.impl.FDFSServiceImpl.download(FDFSServiceImpl.java:25)
    at com.multi.DownloadThread.run(DownloadThread.java:20)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Exception in thread "pool-1-thread-2" java.lang.OutOfMemoryError: Java heap space
    at org.csource.fastdfs.ProtoCommon.recvPackage(ProtoCommon.java:256)
    at org.csource.fastdfs.StorageClient.download_file(StorageClient.java:1412)
    at org.csource.fastdfs.StorageClient.download_file(StorageClient.java:1387)
    at com.common.impl.FDFSBaseDaoImpl.download(FDFSBaseDaoImpl.java:136)
    at com.service.impl.FDFSServiceImpl.download(FDFSServiceImpl.java:25)
    at com.multi.DownloadThread.run(DownloadThread.java:20)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
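
The byte[]-returning download_file overloads buffer the entire file on the heap, which is what recvPackage is doing when it dies here. The DownloadCallback-based overload avoids that by handing the file over in chunks. A sketch (group, remote path, and output name are examples):

import java.io.FileOutputStream;
import java.io.IOException;
import org.csource.fastdfs.*;

public class StreamDownload {
    public static void main(String[] args) throws Exception {
        ClientGlobal.init("fdfs_client.conf");             // example config path
        TrackerServer trackerServer = new TrackerClient().getConnection();
        StorageClient storageClient = new StorageClient(trackerServer, null);

        try (FileOutputStream out = new FileOutputStream("big.out")) {
            int rc = storageClient.download_file("group1", "M00/00/00/example.dat",
                    new DownloadCallback() {
                        public int recv(long fileSize, byte[] data, int bytes) {
                            try {
                                out.write(data, 0, bytes); // write chunk by chunk
                                return 0;                  // 0 = keep going
                            } catch (IOException e) {
                                return -1;                 // non-zero aborts the download
                            }
                        }
                    });
            System.out.println("result code: " + rc);      // 0 on success
        }
    }
}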

Accessing a FastDFS cluster alternates between group1 and group2

Suppose there are two groups, group1 and group2, and a file is stored in group2. When deleting it with storageClient.delete_file(groupName, remote_filename), even though groupName is set to group2, the first call still goes to group1, fails to find the file, and returns error code 22. Calling it again then goes to group2. Calling it a third time goes to group1 again.
It is very strange that the group is explicitly specified, yet execution keeps switching between groups. Is this a bug?
The fdfs_delete_file tool that ships with the Linux installation does not have this problem.
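
A plausible explanation (hedged, but it matches the alternating behavior): if the StorageClient was constructed with a StorageServer obtained from TrackerClient.getStoreStorage(), the tracker assigns that store server round-robin across groups, and the client then sends delete_file to that cached server regardless of the groupName argument. fdfs_delete_file resolves the server from the file ID on every call, which would explain why it is unaffected. A minimal sketch that lets the client resolve the right storage per call:

import org.csource.fastdfs.*;

public class DeleteByGroup {
    public static void main(String[] args) throws Exception {
        ClientGlobal.init("fdfs_client.conf");              // example config path
        TrackerServer trackerServer = new TrackerClient().getConnection();
        // Passing null for the storage server makes the client ask the
        // tracker for the server that actually holds the given group's files.
        StorageClient storageClient = new StorageClient(trackerServer, null);
        int rc = storageClient.delete_file("group2", "M00/00/00/example.jpg"); // example file ID
        System.out.println(rc == 0 ? "deleted" : "error code: " + rc);
    }
}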

Deleting a file always takes two calls to succeed

fastdfs: 5.11
java-client: 1.27

public ResultVO deletes(@RequestBody List<String> fileUrls) {
    ResultVO resultVO = new ResultVO();
    // List<String> files = fileUrls.get("fileUrls");
    if (fileUrls == null || fileUrls.size() == 0) {
        logger.warn("file path list must not be empty");
        resultVO.setErr("file path list must not be empty");
        resultVO.setFlag(false);
        return resultVO;
    }
    TrackerClient trackerClient = null;
    TrackerServer trackerServer = null;
    StorageServer storageServer = null;
    StorageClient storageClient = null;
    int result = -1;
    try {
        // String filePath = new ClassPathResource("fdfs_client.conf").getFile().getAbsolutePath();
        ClientGlobal.init(fastdfsConfig);
        trackerClient = new TrackerClient();
        trackerServer = trackerClient.getConnection();
        if (trackerServer != null) {
            storageServer = trackerClient.getStoreStorage(trackerServer);
            if (storageServer != null) {
                storageClient = new StorageClient(trackerServer, storageServer);
                if (storageClient != null) {
                    for (int i = 0; i < fileUrls.size(); i++) {
                        String groupName = fileUrls.get(i).substring(0, fileUrls.get(i).indexOf('/'));
                        String remoteFile = fileUrls.get(i).substring(fileUrls.get(i).indexOf('/') + 1);
                        logger.debug(groupName);
                        logger.debug(remoteFile);
                        result = storageClient.delete_file(groupName, remoteFile);
                        if (result != 0) {
                            if (resultVO.getPath() != null) {
                                resultVO.setPath(resultVO.getPath() + ";" + fileUrls.get(i));
                            } else {
                                resultVO.setPath(fileUrls.get(i));
                            }
                        }
                    }
                    if (resultVO.getPath() != null) {
                        resultVO.setCount(-1);
                        resultVO.setFlag(false);
                    } else {
                        resultVO.setCount(result);
                        resultVO.setFlag(true);
                    }
                } else {
                    throw new IOException("failed to create storageClient");
                }
            } else {
                throw new IOException("failed to connect to storageServer");
            }
        } else {
            throw new IOException("failed to connect to trackerServer");
        }
    } catch (Exception err) {
        logger.error(err.getLocalizedMessage());

        resultVO.setFlag(false);
        resultVO.setCount(-1);
        resultVO.setErr(err.getLocalizedMessage());
    } finally {
        close(trackerServer, storageServer);
    }
    return resultVO;
}

private void close(TrackerServer trackerServer, StorageServer storageServer) {
    try {
        if (storageServer != null) {
            storageServer.close();
        }
        if (trackerServer != null) {
            trackerServer.close();
        }
    } catch (Exception err) {
        logger.error("failed to close connections: " + err.getLocalizedMessage());
    }
}

Every time I delete, the first call just returns 22 with no explanation of the cause; the second call succeeds. I don't know why.
I also found this in the logs: ERROR - file: storage_service.c, line: 7255, client ip:192.168.0.3, group_name: group1 not correct, should be: group2
But the file I am deleting is on group1.
192.168.0.3 is the group1 machine.
When I delete a file on group2, it instead says the group should be group1.

2019-05-08 10:10:13.137 fastdfs [http-nio-18787-exec-2] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:168 -group1
2019-05-08 10:10:13.138 fastdfs [http-nio-18787-exec-2] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:169 -M00/00/00/wKgA6lzRJbyAMpRrAAFPhZzrGbc49..jpg
2019-05-08 10:10:13.139 fastdfs [http-nio-18787-exec-2] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:171 -22
2019-05-08 10:10:28.007 fastdfs [http-nio-18787-exec-4] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:168 -group1
2019-05-08 10:10:28.007 fastdfs [http-nio-18787-exec-4] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:169 -M00/00/00/wKgA6lzRJbyAMpRrAAFPhZzrGbc49..jpg
2019-05-08 10:10:28.008 fastdfs [http-nio-18787-exec-4] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:171 -0
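
A likely cause, hedged, and the same mechanism as the previous issue: the code above pins the client to the store server picked by getStoreStorage(), which the tracker assigns round-robin across groups, so delete_file goes to whichever group was assigned rather than the group encoded in the path. A minimal sketch of the change to the handler above:

// Hedged diagnosis: let delete_file resolve the storage for each group
// itself instead of pinning the tracker-assigned store server.
trackerServer = trackerClient.getConnection();
storageClient = new StorageClient(trackerServer, null); // was: new StorageClient(trackerServer, storageServer)
result = storageClient.delete_file(groupName, remoteFile);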

Creating a new client each time also throws

java.net.SocketException: Socket closed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at org.csource.fastdfs.ProtoCommon.recvHeader(ProtoCommon.java:168)
at org.csource.fastdfs.ProtoCommon.recvPackage(ProtoCommon.java:201)
at org.csource.fastdfs.TrackerClient.getStoreStorage(TrackerClient.java:130)
at org.csource.fastdfs.StorageClient.newWritableStorageConnection(StorageClient.java:1627)
at org.csource.fastdfs.StorageClient.do_upload_file(StorageClient.java:639)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:162)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:180)
at org.csource.fastdfs.StorageClient1.upload_file1(StorageClient1.java:103)

Null pointer exception under load testing

During load testing, any amount of concurrency triggers a NullPointerException.

java.lang.NullPointerException
    at org.csource.fastdfs.StorageClient.do_upload_file(StorageClient.java:862)
    at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:208)
    at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:226)

Tracing through the code, this.storageServer is an instance field, so concurrent calls from multiple threads race on it.

The logic appears to be: this.storageServer is assigned on method entry and released on method exit, so if one thread releases it just as another thread enters the method, a NullPointerException results.

There are two possible fixes:

  1. Mark the methods synchronized to rule out concurrent access.
  2. Store the storageServer in a ThreadLocal.

In my local load tests, adding synchronized made the problem stop appearing. I wonder whether others run into this as well.
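
A sketch of the second option from the caller's side (names and config handling are illustrative): one StorageClient per thread, since the client instance keeps state in fields and is not safe to share.

import org.csource.fastdfs.*;

public class PerThreadClient {
    // One client per thread: StorageClient keeps the storage server in an
    // instance field, so instances must not be shared across threads.
    private static final ThreadLocal<StorageClient> CLIENT = ThreadLocal.withInitial(() -> {
        try {
            // ClientGlobal.init(...) is assumed to have run once at startup.
            TrackerServer trackerServer = new TrackerClient().getConnection();
            return new StorageClient(trackerServer, null);
        } catch (Exception e) {
            throw new IllegalStateException("cannot connect to tracker", e);
        }
    });

    public static String[] upload(byte[] bytes, String ext) throws Exception {
        return CLIENT.get().upload_file(bytes, ext, null);
    }
}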

FastDFSException: get storage failed suddenly appears at runtime

The exception FastDFSException: get storage failed suddenly appears while the application is running.
The storage and tracker server logs show no errors.
Restarting the Tomcat that hosts the client makes it work again,
but after running for a while the error comes back.
What could cause this? I have no idea where to start.

Installing the jar into the Maven repository fails

maven clean install succeeds.
Running mvn install:install-file -DgroupId=org.csource -DartifactId=fastdfs-client-java -Dversion=${version} -Dpackaging=jar -Dfile=fastdfs-client-java-${version}.jar fails. (One common cause: if ${version} is not replaced with the actual built version, or is an unset shell variable, -Dfile points at a jar that does not exist.)

(screenshot of the error attached)

Does upload support custom remote file names?

When uploading, I need to specify the file name and directory rather than accept the randomly generated defaults.

Does the existing client upload API support this? If so, how do I call it? (Forgive my ignorance, I could not find any API documentation...)

Looking at the test code in the sources, this method looked promising:
public String[] upload_file(String group_name, String master_filename, String prefix_name, byte[] file_buff, String file_ext_name, NameValuePair[] meta_list)

But calling it always returns null, with no exception thrown and no hint of any kind.
Running under a debugger, a "socket connection error" exception can be observed while the data is being written.

Is there any passing expert who knows? Help!
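
For reference, the overload quoted above is the slave-file API: master_filename must name a file already stored in that group, and the resulting name is derived from the master name plus prefix_name, so fully arbitrary names are not supported (FastDFS generates file IDs itself). A sketch of how the slave upload is meant to be used (all values are examples):

import org.csource.fastdfs.*;

public class SlaveUpload {
    public static void main(String[] args) throws Exception {
        ClientGlobal.init("fdfs_client.conf");             // example config path
        TrackerServer trackerServer = new TrackerClient().getConnection();
        StorageClient storageClient = new StorageClient(trackerServer, null);

        // 1) Upload the master file; FastDFS generates its name.
        String[] master = storageClient.upload_file("hello".getBytes(), "txt", null);
        String groupName = master[0], masterFilename = master[1];

        // 2) Upload a slave file; its name is master name + prefix + extension.
        String[] slave = storageClient.upload_file(groupName, masterFilename,
                "_thumb", "hi".getBytes(), "txt", null);
        System.out.println(slave[0] + "/" + slave[1]);
    }
}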

Error using TrackerClient's listStorage method

When fastdfs-client-java v1.25 talks to fastdfs v5.03, TrackerClient's listStorage method fails.

It turned out to be a wire-protocol mismatch: the fields of StructStorageStat do not line up with the parameters in the binary message the server returns.
The following four fields of StructStorageStat are not present in the binary message returned by fastdfs v5.03:

protected long connectionAllocCount;
protected long connectionCurrentCount;
protected long connectionMaxCount;
protected boolean ifTrunkServer;

Deleting these four fields makes it work.

The error is as follows:

java.io.IOException: byte array length: 1200 is invalid!
at org.csource.fastdfs.ProtoStructDecoder.decode(ProtoStructDecoder.java:38)
at org.csource.fastdfs.TrackerClient.listStorages(TrackerClient.java:752)
at org.csource.fastdfs.TrackerClient.listStorages(TrackerClient.java:661)

Suggestion: enhance configuration file loading

At present the configuration file can only be loaded from a local disk path. Please consider also supporting configuration files under the project's src directory; the resource loading in spring-core's io package could serve as a reference. I took the liberty of making a small change to the loadFromFile method of the IniFileReader class, as follows:

//fReader = new FileReader(conf_filename);
ResourceLoader resourceLoader = new DefaultResourceLoader();
InputStream in = resourceLoader.getResource(conf_filename).getInputStream();
buffReader = new BufferedReader(new InputStreamReader(in));

Note: this pulls in Spring's spring-core package.

Thanks!
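
For comparison, the same effect is possible without the spring-core dependency by trying the classpath first and falling back to the file system; a sketch:

import java.io.*;

public class ConfigLoading {
    static BufferedReader open(String confFilename) throws IOException {
        // Try the classpath first (e.g. a file under src/main/resources).
        InputStream in = Thread.currentThread().getContextClassLoader()
                .getResourceAsStream(confFilename);
        if (in == null) {
            in = new FileInputStream(confFilename); // fall back to a disk path
        }
        return new BufferedReader(new InputStreamReader(in));
    }
}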

JDK version problem

java.lang.UnsupportedClassVersionError: org/csource/fastdfs/ClientGlobal : Unsupported major.minor version 52.0 (unable to load class org.csource.fastdfs.ClientGlobal)

java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

Under high concurrency, the connection breaks: java.net.SocketException: Broken pipe (Write failed)

java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:143)
at org.csource.fastdfs.ProtoCommon.closeSocket(ProtoCommon.java:293)
at org.csource.fastdfs.TrackerServer.close(TrackerServer.java:71)
at org.csource.fastdfs.TrackerClient.getStoreStorage(TrackerClient.java:148)
at org.csource.fastdfs.StorageClient.newWritableStorageConnection(StorageClient.java:1632)
at org.csource.fastdfs.StorageClient.do_upload_file(StorageClient.java:644)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:167)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:185)
at org.csource.demo.FastClient.upload(FastClient.java:82)
at com.scmd.upload.HttpUploadServerHandler.writeHttpData(HttpUploadServerHandler.java:244)
at com.scmd.upload.HttpUploadServerHandler.readHttpDataChunkByChunk(HttpUploadServerHandler.java:182)
at com.scmd.upload.HttpUploadServerHandler.channelRead0(HttpUploadServerHandler.java:141)
at com.scmd.upload.HttpUploadServerHandler.channelRead0(HttpUploadServerHandler.java:56)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.net.SocketInputStream.read(SocketInputStream.java:127)
at org.csource.fastdfs.ProtoCommon.recvHeader(ProtoCommon.java:168)
at org.csource.fastdfs.ProtoCommon.recvPackage(ProtoCommon.java:201)
at org.csource.fastdfs.TrackerClient.getStoreStorage(TrackerClient.java:130)
at org.csource.fastdfs.StorageClient.newWritableStorageConnection(StorageClient.java:1632)
at org.csource.fastdfs.StorageClient.do_upload_file(StorageClient.java:644)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:167)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:185)
at org.csource.demo.FastClient.upload(FastClient.java:82)
at com.scmd.upload.HttpUploadServerHandler.writeHttpData(HttpUploadServerHandler.java:244)
at com.scmd.upload.HttpUploadServerHandler.readHttpDataChunkByChunk(HttpUploadServerHandler.java:182)
at com.scmd.upload.HttpUploadServerHandler.channelRead0(HttpUploadServerHandler.java:141)
at com.scmd.upload.HttpUploadServerHandler.channelRead0(HttpUploadServerHandler.java:56)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)

recv package size -1 != 10

java.io.IOException: recv package size -1 != 10
at org.csource.fastdfs.ProtoCommon.recvHeader(ProtoCommon.java:169)
at org.csource.fastdfs.ProtoCommon.recvPackage(ProtoCommon.java:201)
at org.csource.fastdfs.TrackerClient.getStoreStorage(TrackerClient.java:130)
at org.csource.fastdfs.StorageClient.newWritableStorageConnection(StorageClient.java:1627)

What is going on here? (A received package size of -1 means the read hit end-of-stream: the server closed the connection before sending a response header.)
(screenshot attached)

The API documentation is unclear

In the StructStorageStat class:
getLastSourceUpdate()
getLastSyncedTimestamp()
getLastSyncUpdate()
What is the difference between these three methods?

recv cmd: 32 is not correct, expect cmd: 100

Uploading works fine from the server itself, but fails with java client 1.27. What could be the reason? (The client configuration file loads successfully.)

java.io.IOException: recv cmd: 32 is not correct, expect cmd: 100
    at org.csource.fastdfs.ProtoCommon.recvHeader(ProtoCommon.java:173)
    at org.csource.fastdfs.ProtoCommon.recvPackage(ProtoCommon.java:201)
    at org.csource.fastdfs.TrackerClient.getStoreStorage(TrackerClient.java:130)
    at org.csource.fastdfs.StorageClient.newWritableStorageConnection(StorageClient.java:1627)
    at org.csource.fastdfs.StorageClient.do_upload_file(StorageClient.java:639)
    at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:162)
    at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:180)
    at FastDFS.uploadFile(FastDFS.java:105)
    at FastDFS.main(FastDFS.java:265)
java.lang.NullPointerException
    at FastDFS.main(FastDFS.java:266)

Problems do not raise exceptions

For example, https://github.com/happyfish100/fastdfs-client-java/blob/master/src/main/java/org/csource/fastdfs/TrackerClient.java#L736
When the server cannot be reached, the exception is printed straight to the terminal instead of being thrown, which leaves users confused. A serious project does not want stack traces printed on the terminal; it wants them written to the appropriate log file.

try {
    trackerServer = trackerGroup.getConnection(serverIndex);
} catch (IOException ex) {
    ex.printStackTrace(System.err);
    this.errno = ProtoCommon.ECONNREFUSED;
    return false;
}
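
A sketch of the change being requested, assuming an SLF4J logger is available on the class: the failure still sets this.errno, but the stack trace goes to the configured log instead of System.err.

try {
    trackerServer = trackerGroup.getConnection(serverIndex);
} catch (IOException ex) {
    // Route the failure to the application log instead of the terminal.
    logger.error("cannot connect to tracker #" + serverIndex, ex);
    this.errno = ProtoCommon.ECONNREFUSED;
    return false;
}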

Upload failure

Setup: 1 tracker, 2 storaged (in the same group), and 2 DHT nodes.

Uploading the same file repeatedly through javaclient.jar, every even-numbered attempt fails,
while alternating uploads of N different files all succeed.
Error reported: error code: 2 (errno 2 is ENOENT, "no such file or directory")

Uploading from the command line with /usr/bin/fdfs_upload_file /etc/fdfs/client.conf test.png works fine.

What could the problem be?

Getting the file stream when uploading through a web page

When uploading a file from a JSP page, the backend servlet obtains the stream via InputStream inputStream = request.getInputStream(). After uploading to FastDFS, the image displays incompletely. Does anyone have a solution for this?
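
request.getInputStream() returns the raw multipart body, including the boundary lines and part headers, so writing it to FastDFS verbatim corrupts the image. The request has to be parsed as multipart first; a sketch with the Servlet 3.0 Part API (the form field name "file" is an example):

import java.io.InputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.Part;

public class UploadHelper {
    // Returns only the file bytes, with multipart boundaries stripped.
    static byte[] readFilePart(HttpServletRequest request) throws Exception {
        Part part = request.getPart("file");            // requires @MultipartConfig on the servlet
        try (InputStream in = part.getInputStream()) {  // the file content only
            byte[] buf = new byte[(int) part.getSize()];
            int off = 0, n;
            while (off < buf.length && (n = in.read(buf, off, buf.length - off)) > 0) {
                off += n;
            }
            return buf;
        }
    }
}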

About the tracker

What is the relationship among trackerServer, trackerGroup, and trackerClient?
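
Roughly (a sketch against the pre-1.29 API): TrackerGroup is the list of tracker addresses parsed from the configuration, TrackerClient picks a tracker from that group and issues queries, and TrackerServer represents one connection to one tracker.

import org.csource.fastdfs.*;

public class TrackerRelationship {
    public static void main(String[] args) throws Exception {
        ClientGlobal.init("fdfs_client.conf");             // example config path
        TrackerGroup group = ClientGlobal.g_tracker_group; // all configured tracker_server entries
        TrackerClient client = new TrackerClient(group);   // equivalent to new TrackerClient()
        TrackerServer server = client.getConnection();     // a socket to one tracker in the group
        System.out.println(server.getInetSocketAddress());
    }
}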

storaged log: response status 2 != 0

When file upload concurrency is high, uploads fail and the server reports the following errors:
[2018-11-14 19:29:16] WARNING - file: storage_service.c, line: 7155, client ip: 10.1.30.11, logic file: M00/15/39/CgEeC1vsBwSAA36zAAZArCzCh5855.jpeg not exist
[2018-11-14 19:35:03] ERROR - file: tracker_proto.c, line: 48, server: 10.1.30.11:23000, response status 2 != 0
Any ideas?
