happyfish100 / fastdfs-client-java
FastDFS java client SDK
License: BSD 3-Clause "New" or "Revised" License
What is causing this? Any help appreciated.
The error is thrown during upload — what does the head[8] == 100 check verify? Is it part of the protocol header?
After integrating fastdfs-client, the application is deployed on Alibaba Cloud while the FastDFS servers sit on an internal network. fdfs.tracker-list is configured with the public IP, yet the storage address returned on port 23000 is still the internal IP.
com.github.tobato.fastdfs.proto.AbstractFdfsCommand.send(AbstractFdfsCommand.java:74) - sending request.. protocol head: ProtoHead [contentLength=0, cmd=101, status=0]
[17:16:41:037] [DEBUG] - com.github.tobato.fastdfs.proto.AbstractFdfsCommand.send(AbstractFdfsCommand.java:75) - request parameters: []
[17:16:41:050] [DEBUG] - com.github.tobato.fastdfs.proto.AbstractFdfsCommand.receive(AbstractFdfsCommand.java:99) - server returned protocol head ProtoHead [contentLength=40, cmd=100, status=0]
[17:16:41:053] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:201) - dump class=com.github.tobato.fastdfs.domain.StorageNode
[17:16:41:053] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:202) - ----------------------------------------
[17:16:41:054] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:204) - FieldMataData [field=groupName, index=0, max=16, size=16, offsize=0]
[17:16:41:054] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:204) - FieldMataData [field=ip, index=1, max=15, size=15, offsize=16]
[17:16:41:054] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:204) - FieldMataData [field=port, index=2, max=0, size=8, offsize=31]
[17:16:41:054] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.ObjectMataData.dumpObjectMataData(ObjectMataData.java:204) - FieldMataData [field=storeIndex, index=3, max=0, size=1, offsize=39]
[17:16:41:055] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.FdfsParamMapper.mapByIndex(FdfsParamMapper.java:94) - setting value FieldMataData [field=groupName, index=0, max=16, size=16, offsize=0]group1
[17:16:41:087] [DEBUG] - org.apache.commons.beanutils.ConvertUtilsBean.convert(ConvertUtilsBean.java:418) - Convert string 'group1' to class 'java.lang.String'
[17:16:41:089] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.FdfsParamMapper.mapByIndex(FdfsParamMapper.java:94) - setting value FieldMataData [field=ip, index=1, max=15, size=15, offsize=16]192.168.99.223
[17:16:41:089] [DEBUG] - org.apache.commons.beanutils.ConvertUtilsBean.convert(ConvertUtilsBean.java:418) - Convert string '192.168.99.223' to class 'java.lang.String'
[17:16:41:089] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.FdfsParamMapper.mapByIndex(FdfsParamMapper.java:94) - setting value FieldMataData [field=port, index=2, max=0, size=8, offsize=31]23000
[17:16:41:090] [DEBUG] - org.apache.commons.beanutils.ConvertUtilsBean.convert(ConvertUtilsBean.java:418) - Convert string '23000' to class 'int'
[17:16:41:090] [DEBUG] - com.github.tobato.fastdfs.proto.mapper.FdfsParamMapper.mapByIndex(FdfsParamMapper.java:94) - setting value FieldMataData [field=storeIndex, index=3, max=0, size=1, offsize=39]0
[17:16:41:090] [DEBUG] - org.apache.commons.beanutils.ConvertUtilsBean.convert(ConvertUtilsBean.java:418) - Convert string '0' to class 'byte'
[17:16:41:092] [DEBUG] - com.github.tobato.fastdfs.conn.DefaultConnection.&lt;init&gt;(DefaultConnection.java:48) - connect to /192.168.99.223:23000 soTimeout=3000 connectTimeout=6000
[17:16:42:582] [DEBUG] - com.alibaba.dubbo.remoting.exchange.support.header.HeartbeatHandler.received(HeartbeatHandler.java:74) - [DUBBO] Received heartbeat from remote channel /113.118.199.215:62212, cause: The channel has no data-transmission exceeds a heartbeat period: 60000ms, dubbo version: 2.6.2, current host: 47.89.31.99
[17:16:44:568] [DEBUG] - org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:742) - Got ping response for sessionid: 0x1668201744503d6 after 13ms
[17:16:47:094] [ERROR] - com.bessky.erp.logistics.api.tengjia.GetTengJiaCall.submitOrder(GetTengJiaCall.java:168) - unable to obtain a server connection: can't create connection to/192.168.99.223:23000
With multiple StorageServers, how do you specify the group and path for an upload via the API?
StorageServer storageServer = trackerClient.getStoreStorage(trackerServer);
When execution reaches the line above, a timeout error is thrown:
java.net.SocketTimeoutException: connect timed out
I checked the FastDFS server IP and port; both are fine, and the trackerClient and trackerServer objects are obtained successfully.
The full code:
ClientGlobal.init(CONFIG_FILENAME);
trackerClient = new TrackerClient();
trackerServer = trackerClient.getConnection();
if (trackerServer == null) {
throw new IllegalStateException("getConnection return null");
}
StorageServer storageServer = trackerClient.getStoreStorage(trackerServer);
if (storageServer == null) {
throw new IllegalStateException("getStoreStorage return null");
}
storageClient = new StorageClient1(trackerServer, storageServer);
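On the question of pinning an upload to a particular group: TrackerClient exposes a group-aware overload of getStoreStorage. The sketch below assumes the org.csource 1.2x API (verify the exact signatures against your client version); note that the storage path and file name are always generated by the server — only the group can be chosen this way.

```java
// Sketch (org.csource 1.2x API assumed): ask the tracker for a storage
// server inside a specific group, then upload through it.
TrackerClient trackerClient = new TrackerClient();
TrackerServer trackerServer = trackerClient.getConnection();
// Overload that takes the target group name:
StorageServer storageServer = trackerClient.getStoreStorage(trackerServer, "group2");
StorageClient1 storageClient = new StorageClient1(trackerServer, storageServer);
// Returns the full file ID ("group2/M00/...") on success, null on failure:
String fileId = storageClient.upload_file1(fileBytes, "jpg", null);
```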
Is there an API for batch upload and download?
I know this request runs against FastDFS's design, but I would still like large files to be uploadable as a stream.
Downloading a single file of more than a few hundred MB throws an out-of-memory error. Is this a bug, or are larger downloads simply unsupported?
Exception in thread "pool-1-thread-1" java.lang.OutOfMemoryError: Java heap space
at org.csource.fastdfs.ProtoCommon.recvPackage(ProtoCommon.java:256)
at org.csource.fastdfs.StorageClient.download_file(StorageClient.java:1412)
at org.csource.fastdfs.StorageClient.download_file(StorageClient.java:1387)
at com.common.impl.FDFSBaseDaoImpl.download(FDFSBaseDaoImpl.java:136)
at com.service.impl.FDFSServiceImpl.download(FDFSServiceImpl.java:25)
at com.multi.DownloadThread.run(DownloadThread.java:20)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "pool-1-thread-2" java.lang.OutOfMemoryError: Java heap space
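The OutOfMemoryError above comes from the download_file overloads that return the whole file as a byte[] (recvPackage buffers the entire body in memory). If I read the org.csource sources correctly, there is also a callback-based overload that delivers the body in chunks; a sketch assuming that API (class and method names taken from the 1.2x client — treat them as assumptions to verify):

```java
// Sketch: stream a large download to disk instead of building a byte[].
// DownloadStream wraps an OutputStream and implements DownloadCallback,
// receiving the body chunk by chunk (names assumed from org.csource 1.2x).
try (OutputStream out = new FileOutputStream("/tmp/big.bin")) {
    int errno = storageClient.download_file(groupName, remoteFilename,
            new DownloadStream(out));   // 0 means success
    if (errno != 0) {
        throw new IOException("download failed, errno=" + errno);
    }
}
```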
Suppose there are two groups, group1 and group2, and a file stored in group2. When deleting it with storageClient.delete_file(groupName, remote_filename), even though groupName is set to group2, the first call still goes to group1, fails to find the file, and returns error code 22. Running it again hits group2 and succeeds; a third call goes back to group1 again.
It is puzzling that execution keeps alternating between groups even though the group is explicitly specified — is this a bug?
The fdfs_delete_file tool that ships with the Linux install does not show this problem.
fastdfs: 5.11
java-client: 1.27
public ResultVO deletes(@RequestBody List<String> fileUrls) {
    ResultVO resultVO = new ResultVO();
    // List<String> files = fileUrls.get("fileUrls");
    if (fileUrls == null || fileUrls.size() == 0) {
        logger.warn("file paths must not be empty");
        resultVO.setErr("file paths must not be empty");
        resultVO.setFlag(false);
        return resultVO;
    }
    TrackerClient trackerClient = null;
    TrackerServer trackerServer = null;
    StorageServer storageServer = null;
    StorageClient storageClient = null;
    int result = -1;
    try {
        // String filePath = new ClassPathResource("fdfs_client.conf").getFile().getAbsolutePath();
        ClientGlobal.init(fastdfsConfig);
        trackerClient = new TrackerClient();
        trackerServer = trackerClient.getConnection();
        if (trackerServer != null) {
            storageServer = trackerClient.getStoreStorage(trackerServer);
            if (storageServer != null) {
                storageClient = new StorageClient(trackerServer, storageServer);
                if (storageClient != null) {
                    for (int i = 0; i < fileUrls.size(); i++) {
                        String groupName = fileUrls.get(i).substring(0, fileUrls.get(i).indexOf('/'));
                        String remoteFile = fileUrls.get(i).substring(fileUrls.get(i).indexOf('/') + 1);
                        logger.debug(groupName);
                        logger.debug(remoteFile);
                        result = storageClient.delete_file(groupName, remoteFile);
                        if (result != 0) {
                            if (resultVO.getPath() != null) {
                                resultVO.setPath(resultVO.getPath() + ";" + fileUrls.get(i));
                            } else {
                                resultVO.setPath(fileUrls.get(i));
                            }
                        }
                    }
                    if (resultVO.getPath() != null) {
                        resultVO.setCount(-1);
                        resultVO.setFlag(false);
                    } else {
                        resultVO.setCount(result);
                        resultVO.setFlag(true);
                    }
                } else {
                    throw new IOException("failed to create storageClient");
                }
            } else {
                throw new IOException("failed to connect to storageServer");
            }
        } else {
            throw new IOException("failed to connect to trackerServer");
        }
    } catch (Exception err) {
        logger.error(err.getLocalizedMessage());
        resultVO.setFlag(false);
        resultVO.setCount(-1);
        resultVO.setErr(err.getLocalizedMessage());
    } finally {
        close(trackerServer, storageServer);
    }
    return resultVO;
}

private void close(TrackerServer trackerServer, StorageServer storageServer) {
    try {
        if (storageServer != null) {
            storageServer.close();
        }
        if (trackerServer != null) {
            trackerServer.close();
        }
    } catch (Exception err) {
        logger.error("failed to close connections: " + err.getLocalizedMessage());
    }
}
Every time I try a delete, the first call just returns 22 with no explanation of why; the second call then succeeds. I don't understand this.
I also found: ERROR - file: storage_service.c, line: 7255, client ip:192.168.0.3, group_name: group1 not correct, should be: group2
But the file I am deleting is on group1, and 192.168.0.3 is the group1 machine.
When I delete a file on group2, it conversely says the group should be group1.
2019-05-08 10:10:13.137 fastdfs [http-nio-18787-exec-2] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:168 -group1
2019-05-08 10:10:13.138 fastdfs [http-nio-18787-exec-2] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:169 -M00/00/00/wKgA6lzRJbyAMpRrAAFPhZzrGbc49..jpg
2019-05-08 10:10:13.139 fastdfs [http-nio-18787-exec-2] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:171 -22
2019-05-08 10:10:28.007 fastdfs [http-nio-18787-exec-4] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:168 -group1
2019-05-08 10:10:28.007 fastdfs [http-nio-18787-exec-4] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:169 -M00/00/00/wKgA6lzRJbyAMpRrAAFPhZzrGbc49..jpg
2019-05-08 10:10:28.008 fastdfs [http-nio-18787-exec-4] INFO c.j.cloudfastdfs.controller.FileUploadController FileUploadController.java:171 -0
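As an aside, the group/remote-filename split used in the delete code is just a split at the first slash; a small self-contained helper (the class name is mine, not part of the SDK) makes the parsing explicit and validates the input:

```java
// Minimal helper for splitting a FastDFS file ID into group name and
// remote filename (this helper class is hypothetical, not an SDK class).
public class FileId {
    /** "group1/M00/00/00/x.jpg" -> ["group1", "M00/00/00/x.jpg"] */
    public static String[] split(String fileId) {
        int slash = fileId.indexOf('/');
        if (slash <= 0 || slash == fileId.length() - 1) {
            throw new IllegalArgumentException("not a FastDFS file id: " + fileId);
        }
        return new String[] { fileId.substring(0, slash), fileId.substring(slash + 1) };
    }

    public static void main(String[] args) {
        String[] parts = split("group1/M00/00/00/wKgA6lzRJbyAMpRrAAFPhZzrGbc49..jpg");
        System.out.println(parts[0] + " | " + parts[1]);
    }
}
```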
java.net.SocketException: Socket closed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at org.csource.fastdfs.ProtoCommon.recvHeader(ProtoCommon.java:168)
at org.csource.fastdfs.ProtoCommon.recvPackage(ProtoCommon.java:201)
at org.csource.fastdfs.TrackerClient.getStoreStorage(TrackerClient.java:130)
at org.csource.fastdfs.StorageClient.newWritableStorageConnection(StorageClient.java:1627)
at org.csource.fastdfs.StorageClient.do_upload_file(StorageClient.java:639)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:162)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:180)
at org.csource.fastdfs.StorageClient1.upload_file1(StorageClient1.java:103)
As the title says.
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method) ~[na:1.8.0_172]
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111) ~[na:1.8.0_172]
at java.net.SocketOutputStream.write(SocketOutputStream.java:143) ~[na:1.8.0_172]
How can this be resolved?
storageClient.upload_file(fileName, extName, metas) — the upload_file call never works for me. Does anyone know why?
During stress testing, any concurrency at all triggers a NullPointerException:
java.lang.NullPointerException at org.csource.fastdfs.StorageClient.do_upload_file(StorageClient.java:862) at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:208) at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:226)
Tracing the code, this.storageServer is an instance field, so it is shared under multithreaded access.
The logic assigns this.storageServer on method entry and releases it on exit; if one thread releases it just as another thread enters the method, the NPE results.
There are two possible fixes:
On my machine, adding synchronized made the problem disappear under the same stress test. Has anyone else hit this?
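Besides synchronized, the other common client-side workaround is to not share a StorageClient between threads at all, since the instance keeps mutable state (this.storageServer) in a field. A sketch assuming the org.csource 1.2x API (connection pooling is out of scope here):

```java
// Sketch: one StorageClient per call instead of a shared instance.
// (API names assumed from org.csource fastdfs-client-java 1.2x.)
public String[] upload(byte[] bytes, String ext) throws Exception {
    TrackerClient trackerClient = new TrackerClient();
    TrackerServer trackerServer = trackerClient.getConnection();
    try {
        // null storage server: the client asks the tracker itself.
        StorageClient storageClient = new StorageClient(trackerServer, null);
        return storageClient.upload_file(bytes, ext, null);
    } finally {
        trackerServer.close();
    }
}
```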
The code runs fine in the IDE, but fails after packaging into a jar.
Error:
Caused by: org.csource.common.MyException: item "tracker_server" in file:/C:/Users/Lenovo/Desktop/CrowdFundingParentProject.jar!/BOOT-INF/classes!/tracker.conf not found
At runtime the exception FastDFSException: get storage failed suddenly appears.
The storage and tracker server logs show no errors.
Restarting the Tomcat hosting the client restores normal operation,
but after running for a while the error comes back.
What could cause this? I have no idea where to start.
As the title says: searching the Maven Central repository (http://search.maven.org/) no longer returns this jar (fastdfs-client-java), though it used to be there.
Hi — I have been working on FastDFS upload/download recently and wrote a global exception handler, but for some reason it cannot catch your org.csource.common.MyException: getStoreStorage fail, errno code: 22; the program just keeps running past it. Could you explain why?
The client defaults to blocking (OIO) sockets with no connection-pool support; NIO, AIO (Java 7), or Netty could be used to improve performance.
Under heavy concurrency the backend logs fill with recv body length: 60 is not correct, expect length: 0 — has anyone run into this?
As the title says — publishing there would be convenient for everyone and keep the latest version in sync; the release process is described at http://www.arccode.net/publish-artifact-to-maven-central-repository.html
What is User for, and where is it defined?
When uploading a file, I need to specify the file name and directory rather than have them randomly generated.
Does the existing client upload API support this? If so, how is it called? (Forgive my ignorance — I could not find API documentation.)
Looking at the test code in the sources, this interface might be what I need:
public String[] upload_file(String group_name, String master_filename, String prefix_name, byte[] file_buff, String file_ext_name, NameValuePair[] meta_list)
Calling it always returns null, with no exception thrown and no hint of what went wrong.
When debugging, a "socket connection error" exception can be observed while the data is being written.
Has anyone passing by run into this? Help appreciated.
Requesting an API for downloading a slice (byte range) of a file.
When fastdfs-client-java v1.25 talks to fastdfs v5.03, the listStorage method in TrackerClient fails.
It turns out to be a protocol mismatch: the fields of StructStorageStat do not line up with the parameters in the binary message returned by the server.
The following four fields of StructStorageStat are absent from the binary message returned by fastdfs v5.03:
protected long connectionAllocCount;
protected long connectionCurrentCount;
protected long connectionMaxCount;
protected boolean ifTrunkServer;
Deleting these four fields fixes it.
The error was:
java.io.IOException: byte array length: 1200 is invalid!
at org.csource.fastdfs.ProtoStructDecoder.decode(ProtoStructDecoder.java:38)
at org.csource.fastdfs.TrackerClient.listStorages(TrackerClient.java:752)
at org.csource.fastdfs.TrackerClient.listStorages(TrackerClient.java:661)
The problem is as the title states.
Looking at the code:
byte[] buff = base64.decodeAuto(remote_filename.substring(ProtoCommon.FDFS_FILE_PATH_LEN,
ProtoCommon.FDFS_FILE_PATH_LEN + ProtoCommon.FDFS_FILENAME_BASE64_LENGTH));
Here FDFS_FILENAME_BASE64_LENGTH = 27; testing shows that changing it to 28 produces the correct result.
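A possible reason both 27 and 28 look plausible: Base64 output length depends on whether padding is included. Encoding n bytes yields 4*ceil(n/3) characters with padding but only ceil(4n/3) without, so a 20-byte payload (the payload size here is my illustrative assumption, not taken from the FastDFS sources) encodes to 28 characters padded and 27 unpadded:

```java
import java.util.Base64;

public class B64Len {
    public static void main(String[] args) {
        byte[] payload = new byte[20];   // illustrative payload size
        // With padding: 4 * ceil(20/3) = 28 characters.
        int padded = Base64.getEncoder().encodeToString(payload).length();
        // Without padding: ceil(4 * 20 / 3) = 27 characters.
        int unpadded = Base64.getEncoder().withoutPadding().encodeToString(payload).length();
        System.out.println(padded + " vs " + unpadded);   // 28 vs 27
    }
}
```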
How can I enumerate all files under a given FastDFS directory?
Change the download location in the test case to a different drive.
Currently, the configuration file can only be loaded from a local disk path; please add support for reading it from the project's src (classpath). The resource-loading code in spring-core's io package is a good reference. I took the liberty of making a small change to the loadFromFile method of IniFileReader, as follows:
//fReader = new FileReader(conf_filename);
ResourceLoader resourceLoader = new DefaultResourceLoader();
InputStream in = resourceLoader.getResource(conf_filename).getInputStream();
buffReader = new BufferedReader(new InputStreamReader(in));
Note: this pulls in Spring's spring-core package.
Thanks!
With 50 threads uploading files concurrently, I alternately get Address already in use: connect, Socket closed, and Socket is not connected. Has anyone solved this? Is concurrency unsupported?
I set up a FastDFS cluster across two VMs. Uploading from the command line inside the VMs works, and the files are reachable from a local browser, but the test cases in the sources fail with getStoreStorage fail, errno code: 2. Please advise, thanks.
java.lang.UnsupportedClassVersionError: org/csource/fastdfs/ClientGlobal : Unsupported major.minor version 52.0 (unable to load class org.csource.fastdfs.ClientGlobal)
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:143)
at org.csource.fastdfs.ProtoCommon.closeSocket(ProtoCommon.java:293)
at org.csource.fastdfs.TrackerServer.close(TrackerServer.java:71)
at org.csource.fastdfs.TrackerClient.getStoreStorage(TrackerClient.java:148)
at org.csource.fastdfs.StorageClient.newWritableStorageConnection(StorageClient.java:1632)
at org.csource.fastdfs.StorageClient.do_upload_file(StorageClient.java:644)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:167)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:185)
at org.csource.demo.FastClient.upload(FastClient.java:82)
at com.scmd.upload.HttpUploadServerHandler.writeHttpData(HttpUploadServerHandler.java:244)
at com.scmd.upload.HttpUploadServerHandler.readHttpDataChunkByChunk(HttpUploadServerHandler.java:182)
at com.scmd.upload.HttpUploadServerHandler.channelRead0(HttpUploadServerHandler.java:141)
at com.scmd.upload.HttpUploadServerHandler.channelRead0(HttpUploadServerHandler.java:56)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.net.SocketInputStream.read(SocketInputStream.java:127)
at org.csource.fastdfs.ProtoCommon.recvHeader(ProtoCommon.java:168)
at org.csource.fastdfs.ProtoCommon.recvPackage(ProtoCommon.java:201)
at org.csource.fastdfs.TrackerClient.getStoreStorage(TrackerClient.java:130)
at org.csource.fastdfs.StorageClient.newWritableStorageConnection(StorageClient.java:1632)
at org.csource.fastdfs.StorageClient.do_upload_file(StorageClient.java:644)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:167)
at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:185)
at org.csource.demo.FastClient.upload(FastClient.java:82)
at com.scmd.upload.HttpUploadServerHandler.writeHttpData(HttpUploadServerHandler.java:244)
at com.scmd.upload.HttpUploadServerHandler.readHttpDataChunkByChunk(HttpUploadServerHandler.java:182)
at com.scmd.upload.HttpUploadServerHandler.channelRead0(HttpUploadServerHandler.java:141)
at com.scmd.upload.HttpUploadServerHandler.channelRead0(HttpUploadServerHandler.java:56)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
java.io.IOException: recv package size -1 != 10
at org.csource.fastdfs.ProtoCommon.recvHeader(ProtoCommon.java:169)
at org.csource.fastdfs.ProtoCommon.recvPackage(ProtoCommon.java:201)
at org.csource.fastdfs.TrackerClient.getStoreStorage(TrackerClient.java:130)
at org.csource.fastdfs.StorageClient.newWritableStorageConnection(StorageClient.java:1627)
Inside the class StructStorageStat,
getLastSourceUpdate()
getLastSyncedTimestamp()
getLastSyncUpdate()
what is the difference between these three methods?
Uploading works fine on the server side (command line), but fails with java client 1.27 — what could be the reason? (The client configuration file loads successfully.)
java.io.IOException: recv cmd: 32 is not correct, expect cmd: 100
    at org.csource.fastdfs.ProtoCommon.recvHeader(ProtoCommon.java:173)
    at org.csource.fastdfs.ProtoCommon.recvPackage(ProtoCommon.java:201)
    at org.csource.fastdfs.TrackerClient.getStoreStorage(TrackerClient.java:130)
    at org.csource.fastdfs.StorageClient.newWritableStorageConnection(StorageClient.java:1627)
    at org.csource.fastdfs.StorageClient.do_upload_file(StorageClient.java:639)
    at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:162)
    at org.csource.fastdfs.StorageClient.upload_file(StorageClient.java:180)
    at FastDFS.uploadFile(FastDFS.java:105)
    at FastDFS.main(FastDFS.java:265)
java.lang.NullPointerException
    at FastDFS.main(FastDFS.java:266)
recv cmd: 0 is not correct, expect cmd: 100
upload_file1
StorageClient & StorageClient
e.printStackTrace
MyException
For example: https://github.com/happyfish100/fastdfs-client-java/blob/master/src/main/java/org/csource/fastdfs/TrackerClient.java#L736
If the server cannot be reached, the exception is printed straight to the terminal instead of being thrown, which confuses users; besides, a serious project does not want stack traces printed to the terminal — it wants them routed to its log files.
try {
    trackerServer = trackerGroup.getConnection(serverIndex);
} catch (IOException ex) {
    ex.printStackTrace(System.err);
    this.errno = ProtoCommon.ECONNREFUSED;
    return false;
}
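Until the client itself changes, one way to get the behavior this issue asks for is to wrap the call at the application boundary and rethrow, so failures reach the application's own logging instead of stderr. A minimal self-contained sketch (all names here are hypothetical, not SDK API):

```java
// Minimal "wrap and rethrow" pattern so connection failures surface to the
// caller instead of being printed to stderr (names are hypothetical).
import java.io.IOException;
import java.io.UncheckedIOException;

public class ConnectOrThrow {
    interface Connector { void connect() throws IOException; }

    static void connectOrThrow(Connector c) {
        try {
            c.connect();
        } catch (IOException ex) {
            // Let the application's exception handler / logger decide:
            throw new UncheckedIOException("tracker connection failed", ex);
        }
    }

    public static void main(String[] args) {
        try {
            connectOrThrow(() -> { throw new IOException("ECONNREFUSED"); });
        } catch (UncheckedIOException expected) {
            System.out.println("caught: " + expected.getMessage());
        }
    }
}
```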
Setup: 1 tracker, 2 storage servers (same group), and 2 DHT nodes.
Uploading the same file via the java client jar always fails on every even-numbered attempt,
yet N different files uploaded alternately all succeed.
The error is: error code: 2
Uploading via the command line, /usr/bin/fdfs_upload_file /etc/fdfs/client.conf test.png, works fine.
What could the problem be?
Why can't this be pulled directly from the central repository? This copy is one I built into my own local repository.
Does FastDFS have a batch zip-download method for convenient invocation?
When uploading a file via JSP, the backend servlet obtains the stream with InputStream inputStream = request.getInputStream();
after uploading to FastDFS, the image displays incompletely. Does anyone have a solution for this?
What is the relationship among trackerServer, trackerGroup, and trackerClient?
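A common cause of truncated uploads from request.getInputStream() is calling read(byte[]) once and assuming the buffer was filled; servlet input streams routinely return partial reads. Draining the stream to completion before handing the bytes to the upload client avoids this. A self-contained sketch (the helper name is mine):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadFully {
    /** Drain the stream completely; a single read(byte[]) may stop short. */
    public static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Simulate a servlet-like stream that hands out at most 7 bytes per read:
        final byte[] data = new byte[10_000];
        InputStream drip = new InputStream() {
            int pos = 0;
            @Override public int read() { return pos < data.length ? data[pos++] & 0xff : -1; }
            @Override public int read(byte[] b, int off, int len) {
                if (pos >= data.length) return -1;
                int n = Math.min(7, Math.min(len, data.length - pos));
                System.arraycopy(data, pos, b, off, n);
                pos += n;
                return n;
            }
        };
        System.out.println(readAll(drip).length);   // 10000
    }
}
```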
When upload concurrency is high, uploads fail and the server side logs the following errors:
[2018-11-14 19:29:16] WARNING - file: storage_service.c, line: 7155, client ip: 10.1.30.11, logic file: M00/15/39/CgEeC1vsBwSAA36zAAZArCzCh5855.jpeg not exist
[2018-11-14 19:35:03] ERROR - file: tracker_proto.c, line: 48, server: 10.1.30.11:23000, response status 2 != 0
Any help appreciated.