Comments (5)
Hi, @minmincute912. Thanks for your interest in FL-bench.
With alpha == 0.1 or alpha == 0.5, the output performance is always lower than with alpha == 100.
The fact is that label heterogeneity is negatively correlated with alpha. You can verify this by splitting the same dataset with the same arguments except -a, and checking the class_distribution.pdf that is auto-generated in the data/<dataset> folder. You will find that as alpha grows, label heterogeneity decreases. Back to the Elastic Aggregation results: I think it is normal for a traditional FL method to perform better when label heterogeneity is weaker.
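To see this correlation concretely, here is a small standalone sketch (plain NumPy, not FL-bench's actual splitting code) that samples per-client class proportions from a Dirichlet(alpha) prior, which is the common way such non-IID label splits are generated, and measures how skewed they are:

```python
import numpy as np

# Standalone illustration (NOT FL-bench's code): per-client label
# proportions drawn from a Dirichlet(alpha) prior, as in common
# non-IID FL data splits. Smaller alpha -> spikier distributions.
rng = np.random.default_rng(0)
num_clients, num_classes = 10, 10

def label_skew(alpha: float) -> float:
    # Each row is one client's class-proportion vector ~ Dir(alpha).
    props = rng.dirichlet([alpha] * num_classes, size=num_clients)
    # Mean per-client std across classes: near 0 for a uniform
    # split, large when each client concentrates on a few classes.
    return float(props.std(axis=1).mean())

print(f"alpha=0.1 skew: {label_skew(0.1):.3f}")  # high heterogeneity
print(f"alpha=100 skew: {label_skew(100):.3f}")  # close to uniform
```

So the same `-a` value you pass when splitting directly controls how uneven each client's label distribution is.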
In particular, I'm very interested in the new method "Elastic Aggregation", but I see that it does not outperform FedAvg as the paper claims.
First of all, I don't know whether your experimental settings are the same as the original paper's. Second, I'm not really responsible for answering algorithm performance questions, unless you find the issue is caused by a bug in my code. As for Elastic Aggregation, since its authors haven't published the source code, it is hard for me to reproduce the experimental environment exactly; I simply reproduced the algorithm from its pseudocode, and that's it. If you find something wrong in my code, please let me know (or open a PR to contribute to this project).
About the details on accuracy and loss after fine-tuning: I see they are always equal to 0.00%.
That's because you didn't set finetune_epoch, which defaults to 0, meaning no local fine-tuning is performed after model testing. So those results are naturally shown as 0.00%, which doesn't mean 0 loss or 0 accuracy; it means the metrics were never computed at all.
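A minimal sketch of that logic (illustrative only, not FL-bench's actual evaluation code, and the function name here is made up): when the fine-tune epoch count is 0, the fine-tuning stage never runs, so there are simply no metrics to report:

```python
# Illustrative sketch only, NOT FL-bench's real evaluation code;
# `finetune_and_eval` is a hypothetical name for this explanation.
def finetune_and_eval(finetune_epoch: int):
    if finetune_epoch == 0:
        # No fine-tuning happens, so there are no post-finetune
        # metrics at all; a report may still render this as 0.00%.
        return None
    # ... would run `finetune_epoch` local epochs and evaluate ...
    raise NotImplementedError

print(finetune_and_eval(0))  # None, i.e. "no metric", not zero accuracy
```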
In the end, thanks for your attention and appreciation to me again. Wish you have a good time with FL-bench. 😉
from fl-bench.
Sorry, I typed something wrong. In problem 1, I actually find that a smaller alpha like 0.1 always gives better results than a larger alpha like 100, not worse. I don't know why, can you check this? Thank you very much.
[screenshots: learning curves when alpha == 100 and when alpha == 0.1]
I see that when alpha == 100 the performance is approximately 50%, but when alpha == 0.1 the curve looks noticeably higher.
First of all, the FL-bench learning curve plots don't actually carry that much reference value. Their stats only cover the clients that participate in each round (10 clients if the client number is 100 and join_ratio = 0.1). That is probably different from the curves in the paper, whose calculation may include all clients.
Let me clarify that FL-bench's curves mainly indicate whether the algorithm collapses during training. For traditional FL algorithms like Elastic Aggregation, the test_before curve, meaning results computed from the global model *before* local client training, is the one you should refer to. Unless the algorithm involves local fine-tuning or meta-learning (e.g. Per-FedAvg), the orange curve matters less. The learning curves in traditional FL algorithm papers are basically the blue one, and in that view Elastic Aggregation runs better with -a 100 than with -a 0.1.
BTW, you should mainly take this value as the final performance of an algorithm, since it involves all clients and is fair enough (compared to the curves, which cover only part of the clients).
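The difference between the two views can be sketched like this (a toy illustration with made-up accuracies, not FL-bench's implementation):

```python
import random

# Toy illustration (made-up numbers, NOT FL-bench's code) of the two
# evaluation views: per-round curve points average over only the
# sampled clients, while the final value averages over all clients.
random.seed(0)
num_clients, join_ratio = 100, 0.1
# Pretend each client has a fixed local test accuracy in [0.3, 0.9].
client_acc = [random.uniform(0.3, 0.9) for _ in range(num_clients)]

# One curve point: mean accuracy over the clients joining this round.
sampled = random.sample(range(num_clients), int(num_clients * join_ratio))
curve_point = sum(client_acc[i] for i in sampled) / len(sampled)

# Final reported performance: mean over ALL clients, so it is not
# biased by which subset happened to be sampled in a given round.
final_value = sum(client_acc) / num_clients
print(f"round point over {len(sampled)} clients: {curve_point:.3f}")
print(f"final value over all {num_clients} clients: {final_value:.3f}")
```

The round-to-round curve can swing just because a different subset of clients joined, which is why the all-client value is the fairer summary.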
This issue was closed due to a long period without a response.
Related Issues (20)
- For pFedMe HOT 2
- About dataset classification HOT 2
- std HOT 2
- Dataset problem HOT 2
- runtime erro HOT 4
- python generate_data.py -d medmnistC -a 0.1 -cn 100 HOT 8
- Comparison of the results of FedAvg on Cifar with the original paper HOT 3
- question HOT 2
- please can somebody helps me
- please can somebody helps me to solve this problem HOT 5
- COPY failed: forbidden path outside the build context: ../ () HOT 3
- Changing "finetune_epoch" doesn't affect test accuracies. HOT 4
- problem run pre-treatment HOT 7
- [Implementation Error] algorithm "ccvr" code lost a "()" HOT 5
- Are u considering to add the FedMix algo in the repository? HOT 2
- There is no VALIDATION set for FEMNIST LEAF - help wanted HOT 8
- FL-bench welcomes PRs
- Hi, thanks for your contributions to the FL community, you are extremely a talented person
- How can I migrate the cnnwithbn model from your pfedla reproduction into this model? HOT 7