
Comments (5)

KarhouTam avatar KarhouTam commented on July 17, 2024

Hi, @minmincute912. Thanks for your interest in FL-bench.

> When alpha == 0.1 or alpha == 0.5, the output performance is always lower than the performance of the method with alpha == 100.

In fact, label heterogeneity is negatively correlated with alpha. You can verify this by splitting the same dataset with the same arguments except `-a`, then checking the `class_distribution.pdf` that is auto-generated in the `data/<dataset>` folder. You will find that as alpha grows, label heterogeneity decreases. Back to the Elastic Aggregation results: I think it is normal for a traditional FL method to perform better when label heterogeneity is weaker.
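The alpha/heterogeneity relationship is easy to check with a minimal standalone sketch of Dirichlet label partitioning (plain NumPy, not FL-bench's actual splitting code): each client's class proportions are drawn from Dirichlet(alpha), and a smaller alpha concentrates the mass on a few labels.

```python
import numpy as np

def dirichlet_label_shares(alpha, num_clients=10, num_classes=10, seed=0):
    """Sample each client's class-proportion vector from Dirichlet(alpha)."""
    rng = np.random.default_rng(seed)
    # One Dirichlet draw per client over the label simplex.
    return rng.dirichlet([alpha] * num_classes, size=num_clients)

for alpha in (0.1, 100.0):
    shares = dirichlet_label_shares(alpha)
    # Max class share per client: near 1.0 means one label dominates
    # (heterogeneous); near 1/num_classes means balanced (homogeneous).
    print(f"alpha={alpha:>5}: mean max class share = {shares.max(axis=1).mean():.2f}")
```

With `alpha=0.1` most of each client's data belongs to one or two classes; with `alpha=100` every client holds a nearly uniform label mix.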

> Especially, I'm very interested in the new method "Elastic Aggregation", but I see this method does not show better performance than FedAvg, as the paper claimed.

First of all, I don't know whether your experimental settings match the original paper's. Second, I'm not actually responsible for answering algorithm performance questions, unless you find the cause is a bug in my code. As for Elastic Aggregation: because its authors haven't published the source code, it's hard for me to reproduce the experiment environment identically. I simply reproduced the algorithm from its pseudocode, and that's it. If you find something wrong in my code, please let me know (or open a PR to contribute to this project).

> Details on accuracy and loss after fine-tuning: I just see they always equal 0.00%.

That's because you didn't set `finetune_epoch`, which defaults to 0, meaning no local fine-tuning happens after model testing. So those results are obviously 0.00%, which doesn't mean 0 loss or 0 accuracy; it means NONE.
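A minimal sketch of why the fine-tune metrics print as 0.00% in that case (hypothetical function and field names, not FL-bench's actual code): with `finetune_epoch=0` the fine-tuning loop body never executes, so the placeholder values are reported unchanged.

```python
def finetune_and_eval(acc_gain_per_epoch, finetune_epoch=0):
    """Toy illustration: with finetune_epoch=0 the loop never runs,
    so the fine-tune metrics keep their 0.0 placeholder values."""
    stats = {"finetune_acc": 0.0, "finetune_loss": 0.0}  # placeholders, not results
    for _ in range(finetune_epoch):  # range(0) -> zero iterations
        stats["finetune_acc"] += acc_gain_per_epoch
    return stats

print(finetune_and_eval(0.05))                    # {'finetune_acc': 0.0, ...}
print(finetune_and_eval(0.05, finetune_epoch=3))  # accuracy actually updated
```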

In the end, thanks again for your attention and appreciation. I wish you a good time with FL-bench. 😉

from fl-bench.

minmincute912 avatar minmincute912 commented on July 17, 2024

Sorry, I typed something wrong. In problem 1, I actually find that a smaller alpha like 0.1 always gives better results than a larger alpha like 100, not lower. I don't know why; can you check this? Thank you very much.


minmincute912 avatar minmincute912 commented on July 17, 2024

When alpha == 100:
[screenshot: learning curves]
When alpha == 0.1:
[screenshot: learning curves]
I see that when alpha == 100 the performance is approximately 50%, but when alpha == 0.1 the performance is observably higher.


KarhouTam avatar KarhouTam commented on July 17, 2024

First of all, FL-bench's learning curve plots actually don't have that much reference value. Their stats relate only to the clients that participated in each round (10 clients if the client number is 100 and `join_ratio = 0.1`). This is probably different from the curves in the paper, whose calculation may include all clients.

Let me clarify that FL-bench's curves mainly indicate whether the algorithm collapses during training. For traditional FL algorithms like Elastic Aggregation, the `test_before` curve, i.e., the results computed from the global model *before* local client training, is the one you should refer to. Unless the algorithm involves local fine-tuning or meta-learning (e.g., Per-FedAvg), the orange curve is less important.

The learning curves in traditional FL algorithm papers are basically the blue one. And in this situation, Elastic Aggregation runs better with `-a 100` than with `-a 0.1`.

BTW, you should mainly consider this value (the final evaluation below) as the final performance of an algorithm, since it involves all clients and is fair enough (compared to those curves, which involve only part of the clients).
[screenshot: final evaluation results]
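The gap between the two kinds of numbers can be sketched with a toy example (hypothetical accuracy values, not FL-bench's actual evaluation code): a per-round curve point averages only the sampled participants, while the final metric averages all clients.

```python
import random

random.seed(0)
num_clients = 100
join_ratio = 0.1
# Hypothetical per-client accuracies (in a real run these come from evaluation).
client_acc = [random.uniform(0.3, 0.7) for _ in range(num_clients)]

# Curve point for one round: average over the sampled participants only.
participants = random.sample(range(num_clients), int(num_clients * join_ratio))
round_acc = sum(client_acc[i] for i in participants) / len(participants)

# Final metric: average over ALL clients -- fair, and the number to report.
final_acc = sum(client_acc) / num_clients

print(f"round (10 sampled clients): {round_acc:.3f}")
print(f"final (all 100 clients):    {final_acc:.3f}")
```

The round-level number fluctuates with whichever subset happens to be sampled, which is why the per-round curves and the paper's all-client curves can disagree.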


KarhouTam avatar KarhouTam commented on July 17, 2024

This issue is closed due to no response for a long time.

