Comments (9)
Hi @psklavos1, if you try to run the code in `examples/simulation-tensorflow` with `--num_gpus>0`, do you experience the same problem?
from flower.
Hmmm... did you add that `print("Available GPUs:", tf.config.list_physical_devices("GPU"))` in the same Python file in which you later run `start_simulation`? Are you on Windows? (If yes, could you try WSL?)
from flower.
Is there any idea of what to do to fix the problem? My GPU is fully recognised when executing:

```python
print("Available GPUs:", tf.config.list_physical_devices("GPU"))
```

and when listing the local devices:

```python
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
```

But Ray still does not seem to initialize the GPU device. Can I manually initialize it in some way so that Ray sees the GPU?
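For concreteness, the kind of manual override I have in mind (assuming `start_simulation` forwards a `ray_init_args` dict to `ray.init`, and with `client_fn`/`num_clients` as placeholders) would be something like:

```python
# Sketch only: force Ray to register one GPU instead of relying on
# auto-detection. client_fn and num_clients are placeholders here.
client_resources = {"num_cpus": 1, "num_gpus": 1.0}
ray_init_args = {"num_gpus": 1}

# import flwr as fl
# fl.simulation.start_simulation(
#     client_fn=client_fn,
#     num_clients=2,
#     client_resources=client_resources,
#     ray_init_args=ray_init_args,
# )
print(client_resources, ray_init_args)
```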
from flower.
@alessiomora, do you know what could be causing this? @psklavos1 is trying to run a TF simulation in WSL on Windows, but his GPU isn't detected properly.
from flower.
I imagine that with `'num_gpus': 1.0` the problem persists, right? I see that the GPU memory logged in the issue is a bit more than 3 GB (am I wrong?), and maybe 0.33 of that memory is not enough? However, the example code should be very light in terms of memory footprint. Anyway, this is just to rule out a memory problem.
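As a back-of-the-envelope check (both numbers are taken from the discussion above, not measured; note also that Ray's fractional `num_gpus` is only a scheduling hint and does not itself partition GPU memory):

```python
# Rough arithmetic for the memory hypothesis; approximate figures only.
total_gpu_mem_gb = 3.0      # GPU memory reported in the issue (approx.)
gpu_share_per_client = 0.33 # the per-client GPU share under discussion

per_client_gb = total_gpu_mem_gb * gpu_share_per_client
print(f"~{per_client_gb:.2f} GB nominally available per virtual client")
```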
from flower.
Yes, I can verify the issue is the same with `'num_gpus': 1.0`.
If I run just a simple snippet like:
```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split
import numpy as np

np.random.seed(0)
X = np.random.rand(100, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Sequential()
model.add(Dense(16, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

with tf.device('/GPU:0'):
    history = model.fit(X_train, y_train, epochs=100, batch_size=32, validation_data=(X_test, y_test))

loss, accuracy = model.evaluate(X_test, y_test)
print("Test loss:", loss)
print("Test accuracy:", accuracy)
```
the GPU is used with no errors whatsoever, so the installation is correct. However, when running with Ray, the GPU is not instantiated in the ActorPool.
from flower.
I had the same problem, but I was using PyTorch.

Output:

```
torch.cuda.is_available(): True
INFO flwr 2023-12-24 03:37:57,013 | app.py:178 | Starting Flower simulation, config: ServerConfig(num_rounds=10, round_timeout=None)
2023-12-24 03:37:59,156 INFO worker.py:1553 -- Started a local Ray instance.
INFO flwr 2023-12-24 03:38:01,192 | app.py:213 | Flower VCE: Ray initialized with resources: {'object_store_memory': 35014316851.0, 'memory': 71700072653.0, 'CPU': 40.0, 'node:10.139.50.89': 1.0}
INFO flwr 2023-12-24 03:38:01,192 | app.py:219 | Optimize your simulation with Flower VCE: https://flower.dev/docs/framework/how-to-run-simulations.html
INFO flwr 2023-12-24 03:38:01,192 | app.py:242 | Flower VCE: Resources for each Virtual Client: {'num_cpus': 1, 'num_gpus': 1.0}
WARNING flwr 2023-12-24 03:38:01,193 | ray_actor.py:144 | The ActorPool is empty. The system (CPUs=40.0, GPUs=0) does not meet the criteria to host at least one client with resources: {'num_cpus': 1, 'num_gpus': 1.0}. Lowering the client_resources could help.
```
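Since Ray reports `GPUs=0` even though `torch.cuda.is_available()` is `True`, it may be worth checking the signals Ray uses for GPU auto-detection. As far as I know, Ray respects `CUDA_VISIBLE_DEVICES` (an empty value makes it report zero GPUs) and probes for NVIDIA tooling on the machine. A minimal stdlib-only check, purely as a sketch:

```python
import os
import shutil

def gpu_visibility_report():
    """Collect signals commonly involved in Ray's GPU auto-detection."""
    return {
        # Is the NVIDIA driver tooling reachable from this environment?
        "nvidia_smi_on_path": shutil.which("nvidia-smi") is not None,
        # An empty string here hides all GPUs from Ray and CUDA alike.
        "cuda_visible_devices": os.environ.get("CUDA_VISIBLE_DEVICES"),
    }

report = gpu_visibility_report()
print(report)
```

If `CUDA_VISIBLE_DEVICES` turns out to be set to an empty string in the process that starts the simulation, that alone would explain `GPUs=0`.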
from flower.