Comments (8)
hdparm is unlikely to work inside a container anyway; it will probably need privileged mode and access to the /dev tree, at a minimum.
Where are you encountering the error?
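The point above can be illustrated with a hedged sketch: for hdparm to reach a raw disk from inside a container, it typically needs both `--privileged` (for the ioctl) and the /dev tree bind-mounted. The image and device names below are illustrative, not part of any ceph-container setup:

```shell
# Hypothetical invocation: give hdparm access to the host's block devices.
# Both --privileged and the /dev bind mount are needed for ioctl access.
docker run --rm --privileged \
  -v /dev:/dev \
  ubuntu:22.04 \
  hdparm -W /dev/sda    # query the drive's write-back cache setting
```

Passing `-W 0 /dev/sda` instead would disable the cache, which is what the OSD startup attempts on the journal device.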
from ceph-container.
I believe you saw this during OSD startup; it usually tries to disable the write-back cache of the journal device before opening it.
It's a warning, not an error, if I remember correctly.
Can you show the logs?
No, I lost it. You're right, it was more of a warning; I just noticed it somewhere during the startup/init process.
Actually, I wanted to pre-initialise Ceph to use an SSD journal device rather than a journal file or a journal on the same device. I managed to get that working using ceph-disk prepare, then mounting the drives and starting ceph-osd.
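The workflow just described can be sketched roughly as follows. The device names and OSD id are examples only; `ceph-disk prepare` takes the data device first and the journal device second:

```shell
# Prepare an OSD whose data lives on a spinner (/dev/sdb) with its
# journal on a separate SSD (/dev/sdc). Device names are examples.
ceph-disk prepare /dev/sdb /dev/sdc

# Mount the prepared data partition and start the OSD daemon
# (osd id 0 is assumed here; ceph-disk activate can do this too).
mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
ceph-osd -i 0
```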
OK great, I'm closing this. Feel free to re-open.
I think we need to see if we can improve the journal support.
@hookenz I do exactly this on most of my nodes: one SSD journal for several OSDs. These containers will work perfectly if you simply mount your journal at /var/lib/ceph/osd/journal (the default journal location, if it exists). I run my OSD containers with -v /var/lib/ceph/osd:/var/lib/ceph/osd and have my SSD journal mounted at /var/lib/ceph/osd/journal. Each OSD is automatically assigned a separate subdirectory there, so things should just work.
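A minimal sketch of that setup, assuming the ceph-container image is run as `ceph/daemon` with an `osd` entrypoint (the image name and the SSD partition are assumptions; the key point is the single bind mount covering both the OSD data dirs and the journal mount point):

```shell
# Mount the SSD journal filesystem at the default journal location
# (/dev/sdc1 is an example device).
mount /dev/sdc1 /var/lib/ceph/osd/journal

# One bind mount exposes the journal and every OSD's data directory;
# no --privileged and no /dev mount are needed for file-based journals.
docker run -d \
  -v /var/lib/ceph/osd:/var/lib/ceph/osd \
  ceph/daemon osd
```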
I just noticed that this functionality is not described in the documentation; updating now.
So you use a journal file for each OSD instead of a journal partition?
I read somewhere that a journal partition has less overhead, so I went that route. But I'm passing -v /dev:/dev to docker to make the disks available, and I think I also need --privileged.
Your method avoids all of that, I think.
If I went the direction you recommend, I'd have to create two journal mounts. It's a little more setup, but I think I like your approach.
By the way, what filesystem are you initialising your journal drives with?
My configuration is 6 spinners and 2 SSDs, so 3 spinners per SSD.
Oh, and by the way, you're right about it coming through correctly when you pass /var/lib/ceph/osd. I was passing /var/lib/ceph, and the filesystem showed up as tmpfs. I haven't checked again to see whether that's still the case; maybe I did something else wrong. Anyway, it's working well.
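The two journal mounts mentioned above could look something like this with the 6-spinner/2-SSD layout. Device names, mount point names, and the OSD-to-SSD assignment are all illustrative assumptions:

```shell
# Each SSD backs the journals of three OSDs. /dev/sdg1 and /dev/sdh1
# are example SSD partitions; the directory names are hypothetical.
mkdir -p /var/lib/ceph/osd/journal-a /var/lib/ceph/osd/journal-b
mount /dev/sdg1 /var/lib/ceph/osd/journal-a   # SSD 1: journals for three OSDs
mount /dev/sdh1 /var/lib/ceph/osd/journal-b   # SSD 2: journals for the other three
```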
I use all SSDs in my setup, and btrfs filesystems throughout (journal and OSD). I used to use the parallel journaling support with btrfs, but somewhere along the line I quit doing so (I don't recall the reason, but parallel journaling requires --privileged, which I am not running).
I do recall hearing that the direct block device journals are better, but I haven't done any testing to say.
I think it's a reasonable trade-off (since disks are necessarily tied to host systems) to have the host system manage the mountpoints, so we just export /var/lib/ceph/osd for the container.
Related Issues (20)
- /opt/ceph-container/bin/osd_disk_prepare.sh: line 46: ceph-disk: command not found HOT 7
- Need fix for CVE-2022-21797 HOT 4
- Bootstrap process hangs up for hours HOT 2
- not found /var/lib/ceph/osd/ceph-2//keyring HOT 2
- dnf update in ceph v18 container image is failing HOT 2
- RocksDBStore - cannot set permissions: Operation not permitted HOT 2
- /usr/bin/ceph: stderr Error EIO: Module 'cephadm' has experienced an error and cannot handle commands: ContainerInspectInfo HOT 2
- add ceph-mgr-callhome to IBM downstream container HOT 2
- cephadm has failed ContainerInspectInfo HOT 2
- populate_kvstore error HOT 1
- rename and repurpose this repository HOT 19
- reef builds don't work HOT 12
- Question about osd directory HOT 2
- docker-compose setup dose not run as expected mds and osd HOT 3
- With new quay.io/ceph/ceph:v16 image, ceph-csi meet segfault error HOT 2
- ceph/demo container does not expose mon port 3300 HOT 2
- Instructions for getting the zabbix template to work with rook-ceph HOT 2
- smartctl could not scrape metrics from HPE Smart Array in HBA mode HOT 2
- support VERSION=8 for contrib/compose-rhcs.sh
- Include cephfs-shell HOT 7