cloudfoundry-community-attic / cf-services-contrib-release
release repository for community contributed services
License: Apache License 2.0
In the old cf-release there were more services than are currently available in this release. I'm referring to the ng and normal versions of mongodb, mysql, rabbit, and redis.
They are also not available in cf-services-release.
They probably need the same upgrade the other services received:
# cf create-service
1: postgresql 9.2
2: rabbitmq 3.0
3: redis 2.6
What kind?> 2
Name?> rabbitmq-4cea0
1: default: Dedicated server, shared VM, 250MB storage, 10 connections
Which plan?> 1
Creating service rabbitmq-4cea0... FAILED
CFoundry::ServerError: 10001: Server error
cat ~/.cf/crash # for more details
Currently, the following must be provided to disable all lifecycle features for a gateway:
configuration:
lifecycle:
enable: false
Allow the gateway configuration to be missing entirely and default to the above.
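One way the gateway could implement that default is to treat an absent `configuration` block exactly like an explicit `enable: false`. A minimal sketch, assuming a hypothetical `lifecycle_enabled?` helper (this method does not exist in the release today):

```ruby
# Hypothetical defaulting rule: lifecycle is only on when the full
# configuration.lifecycle.enable chain is present and true.
def lifecycle_enabled?(gateway_config)
  # Hash#dig returns nil when any intermediate key is missing, so an
  # entirely absent "configuration" block falls through to false.
  gateway_config.dig("configuration", "lifecycle", "enable") == true
end

lifecycle_enabled?({})  # => false, same effect as enable: false
```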
In the postgresql node (and the mysql node, as decided by Pivotal staff when they were refactoring it):
properties:
postgresql_node:
plan: default
But in other nodes:
properties:
plan: default
Make these two consistent.
Perhaps when the mysql job moves into this repo we can revert their change and go back to the simpler second option.
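Until the two layouts are reconciled, a node could read the plan from either shape. A sketch with a hypothetical `plan_for` helper (not existing code; the key name is an assumption):

```ruby
# Accept the plan from either the namespaced layout
# (properties.postgresql_node.plan) or the flat layout (properties.plan).
def plan_for(properties, node_key = "postgresql_node")
  namespaced = properties[node_key]
  (namespaced && namespaced["plan"]) || properties["plan"]
end

plan_for({ "postgresql_node" => { "plan" => "default" } })  # => "default"
plan_for({ "plan" => "default" })                           # => "default"
```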
As shown below, the URLs for the services_warden and vblob_src submodules use the git (SSH) protocol. This causes a public-key error when checking out these modules unless you have a GitHub account with SSH configured for it. A workaround is to edit .git/config manually.
Could you please change this at the source? Thanks.
[submodule "src/services/govendor"]
url = https://github.com/cloudfoundry/govendor.git
[submodule "src/services_warden"]
url = git@github.com:cloudfoundry/warden.git
[submodule "src/tools"]
url = https://github.com/cloudfoundry/vcap-tools.git
[submodule "src/vblob_src"]
url = git@github.com:cloudfoundry/vblob.git
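As a less invasive workaround than hand-editing .git/config, git's `url.<base>.insteadOf` rewrite can map the SSH URLs to HTTPS at fetch time. A sketch of the workaround (not a fix in the release itself):

```shell
# One-time, user-wide rewrite: any "git@github.com:" submodule URL is
# fetched over HTTPS instead, so no SSH key is needed.
git config --global url."https://github.com/".insteadOf "git@github.com:"

# Afterwards, submodule checkout proceeds normally, e.g.:
#   git submodule update --init --recursive
```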
We will probably create a new bosh release for services written for the services v2 API (with its new terminology etc)
Currently, a nested set of arbitrary config must be provided to enable a service (with no lifecycle):
service_plans:
postgresql:
default:
job_management:
high_water: 1400
low_water: 100
configuration:
lifecycle:
enable: false
If no lifecycle (i.e. backups) is required, then allow:
service_plans:
- postgresql
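The gateway could normalize the proposed shorthand back into today's full form internally. A hypothetical sketch (`expand_service_plans` and the default-hash contents are assumptions, not existing code):

```ruby
# Defaults applied when a service is listed by bare name.
DEFAULT_PLAN = {
  "default" => {
    "configuration" => { "lifecycle" => { "enable" => false } }
  }
}.freeze

# Expand the array shorthand ["postgresql"] into the full hash form;
# pass an already-expanded hash through unchanged.
def expand_service_plans(service_plans)
  return service_plans unless service_plans.is_a?(Array)
  service_plans.each_with_object({}) { |name, h| h[name] = DEFAULT_PLAN }
end
```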
If @pivotal-vmware isn't running these services, perhaps we could move them to the @cloudfoundry-community account so that their management and administration can form within the community that uses them?
When I bind one application (whether a Ruby, Node.js, or Java application) to mysql and postgres at the same time, it reports an error like "Unable to determine primary database from multiple. Please bind only one database service to Rails applications". I checked /var/vcap/packages/dea_next/lib/dea/starting/database_uri_generator.rb and found that in Cloud Foundry this seems limited to the combination of having both a mysql service and a postgres service:
class DatabaseUriGenerator
VALID_DB_TYPES = %w[mysql mysql2 postgres postgresql].freeze
DATABASE_TO_ADAPTER_MAPPING = {
'mysql' => 'mysql2',
'postgresql' => 'postgres'
}.freeze
......
def bound_database_uri
case bound_relational_valid_databases.size
when 0
nil
when 1
bound_relational_valid_databases.first[:uri]
else
binding = bound_relational_valid_databases.detect { |binding| binding[:name] && binding[:name] =~ /^.*production$|^.*prod$/ }
unless binding
raise "Unable to determine primary database from multiple. Please bind only one database service to Rails applications."
end
binding[:uri]
end
end
......
def bound_relational_valid_databases
@bound_relational_valid_databases ||= @services.inject([]) do |collection, binding|
begin
if binding["credentials"]["uri"]
uri = URI.parse(binding["credentials"]["uri"])
collection << {uri: uri, name: binding["name"]} if VALID_DB_TYPES.include?(uri.scheme)
end
rescue URI::InvalidURIError => e
raise URI::InvalidURIError, "Invalid database uri: #{binding["credentials"]["uri"].gsub(/\/\/.+@/, '//USER_NAME_PASS@')}"
end
collection
end
end
So, what I want to confirm here is: is it really true that the combination of mysql and postgresql is disallowed? Is there a good reason for disabling this? Can we enable it? Thanks.
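For what it's worth, the selection logic quoted above can be exercised in isolation. A minimal sketch with made-up binding names and URIs: with one mysql and one postgres binding, a primary is only chosen when one binding's name ends in "prod" or "production"; otherwise the DEA raises the error.

```ruby
require "uri"

VALID_DB_TYPES = %w[mysql mysql2 postgres postgresql].freeze

# Made-up bindings mirroring the reported scenario: one mysql, one postgres.
bindings = [
  { name: "mysql-dev",     uri: URI.parse("mysql2://u:p@host/app") },
  { name: "pg-production", uri: URI.parse("postgres://u:p@host/app") }
].select { |b| VALID_DB_TYPES.include?(b[:uri].scheme) }

# The DEA's tie-breaker: a binding whose name ends in "prod" or "production".
primary = bindings.detect { |b| b[:name] =~ /^.*production$|^.*prod$/ }
# primary is the postgres binding here; rename it to e.g. "pg-dev" and
# detect returns nil, which is what triggers the error message.
```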
After deploying the rabbit and redis services I am able to create redis services but not rabbit services.
The error I am seeing from 'cf create-service' is as follows:
CFoundry::ServerError: 10001: VCAP::Services::Api::ServiceGatewayClient::ErrorResponse: Service start timeout
I have attached all the logs from the Gateway, Node and Node Warden container.
I would like to continue debugging but I'm not too sure where to look.
tastle@Tastle RMBP 2013-10-15-Login-Server-not-working (rdg-2-bxb) $ cf create-service-auth-token
Label> redis
Token> REDIS-TOKEN-PQ6cNYLjMWzsEoZEsQijXt5x8zd
Creating service auth token... OK
Good.
tastle@Tastle RMBP ~ $ cf create-service
1: rabbitmq 3.0
2: redis 2.6
3: user-provided , via
What kind?> 2
Name?> redis-3e3cf
1: default: Developer, 250MB storage, 10 connections
Which plan?> 1
Creating service redis-3e3cf... OK
Good.
tastle@Tastle RMBP 2013-10-15-Login-Server-not-working (rdg-2-bxb) $ cf create-service-auth-token
Label> rabbit
Token> RABBIT-TOKEN-Pl1NnRIe85J8ZzQOb1a3wuqF
Creating service auth token... OK
Good.
tastle@Tastle RMBP 2013-10-15-Login-Server-not-working (rdg-2-bxb) $ cf create-service
1: rabbitmq 3.0
2: redis 2.6
3: user-provided , via
What kind?> 1
Name?> rabbitmq-7afa5
1: default: Developer, 250MB storage, 10 connections
Which plan?> 1
Creating service rabbitmq-7afa5... FAILED
CFoundry::ServerError: 10001: NoMethodError: undefined method `token' for nil:NilClass
Failed.
Problem: I added the token for 'rabbit' and not 'rabbitmq'.
tastle@Tastle RMBP 2013-10-15-Login-Server-not-working (rdg-2-bxb) $ cf create-service-auth-token
Label> rabbitmq
Token> RABBIT-TOKEN-Pl1NnRIe85J8ZzQOb1a3wuqF
Creating service auth token... OK
Good.
tastle@Tastle RMBP 2013-10-15-Login-Server-not-working (rdg-2-bxb) $ cf create-service
1: rabbitmq 3.0
2: redis 2.6
3: user-provided , via
What kind?> 1
Name?> rabbitmq-79243
1: default: Developer, 250MB storage, 10 connections
Which plan?> 1
Creating service rabbitmq-79243... FAILED
CFoundry::ServerError: 10001: VCAP::Services::Api::ServiceGatewayClient::ErrorResponse: Service start timeout
Failed.
New error, time to debug.
Rabbit Gateway node logs when running 'cf create-service'
$ bosh ssh gateways 0
$ cd /var/vcap/sys/log/rabbit_gateway
$ tail -f ***
[2013-10-16 10:02:14.684323] rabbit_gateway - pid=6433 tid=1203 fid=edeb DEBUG -- Provision request for label=rabbitmq-3.0, plan=default, version=3.0
[2013-10-16 10:02:14.684759] rabbit_gateway - pid=6433 tid=1203 fid=edeb DEBUG -- [RMQaaS-Provisioner] Attempting to provision instance (request={:label=>"rabbitmq-3.0", :name=>"rabbitmq-91e99", :email=>"admin", :plan=>"default", :version=>"3.0", :provider=>"core", :space_guid=>"ad642431-d7fa-4738-b977-8d4e1e04bbe8", :organization_guid=>"e911dfdd-ed6e-4739-9524-01c9f1788728", :unique_id=>"default-bfd55f2a-25c8-4444-9200-4e7d2d443467", :plan_option=>{}})
[2013-10-16 10:02:14.685141] rabbit_gateway - pid=6433 tid=1203 fid=edeb DEBUG -- [RMQaaS-Provisioner] Picking version nodes from the following 1 'default' plan nodes: [{"available_capacity"=>125, "capacity_unit"=>1, "id"=>"rabbit_node_default_0", "plan"=>"default", "supported_versions"=>["3.0"], "time"=>1381917717}]
[2013-10-16 10:02:14.685429] rabbit_gateway - pid=6433 tid=1203 fid=edeb DEBUG -- [RMQaaS-Provisioner] 1 nodes allow provisioning for version: 3.0
[2013-10-16 10:02:14.685605] rabbit_gateway - pid=6433 tid=1203 fid=edeb DEBUG -- [RMQaaS-Provisioner] Provisioning on rabbit_node_default_0
[2013-10-16 10:02:23.164226] rabbit_gateway - pid=6433 tid=1203 fid=5381 INFO -- CCNG Catalog Manager: Loading services from CC
[2013-10-16 10:02:23.164773] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- CCNG Catalog Manager: Getting services listing from cloud_controller
[2013-10-16 10:02:23.164952] rabbit_gateway - pid=6433 tid=1203 fid=5381 INFO -- Fetching Registered Offerings from: /v2/services?inline-relations-depth=2
[2013-10-16 10:02:23.165148] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- GET - http://api.ft2.cpgpaas.net/v2/services?inline-relations-depth=2
[2013-10-16 10:02:23.291621] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- CCNG Catalog Manager: Getting service plans for: rabbitmq/core
[2013-10-16 10:02:23.291852] rabbit_gateway - pid=6433 tid=1203 fid=5381 INFO -- Fetching Service Plans from: /v2/services/d6cbabf1-1ee4-42ef-9053-5fe03a502f5d/service_plans
[2013-10-16 10:02:23.292036] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- GET - http://api.ft2.cpgpaas.net/v2/services/d6cbabf1-1ee4-42ef-9053-5fe03a502f5d/service_plans
[2013-10-16 10:02:23.404734] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- CCNG Catalog Manager: Found rabbitmq_core = {"guid"=>"d6cbabf1-1ee4-42ef-9053-5fe03a502f5d", "service"=>{"id"=>"rabbitmq", "description"=>"RabbitMQ message queue", "provider"=>"core", "version"=>"3.0", "url"=>"http://10.0.6.83:51591", "info_url"=>nil, "plans"=>{"default"=>{"guid"=>"9d366128-fa36-45bb-a1ba-4c36cae1b359", "name"=>"default", "description"=>"Developer, 250MB storage, 10 connections", "free"=>true}}}}
[2013-10-16 10:02:23.405184] rabbit_gateway - pid=6433 tid=1203 fid=5381 INFO -- CCNG Catalog Manager: Loading current catalog...
[2013-10-16 10:02:23.405434] rabbit_gateway - pid=6433 tid=1203 fid=5381 INFO -- CCNG Catalog Manager: Updating Offerings...
[2013-10-16 10:02:23.405579] rabbit_gateway - pid=6433 tid=1203 fid=5381 INFO -- CCNG Catalog Manager: Activate services...
[2013-10-16 10:02:23.405714] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- CCNG Catalog Manager: Registered in ccng: ["rabbitmq_core"]
[2013-10-16 10:02:23.405859] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- CCNG Catalog Manager: Current catalog: ["rabbitmq_core"]
[2013-10-16 10:02:23.406048] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- CCNG Catalog Manager: No changes to plan: default
[2013-10-16 10:02:23.406186] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- CCNG Catalog Manager: plans_to_add = []
[2013-10-16 10:02:23.406356] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- CCNG Catalog Manager: plans_to_update = []
[2013-10-16 10:02:23.406574] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- CCNG Catalog Manager: Refresh offering: {:unique_id=>"bfd55f2a-25c8-4444-9200-4e7d2d443467", :label=>"rabbitmq", :version=>"3.0", :active=>true, :description=>"RabbitMQ message queue", :provider=>"core", :acls=>nil, :url=>"http://10.0.6.83:51591", :timeout=>45, :extra=>nil}
[2013-10-16 10:02:23.406885] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- CCNG Catalog Manager: Update service offering {:unique_id=>"bfd55f2a-25c8-4444-9200-4e7d2d443467", :label=>"rabbitmq", :version=>"3.0", :active=>true, :description=>"RabbitMQ message queue", :provider=>"core", :acls=>nil, :url=>"http://10.0.6.83:51591", :timeout=>45, :extra=>nil} to cloud_controller: /v2/services/d6cbabf1-1ee4-42ef-9053-5fe03a502f5d
[2013-10-16 10:02:23.407279] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- PUT - http://api.ft2.cpgpaas.net/v2/services/d6cbabf1-1ee4-42ef-9053-5fe03a502f5d
[2013-10-16 10:02:23.519045] rabbit_gateway - pid=6433 tid=1203 fid=5381 INFO -- CCNG Catalog Manager: Advertise offering response (code=201): {"metadata"=>{"guid"=>"d6cbabf1-1ee4-42ef-9053-5fe03a502f5d", "url"=>"/v2/services/d6cbabf1-1ee4-42ef-9053-5fe03a502f5d", "created_at"=>"2013-10-15T18:47:43+00:00", "updated_at"=>"2013-10-16T10:02:23+00:00"}, "entity"=>{"label"=>"rabbitmq", "provider"=>"core", "url"=>"http://10.0.6.83:51591", "description"=>"RabbitMQ message queue", "long_description"=>nil, "version"=>"3.0", "info_url"=>nil, "active"=>true, "bindable"=>true, "unique_id"=>"bfd55f2a-25c8-4444-9200-4e7d2d443467", "extra"=>nil, "tags"=>[], "requires"=>[], "documentation_url"=>nil, "service_plans_url"=>"/v2/services/d6cbabf1-1ee4-42ef-9053-5fe03a502f5d/service_plans"}}
[2013-10-16 10:02:23.519629] rabbit_gateway - pid=6433 tid=1203 fid=5381 DEBUG -- CCNG Catalog Manager: Processing plans for: d6cbabf1-1ee4-42ef-9053-5fe03a502f5d -Add: 0 plans, Update: 0 plans
[2013-10-16 10:02:23.519803] rabbit_gateway - pid=6433 tid=1203 fid=5381 INFO -- CCNG Catalog Manager: Found 1 active, 0 disabled and 0 new service offerings
[2013-10-16 10:02:27.932238] rabbit_gateway - pid=6433 tid=1203 fid=edeb DEBUG -- [RMQaaS-Provisioner] Received node announcement: {"available_capacity":125,"capacity_unit":1,"id":"rabbit_node_default_0","plan":"default","supported_versions":["3.0"]}
Rabbit node logs when running 'cf create-service'
$ bosh ssh rabbit_service_node 0
$ cd /var/vcap/sys/log/rabbit_node
$ tail -f ***
==> rabbit_node.log <==
[2013-10-16 10:01:47.621729] rabbit_node_default_0 - pid=21390 tid=b42e fid=4c96 DEBUG -- RMQaaS-Node: Sending announcement for everyone
./rabbitmq_common:
total 12
drwxr-xr-x 3 root root 4096 2013-10-15 20:17 .
drwxr-xr-x 6 root root 4096 2013-10-15 20:17 ..
drwxr-xr-x 2 root root 4096 2013-10-15 20:17 bin
./rabbitmq_common/bin:
total 24
drwxr-xr-x 2 root root 4096 2013-10-15 20:17 .
drwxr-xr-x 3 root root 4096 2013-10-15 20:17 ..
-rwxr-xr-x 1 root vcap 2017 2013-10-15 20:16 backup_or_restore2.escript
-rwxr-xr-x 1 root vcap 2047 2013-10-15 20:16 backup_or_restore3.escript
[2013-10-16 10:02:17.622474] rabbit_node_default_0 - pid=21390 tid=b42e fid=4c96 DEBUG -- RMQaaS-Node: Sending announcement for everyone
[2013-10-16 10:02:47.626234] rabbit_node_default_0 - pid=21390 tid=b42e fid=4c96 DEBUG -- RMQaaS-Node: Sending announcement for everyone
[2013-10-16 10:03:05.206481] rabbit_node_default_0 - pid=21390 tid=54cd fid=aa2d ERROR -- Error provision instance: Error Code: 30503, Error Message: Service start timeout
[2013-10-16 10:03:05.388484] rabbit_node_default_0 - pid=21390 tid=54cd fid=aa2d WARN -- Exception at on_provision Error Code: 30503, Error Message: Service start timeout
[2013-10-16 10:03:17.634087] rabbit_node_default_0 - pid=21390 tid=b42e fid=4c96 DEBUG -- RMQaaS-Node: Sending announcement for everyone
[2013-10-16 10:03:47.641870] rabbit_node_default_0 - pid=21390 tid=b42e fid=4c96 DEBUG -- RMQaaS-Node: Sending announcement for everyone
[2013-10-16 10:04:17.646765] rabbit_node_default_0 - pid=21390 tid=b42e fid=4c96 DEBUG -- RMQaaS-Node: Sending announcement for everyone
[2013-10-16 10:04:47.649135] rabbit_node_default_0 - pid=21390 tid=b42e fid=4c96 DEBUG -- RMQaaS-Node: Sending announcement for everyone
Rabbit node warden logs when running 'cf create-service'
$ bosh ssh rabbit_service_node 0
$ cd /var/vcap/sys/log/rabbit/warden
$ tail -f ***
==> warden.log <==
{"timestamp":1381917724.4872634,"message":"Container created","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":10777100,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/linux.rb","lineno":104,"method":"do_create"}
{"timestamp":1381917724.655268,"message":"Container started","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":10777100,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/linux.rb","lineno":110,"method":"do_create"}
{"timestamp":1381917724.6561542,"message":"Wrote snapshot in 0.000584","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":10777100,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":360,"method":"write_snapshot"}
{"timestamp":1381917724.6564014,"message":"create (took 0.267184)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"bind_mounts":["#<Warden::Protocol::CreateRequest::BindMount:0x00000001461a08>","#<Warden::Protocol::CreateRequest::BindMount:0x00000001464f28>","#<Warden::Protocol::CreateRequest::BindMount:0x000000014684e8>","#<Warden::Protocol::CreateRequest::BindMount:0x0000000146cc50>","#<Warden::Protocol::CreateRequest::BindMount:0x00000001472fd8>","#<Warden::Protocol::CreateRequest::BindMount:0x00000001481150>"]},"response":{"handle":"178q2ebslsv"}},"thread_id":10809480,"fiber_id":10777100,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917724.6639323,"message":"Wrote snapshot in 0.000239","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":12749060,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":360,"method":"write_snapshot"}
{"timestamp":1381917724.6640599,"message":"spawn (took 0.005072)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv","script":"chown -R vcap:vcap /var/vcap/sys/log/monit /var/vcap/store/rabbit/instances/bcaaf0d3-af9d-47b0-a366-97e29423efb0 /var/vcap/sys/service-log/rabbit/bcaaf0d3-af9d-47b0-a366-97e29423efb0 /var/vcap/packages/erlang","privileged":true},"response":{"job_id":28}},"thread_id":10809480,"fiber_id":12749060,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917724.6935806,"message":"Wrote snapshot in 0.000414","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":17179440,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":360,"method":"write_snapshot"}
{"timestamp":1381917724.6938426,"message":"link (took 0.029585)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv","job_id":28},"response":{"exit_status":0,"stdout":"","stderr":""}},"thread_id":10809480,"fiber_id":12749060,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917724.694197,"message":"run (took 0.035323)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv","script":"chown -R vcap:vcap /var/vcap/sys/log/monit /var/vcap/store/rabbit/instances/bcaaf0d3-af9d-47b0-a366-97e29423efb0 /var/vcap/sys/service-log/rabbit/bcaaf0d3-af9d-47b0-a366-97e29423efb0 /var/vcap/packages/erlang","privileged":true},"response":{"exit_status":0,"stdout":"","stderr":""}},"thread_id":10809480,"fiber_id":12749060,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917724.7082088,"message":"Wrote snapshot in 0.000219","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":14489780,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":360,"method":"write_snapshot"}
{"timestamp":1381917724.7083278,"message":"limit_bandwidth (took 0.012984)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv","rate":94371,"burst":94371},"response":{"rate":94371,"burst":94371}},"thread_id":10809480,"fiber_id":14489780,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917724.7129092,"message":"Wrote snapshot in 0.000218","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":14657420,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":360,"method":"write_snapshot"}
{"timestamp":1381917724.713021,"message":"spawn (took 0.003780)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv","script":"/var/vcap/store/rabbitmq_common/bin/warden_service_ctl start /var/vcap/store/rabbit/instances/bcaaf0d3-af9d-47b0-a366-97e29423efb0 /var/vcap/sys/service-log/rabbit/bcaaf0d3-af9d-47b0-a366-97e29423efb0 /var/vcap/store/rabbitmq_common /var/vcap/packages/rabbitmq-3.0 /var/vcap/packages/erlang bcaaf0d3-af9d-47b0-a366-97e29423efb0"},"response":{"job_id":29}},"thread_id":10809480,"fiber_id":14657420,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917724.7237728,"message":"Wrote snapshot in 0.000227","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":16984120,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":360,"method":"write_snapshot"}
{"timestamp":1381917724.7238877,"message":"net_in (took 0.008048)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv","host_port":15010,"container_port":10001},"response":{"host_port":15010,"container_port":10001}},"thread_id":10809480,"fiber_id":16984120,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917724.7279842,"message":"Exited with status 1 (0.016s): [[\"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/src/closefds/closefds\", \"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/src/closefds/closefds\"], \"/var/vcap/store/rabbit/containers/178q2ebslsv/bin/iomux-link\", \"-w\", \"/var/vcap/store/rabbit/containers/178q2ebslsv/jobs/29/cursors\", \"/var/vcap/store/rabbit/containers/178q2ebslsv/jobs/29\"]","log_level":"warn","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","stdout":"","stderr":""},"thread_id":10809480,"fiber_id":17179440,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/spawn.rb","lineno":134,"method":"set_deferred_success"}
{"timestamp":1381917724.7284782,"message":"Wrote snapshot in 0.000340","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":17179440,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":360,"method":"write_snapshot"}
{"timestamp":1381917724.7403626,"message":"info (took 0.015594)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv"},"response":{"state":"active","events":[],"host_ip":"10.254.0.37","container_ip":"10.254.0.38","container_path":"/var/vcap/store/rabbit/containers/178q2ebslsv","memory_stat":"#<Warden::Protocol::InfoResponse::MemoryStat:0x0000000203c5a8>","cpu_stat":"#<Warden::Protocol::InfoResponse::CpuStat:0x0000000203e100>","bandwidth_stat":"#<Warden::Protocol::InfoResponse::BandwidthStat:0x00000002186df0>","job_ids":[29]}},"thread_id":10809480,"fiber_id":19469340,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917785.231933,"message":"info (took 0.023895)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv"},"response":{"state":"active","events":[],"host_ip":"10.254.0.37","container_ip":"10.254.0.38","container_path":"/var/vcap/store/rabbit/containers/178q2ebslsv","memory_stat":"#<Warden::Protocol::InfoResponse::MemoryStat:0x000000022bbd38>","cpu_stat":"#<Warden::Protocol::InfoResponse::CpuStat:0x000000022ba438>","bandwidth_stat":"#<Warden::Protocol::InfoResponse::BandwidthStat:0x0000000213d998>","job_ids":[]}},"thread_id":10809480,"fiber_id":17587720,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917785.2377288,"message":"Wrote snapshot in 0.000265","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":17440200,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":360,"method":"write_snapshot"}
{"timestamp":1381917785.2393465,"message":"spawn (took 0.002879)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv","script":"/var/vcap/store/rabbitmq_common/bin/warden_service_ctl stop /var/vcap/store/rabbit/instances/bcaaf0d3-af9d-47b0-a366-97e29423efb0 /var/vcap/sys/service-log/rabbit/bcaaf0d3-af9d-47b0-a366-97e29423efb0 /var/vcap/store/rabbitmq_common"},"response":{"job_id":30}},"thread_id":10809480,"fiber_id":17440200,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917785.2485044,"message":"Wrote snapshot in 0.000388","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":17179440,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":360,"method":"write_snapshot"}
{"timestamp":1381917785.2489347,"message":"link (took 0.009132)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv","job_id":30},"response":{"exit_status":0,"stdout":"","stderr":""}},"thread_id":10809480,"fiber_id":17440200,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917785.2490513,"message":"run (took 0.014130)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv","script":"/var/vcap/store/rabbitmq_common/bin/warden_service_ctl stop /var/vcap/store/rabbit/instances/bcaaf0d3-af9d-47b0-a366-97e29423efb0 /var/vcap/sys/service-log/rabbit/bcaaf0d3-af9d-47b0-a366-97e29423efb0 /var/vcap/store/rabbitmq_common"},"response":{"exit_status":0,"stdout":"","stderr":""}},"thread_id":10809480,"fiber_id":17440200,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917785.266088,"message":"Wrote snapshot in 0.000616","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":17980120,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":360,"method":"write_snapshot"}
{"timestamp":1381917785.2663023,"message":"stop (took 0.016283)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv"},"response":{}},"thread_id":10809480,"fiber_id":17980120,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
{"timestamp":1381917785.37557,"message":"Container destroyed","log_level":"debug","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv"},"thread_id":10809480,"fiber_id":16699940,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/linux.rb","lineno":129,"method":"do_destroy"}
{"timestamp":1381917785.3837552,"message":"destroy (took 0.116018)","log_level":"info","source":"Warden::Container::Linux","data":{"handle":"178q2ebslsv","request":{"handle":"178q2ebslsv"},"response":{}},"thread_id":10809480,"fiber_id":16699940,"process_id":21274,"file":"/var/vcap/data/packages/rabbit_node_ng/1.1/warden/lib/warden/container/base.rb","lineno":318,"method":"dispatch"}
This is the full deployment YAML used to deploy the service broker
# -------------------------------------------------------------------------------
# BOSH YML for deploying user services to BXB.
#
# Originally copied from:
# https://github.com/cloudfoundry/cf-services-contrib-release/blob/master/examples/dns-all.yml
# -------------------------------------------------------------------------------
---
name: bxb-cf-services
director_uuid: 248d63ba-5467-4b18-9968-affb9c56d1f2
#--------------------------------------------------------------------
# RELEASES
#--------------------------------------------------------------------
releases:
- name: cf-services-contrib
version: 1
#--------------------------------------------------------------------
# COMPILATION
#--------------------------------------------------------------------
compilation:
workers: 3
network: CPG-PaaS-CF
cloud_properties:
ram: 2048
disk: 8096
cpu: 4
#--------------------------------------------------------------------
# UPDATE
#--------------------------------------------------------------------
update:
canaries: 1
canary_watch_time: 30000-60000
update_watch_time: 30000-60000
max_in_flight: 4
#--------------------------------------------------------------------
# NETWORKS
#--------------------------------------------------------------------
networks:
- name: CPG-PaaS-CF
subnets:
- range: 10.0.6.0/24
reserved:
- 10.0.6.1 - 10.0.6.79
- 10.0.6.100 - 10.0.6.253
# No static range required
gateway: 10.0.6.254
dns:
- 144.254.71.184
cloud_properties:
name: CPG-PaaS-CF
#--------------------------------------------------------------------
# RESOURCE POOLS
#--------------------------------------------------------------------
resource_pools:
- name: cf-services-gateway
network: CPG-PaaS-CF
size: 1
stemcell:
name: bosh-stemcell
version: 0.8.0
cloud_properties:
ram: 2048
disk: 8192
cpu: 2
env:
bosh:
- name: cf-service-node-redis
network: CPG-PaaS-CF
size: 1
stemcell:
name: bosh-stemcell
version: 0.8.0
cloud_properties:
ram: 16384
disk: 8192
cpu: 2
env:
bosh:
- name: cf-service-node-rabbit
network: CPG-PaaS-CF
size: 1
stemcell:
name: bosh-stemcell
version: 0.8.0
cloud_properties:
ram: 16384
disk: 8192
cpu: 2
env:
bosh:
- name: cf-service-node-postgres
network: CPG-PaaS-CF
size: 0
stemcell:
name: bosh-stemcell
version: 0.8.0
cloud_properties:
ram: 16384
disk: 8192
cpu: 2
env:
bosh:
#--------------------------------------------------------------------
# JOBS
#--------------------------------------------------------------------
jobs:
- name: gateways
release: cf-services-contrib
template:
#- mongodb_gateway
#- memcached_gateway
#- postgresql_gateway_ng
- redis_gateway
- rabbit_gateway
#- vblob_gateway
instances: 1
resource_pool: cf-services-gateway
networks:
- name: CPG-PaaS-CF
default: [dns, gateway]
properties:
# Service credentials
uaa_client_id: "cf"
uaa_endpoint: http://uaa.ft2.cpgpaas.net
uaa_client_auth_credentials:
username: services
password: thisISnotREALLYmyPASSWORDnotATallOHnoNO
- name: redis_service_node
release: cf-services-contrib
template: redis_node_ng
instances: 1
resource_pool: cf-service-node-redis
persistent_disk: 102400
properties:
plan: default
networks:
- name: CPG-PaaS-CF
default: [dns, gateway]
- name: rabbit_service_node
release: cf-services-contrib
template: rabbit_node_ng
instances: 1
resource_pool: cf-service-node-rabbit
persistent_disk: 102400
properties:
plan: default
networks:
- name: CPG-PaaS-CF
default: [dns, gateway]
properties:
networks:
apps: CPG-PaaS-CF
management: CPG-PaaS-CF
cc:
srv_api_uri: http://api.ft2.cpgpaas.net
nats:
address: 10.0.6.102
port: 4222
user: nats
password: nats
authorization_timeout: 15
service_plans:
redis:
default:
description: "Developer, 250MB storage, 10 connections"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
max_clients: 10
quota_files: 4
quota_data_size: 240
enable_journaling: true
backup:
enable: false
lifecycle:
enable: false
serialization: enable
snapshot:
quota: 1
rabbit:
default:
description: "Developer, 250MB storage, 10 connections"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
max_clients: 10
quota_files: 4
quota_data_size: 240
enable_journaling: true
backup:
enable: false
lifecycle:
enable: false
serialization: enable
snapshot:
quota: 1
redis_gateway:
token: REDIS-TOKEN-PQ6cNYLjMWzsEoZEsQijXt5x8zd
default_plan: default
supported_versions: ["2.6"]
version_aliases:
current: "2.6"
cc_api_version: v2
redis_node:
supported_versions: ["2.6"]
default_version: "2.6"
max_tmp: 900
rabbit_gateway:
token: RABBIT-TOKEN-Pl1NnRIe85J8ZzQOb1a3wuqF
default_plan: "default"
supported_versions: ["3.0"]
version_aliases:
current: "3.0"
cc_api_version: v2
rabbit_node:
supported_versions: ["3.0"]
default_version: "3.0"
max_tmp: 900
On the postgres node the postgresql processes are running, but monit constantly tries to restart the job:
root@82fd4db5-1738-4186-9e31-77f3402b4312:~# tail -n50 /var/vcap/monit/monit.log
[UTC Dec 16 12:39:16] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:39:27] info : 'postgresql_node' process is running with pid 3595
[UTC Dec 16 12:39:37] error : 'postgresql_node' process is not running
[UTC Dec 16 12:39:37] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:39:37] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:39:48] info : 'postgresql_node' process is running with pid 3635
[UTC Dec 16 12:39:58] error : 'postgresql_node' process is not running
[UTC Dec 16 12:39:58] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:39:58] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:40:09] info : 'postgresql_node' process is running with pid 3675
[UTC Dec 16 12:40:19] error : 'postgresql_node' process is not running
[UTC Dec 16 12:40:19] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:40:19] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:40:30] info : 'postgresql_node' process is running with pid 3719
[UTC Dec 16 12:40:40] error : 'postgresql_node' process is not running
[UTC Dec 16 12:40:40] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:40:40] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:40:51] info : 'postgresql_node' process is running with pid 3759
[UTC Dec 16 12:41:01] error : 'postgresql_node' process is not running
[UTC Dec 16 12:41:01] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:41:01] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:41:12] info : 'postgresql_node' process is running with pid 3799
[UTC Dec 16 12:41:22] error : 'postgresql_node' process is not running
[UTC Dec 16 12:41:22] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:41:22] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:41:33] info : 'postgresql_node' process is running with pid 3840
[UTC Dec 16 12:41:44] error : 'postgresql_node' process is not running
[UTC Dec 16 12:41:44] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:41:44] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:41:55] info : 'postgresql_node' process is running with pid 3880
[UTC Dec 16 12:42:05] error : 'postgresql_node' process is not running
[UTC Dec 16 12:42:05] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:42:05] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:42:16] info : 'postgresql_node' process is running with pid 3920
[UTC Dec 16 12:42:26] error : 'postgresql_node' process is not running
[UTC Dec 16 12:42:26] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:42:26] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:42:37] info : 'postgresql_node' process is running with pid 3961
[UTC Dec 16 12:42:47] error : 'postgresql_node' process is not running
[UTC Dec 16 12:42:47] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:42:47] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:42:58] info : 'postgresql_node' process is running with pid 4001
[UTC Dec 16 12:43:08] error : 'postgresql_node' process is not running
[UTC Dec 16 12:43:08] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:43:08] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:43:19] info : 'postgresql_node' process is running with pid 4042
[UTC Dec 16 12:43:29] error : 'postgresql_node' process is not running
[UTC Dec 16 12:43:29] info : 'postgresql_node' trying to restart
[UTC Dec 16 12:43:29] info : 'postgresql_node' start: /var/vcap/jobs/postgresql_node_ng/bin/postgresql_node_ctl
[UTC Dec 16 12:43:40] info : 'postgresql_node' process is running with pid 4083
root@82fd4db5-1738-4186-9e31-77f3402b4312:~#
root@82fd4db5-1738-4186-9e31-77f3402b4312:~# ps aux|grep postgres
vcap 938 0.0 1.2 177060 21444 ? S< 12:17 0:00 /var/vcap/data/packages/postgresql93/1.1/bin/postgres -D /var/vcap/store/postgresql93
vcap 939 0.0 0.0 15728 484 ? S<s 12:17 0:00 postgres: logger process
vcap 941 0.0 0.1 177196 2140 ? S<s 12:17 0:00 postgres: checkpointer process
vcap 942 0.0 0.0 177060 1652 ? S<s 12:17 0:00 postgres: writer process
vcap 943 0.0 0.0 177060 864 ? S<s 12:17 0:00 postgres: wal writer process
vcap 944 0.0 0.1 178000 2040 ? S<s 12:17 0:00 postgres: autovacuum launcher process
vcap 945 0.0 0.0 17960 904 ? S<s 12:17 0:00 postgres: stats collector process
root 4165 65.0 0.7 56908 13056 ? R<l 12:44 0:00 ruby /var/vcap/packages/postgresql_node_ng/postgresql/bin/postgresql_node -c /var/vcap/jobs/postgresql_node_ng/config/postgresql_node.yml
root 4202 0.0 0.0 7688 836 pts/0 S+ 12:44 0:00 grep --color=auto postgres
As a result, the service is not registered in CF and it is impossible to provision.
This postgres package was restored in cf-release (for everyone not using RDS for cc/uaa DBs). Can it be removed from this release now? 17d5aa7
In redis & rabbit node logs:
==> /var/vcap/sys/log/monit/warden_ctl.err.log <==
tar: /var/vcap/stemcell_base.tar.gz: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
tar: Child returned status 2
tar: Exiting with failure status due to previous errors
What's this all about?
When trying to bind an existing mongo instance to a new app in the space we got an error. I was able to trace it back to the mongo_gateway:
[2013-10-29 13:40:10.942722] mongodb_gateway - pid=14075 tid=8754 fid=56f6 DEBUG -- Reply status:404, headers:{"Content-Type"=>"application/json"}, body:{"code":30300,"description":"dabd88a3-737f-4873-bcd9-6313d3555a0c not found"}
I checked the logs of the mongo_node as well but found nothing. I did find a record of the instance in /var/vcap/store/mongodb_node.db:
sqlite> select * from vcap_services_mongo_db_node_provisioned_services;
...
dabd88a3-737f-4873-bcd9-6313d3555a0c|10016|3607029e-4e44-4539-9c4f-41dbbe159b05|1|||admin|24ef00f0-b196-4ec0-937b-0343ef477d38|db|179ahlg9oal|10.251.0.6|2.2
...
Later on, I got the exact same problem with elasticsearch and memcached. Since I have the gateways for elasticsearch, memcached and mongo colocated on the same VM, I suspect this situation emerged after a bosh recreate of that particular machine.
UPDATE
After talking to @rkoster, I found out the gateway can't find the instance because the node somehow doesn't announce it anymore, even though it has a record of the instance in its database. The temporary workaround is to bosh ssh into the node and monit restart mongodb_node. The node then starts announcing all its listed instances again and the gateway finds the service. The issue still needs to be debugged, though.
Because the serialization_data_server job depends on it (vcap_redis).
I am running into this problem with several services while deploying them with Bosh-Lite.
For example, SSHing into the Redis Node VM:
root@5db9a81b-9229-45a6-8fb8-4d27b131314f:/var/vcap/sys# cat
rake aborted!
No such file or directory - /proc/sys/net/ipv4/ip_local_port_range
Tasks: TOP => warden:start
(See full trace by running task with --trace)
rake aborted!
No such file or directory - /proc/sys/net/ipv4/ip_local_port_range...
I've seen this very same issue here:
https://github.com/cloudfoundry/cf-release/issues/179
(from the example deployment file)
Add the following into all gateway.yml.erb:
service:
...
provider: contrib
provider_name: 'Community Contrib'
Currently, the shared PG user created for binding cannot create extensions:
bosh_8lrgjqwzb@0a535b80-9454-4b65-a8b6-b0862b9f49a7:~$ /var/vcap/packages/postgresql92/bin/psql -U u5aac063c3fb64277918b51f3c70267de -p 5434 -d d40b75cae67904ebc937496f72662e6d2
psql (9.2.4)
Type "help" for help.
d40b75cae67904ebc937496f72662e6d2=> create extension hstore;
ERROR: permission denied to create extension "hstore"
HINT: Must be superuser to create this extension.
d40b75cae67904ebc937496f72662e6d2=> ^D\q
bosh_8lrgjqwzb@0a535b80-9454-4b65-a8b6-b0862b9f49a7:~$ /var/vcap/packages/postgresql92/bin/psql -U vcap -p 5434 -d d40b75cae67904ebc937496f72662e6d2
psql (9.2.4)
Type "help" for help.
d40b75cae67904ebc937496f72662e6d2=# create extension hstore;
CREATE EXTENSION
Does the gateway need access to properties.redis_node or does only the redis_node job itself need this?
Currently you must specify high_water & low_water config:
service_plans:
postgresql:
default:
job_management:
high_water: 1400
low_water: 100
configuration:
lifecycle:
enable: false
Since these values aren't used, modify each *_gateway.yml.erb to insert arbitrary default values (250, 100) when job_management isn't specified in the config.
Currently gateway properties are global; can they go in the gateway job's properties?
The value of the property "plan" in the following template files is not quoted with single quotation marks:
./jobs/postgresql_node/templates/postgresql_node.yml.erb
./jobs/memcached_node/templates/memcached_node.yml.erb
./jobs/rabbit_node/templates/rabbit_node.yml.erb
./jobs/mysql_node/templates/mysql_node.yml.erb
./jobs/mongodb_node/templates/mongodb_node.yml.erb
./jobs/redis_node/templates/redis_node.yml.erb
./jobs/vblob_node/templates/vblob_node.yml.erb
This caused the following exception when starting the job:
/var/vcap/packages/mongodb_node/services/mongodb/vendor/bundle/ruby/1.9.1/gems/vcap_services_base-0.1.16/lib/base/node_bin.rb:137:in `parse_property': Invalid String object: 100 (RuntimeError)
	from /var/vcap/packages/mongodb_node/services/mongodb/vendor/bundle/ruby/1.9.1/gems/vcap_services_base-0.1.16/lib/base/node_bin.rb:54:in `start'
	from /var/vcap/packages/mongodb_node/services/mongodb/bin/mongodb_node:43:in `
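The root cause is plain YAML semantics: an unquoted scalar like `100` parses as an Integer, which is exactly what trips the `parse_property` String check above. A minimal demonstration:

```ruby
require "yaml"

# An unquoted plan name parses as an Integer...
unquoted = YAML.load("plan: 100")["plan"]
# ...while the single-quoted form stays a String, as the node expects.
quoted = YAML.load("plan: '100'")["plan"]
```

Quoting the ERB output, as the patch below does, guarantees a String regardless of what the plan name looks like.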
Here is the patch:
diff --git a/jobs/memcached_node/templates/memcached_node.yml.erb b/jobs/memcached_node/templates/memcached_node.yml.erb
index ac80661..409d02e 100644
--- a/jobs/memcached_node/templates/memcached_node.yml.erb
+++ b/jobs/memcached_node/templates/memcached_node.yml.erb
@@ -13,7 +13,7 @@ nats = "nats://#{nats_props.user}:#{nats_props.password}@#{nats_props.address}:#
%>
capacity: <%= plan_enabled && plan_conf.capacity || 16 %>
-plan: <%= plan %>
+plan: '<%= plan %>'
local_db: sqlite3:/var/vcap/store/memcached/memcached_node.db
mbus: <%= nats %>
index: <%= spec.index %>
diff --git a/jobs/mongodb_node/templates/mongodb_node.yml.erb b/jobs/mongodb_node/templates/mongodb_node.yml.erb
index 14f60c4..84ab58d 100644
--- a/jobs/mongodb_node/templates/mongodb_node.yml.erb
+++ b/jobs/mongodb_node/templates/mongodb_node.yml.erb
@@ -15,7 +15,7 @@ nats_props = properties.send(nats_props_name)
nats = "nats://#{nats_props.user}:#{nats_props.password}@#{nats_props.address}:#{nats_props.port}"
%>
capacity: <%= plan_enabled && plan_conf.capacity || 200 %>
-plan: <%= plan %>
+plan: '<%= plan %>'
local_db: sqlite3:/var/vcap/store/mongodb_node.db
mbus: <%= nats %>
index: <%= spec.index %>
diff --git a/jobs/mysql_node/templates/mysql_node.yml.erb b/jobs/mysql_node/templates/mysql_node.yml.erb
index 611a70d..8634bb8 100644
--- a/jobs/mysql_node/templates/mysql_node.yml.erb
+++ b/jobs/mysql_node/templates/mysql_node.yml.erb
@@ -12,7 +12,7 @@ nats_props = properties.send(nats_props_name)
nats = "nats://#{nats_props.user}:#{nats_props.password}@#{nats_props.address}:#{nats_props.port}"
%>
capacity: <%= plan_enabled && plan_conf.capacity || 200 %>
-plan: <%= plan %>
+plan: '<%= plan %>'
local_db: sqlite3:/var/vcap/store/mysql_node.db
base_dir: /var/vcap/store/mysql
mbus: <%= nats %>
diff --git a/jobs/postgresql_node/templates/postgresql_node.yml.erb b/jobs/postgresql_node/templates/postgresql_node.yml.erb
index 7618d5b..71f29a1 100644
--- a/jobs/postgresql_node/templates/postgresql_node.yml.erb
+++ b/jobs/postgresql_node/templates/postgresql_node.yml.erb
@@ -12,7 +12,7 @@ nats_props = properties.send(nats_props_name)
nats = "nats://#{nats_props.user}:#{nats_props.password}@#{nats_props.address}:#{nats_props.port}"
%>
capacity: <%= plan_enabled && plan_conf.capacity || 200 %>
-plan: <%= plan %>
+plan: '<%= plan %>'
local_db: sqlite3:/var/vcap/store/postgresql_node.db
base_dir: /var/vcap/store/postgresql
mbus: <%= nats %>
diff --git a/jobs/rabbit_node/templates/rabbit_node.yml.erb b/jobs/rabbit_node/templates/rabbit_node.yml.erb
index cfe7fc0..6af45c0 100644
--- a/jobs/rabbit_node/templates/rabbit_node.yml.erb
+++ b/jobs/rabbit_node/templates/rabbit_node.yml.erb
@@ -12,7 +12,7 @@ nats_props = properties.send(nats_props_name)
nats = "nats://#{nats_props.user}:#{nats_props.password}@#{nats_props.address}:#{nats_props.port}"
%>
capacity: <%= plan_enabled && plan_conf.capacity || 200 %>
-plan: <%= plan %>
+plan: '<%= plan %>'
local_db: sqlite3:/var/vcap/store/rabbit/rabbit_node.db
base_dir: /var/vcap/store/rabbit/instances
mbus: <%= nats %>
diff --git a/jobs/redis_node/templates/redis_node.yml.erb b/jobs/redis_node/templates/redis_node.yml.erb
index 08c2138..9f48fef 100644
--- a/jobs/redis_node/templates/redis_node.yml.erb
+++ b/jobs/redis_node/templates/redis_node.yml.erb
@@ -14,7 +14,7 @@ nats_props = properties.send(nats_props_name)
nats = "nats://#{nats_props.user}:#{nats_props.password}@#{nats_props.address}:#{nats_props.port}"
%>
capacity: <%= plan_enabled && plan_conf.capacity || 200 %>
-plan: <%= plan %>
+plan: '<%= plan %>'
local_db: sqlite3:/var/vcap/store/redis/redis_node.db
mbus: <%= nats %>
index: <%= spec.index %>
diff --git a/jobs/vblob_node/templates/vblob_node.yml.erb b/jobs/vblob_node/templates/vblob_node.yml.erb
index 2b19c24..c39ec60 100644
--- a/jobs/vblob_node/templates/vblob_node.yml.erb
+++ b/jobs/vblob_node/templates/vblob_node.yml.erb
@@ -13,7 +13,7 @@ nats_props = properties.send(nats_props_name)
nats = "nats://#{nats_props.user}:#{nats_props.password}@#{nats_props.address}:#{nats_props.port}"
%>
capacity: <%= plan_enabled && plan_conf.capacity || 200 %>
-plan: <%= plan %>
+plan: '<%= plan %>'
local_db: sqlite3:/var/vcap/store/vblob_node.db
mbus: <%= nats %>
index: <%= spec.index %>
$ bosh create release
Syncing blobs...
ruby/rubygems-1.7.2.tgz downloading 239.8K (0%).../Users/drnic/.rvm/gems/ruby-1.9.3-p448/gems/blobstore_client-1.5.0.pre.1055/lib/blobstore_client/simple_blobstore_client.rb:43:in `get_file': Could not fetch object, 404/ (Bosh::Blobstore::BlobstoreError)
CloudFoundry version: v136
Deployed 'cf-services-contrib-release' using 'dns-all.yml' and got an error:
Error 80006: Error filling in template `elasticsearch_gateway.yml.erb' for `services_contrib_gateway/0' (line 19: undefined method `user' for nil:NilClass)
Adding the nats configuration to properties resolved the error.
name: cf-services-contrib
director_uuid: ee4cfde0-aabf-4172-add4-afb14cb7d110 # CHANGE
releases:
compilation:
workers: 3
network: default
reuse_compilation_vms: true
cloud_properties:
instance_type: m1.medium # CHANGE
update:
canaries: 1
canary_watch_time: 30000-60000
update_watch_time: 30000-60000
max_in_flight: 4
max_errors: 1
networks:
resource_pools:
jobs:
properties:
nats:
address: 0.core.default.cf.bosh
port: 4222
user: nats
password: "c1oudc0w"
authorization_timeout: 5
networks:
apps: default
management: default
cc:
srv_api_uri: http://api.mycloud.com #CHANGE
service_plans:
mongodb:
default:
description: "Developer, shared VM, 250MB storage, 10 connections"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
max_clients: 10
quota_files: 4
quota_data_size: 240
enable_journaling: true
backup:
enable: true
lifecycle:
enable: true
serialization: enable
snapshot:
quota: 1
memcached:
default:
description: "Developer"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
vblob:
default:
description: "Developer"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
elasticsearch:
"free":
description: "Developer"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
postgresql:
default:
description: "Developer, 250MB storage, 10 connections"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
max_clients: 10
quota_files: 4
quota_data_size: 240
enable_journaling: true
backup:
enable: false
lifecycle:
enable: false
serialization: enable
snapshot:
quota: 1
redis:
default:
description: "Developer, 250MB storage, 10 connections"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
max_clients: 10
quota_files: 4
quota_data_size: 240
enable_journaling: true
backup:
enable: false
lifecycle:
enable: false
serialization: enable
snapshot:
quota: 1
rabbit:
default:
description: "Developer, 250MB storage, 10 connections"
free: true
job_management:
high_water: 230
low_water: 20
configuration:
capacity: 125
max_clients: 10
quota_files: 4
quota_data_size: 240
enable_journaling: true
backup:
enable: false
lifecycle:
enable: false
serialization: enable
snapshot:
quota: 1
mongodb_gateway:
token: "c1oudc0w" # CHANGE - the token you use later with cf create-service-auth-token
default_plan: default
supported_versions: ["2.2"]
version_aliases:
current: "2.2"
cc_api_version: v2
mongodb_node:
supported_versions: ["2.2"]
default_version: "2.2"
max_tmp: 900
memcached_gateway:
token: "c1oudc0w" # CHANGE - the token you use later with cf create-service-auth-token
supported_versions: ["1.4"]
version_aliases:
current: "1.4"
cc_api_version: v2
memcached_node:
supported_versions: ["1.4"]
default_version: "1.4"
vblob_gateway:
token: "c1oudc0w" # CHANGE - the token you use later with cf create-service-auth-token
supported_versions: ["0.51"]
version_aliases:
current: "0.51"
cc_api_version: v2
vblob_node:
supported_versions: ["0.51"]
default_version: "0.51"
elasticsearch_gateway:
token: "c1oudc0w" # CHANGE - the token you use later with cf create-service-auth-token
supported_versions: ["0.20"]
version_aliases:
current: "0.20"
cc_api_version: v2
elasticsearch_node:
supported_versions: ["0.20"]
default_version: "0.20"
postgresql_gateway:
token: "c1oudc0w" # CHANGE - the token you use later with cf create-service-auth-token
default_plan: default
supported_versions: ["9.2"]
version_aliases:
current: "9.2"
cc_api_version: v2
postgresql_node:
supported_versions: ["9.2"]
default_version: "9.2"
max_tmp: 900
password: "c1oudc0w" # CHANGE
redis_gateway:
token: "c1oudc0w" # CHANGE - the token you use later with cf create-service-auth-token
default_plan: default
supported_versions: ["2.6"]
version_aliases:
current: "2.6"
cc_api_version: v2
redis_node:
supported_versions: ["2.6"]
default_version: "2.6"
max_tmp: 900
rabbit_gateway:
token: "c1oudc0w" # CHANGE - the token you use later with cf create-service-auth-token
default_plan: "default"
supported_versions: ["3.0"]
version_aliases:
current: "3.0"
cc_api_version: v2
rabbit_node:
supported_versions: ["3.0"]
default_version: "3.0"
max_tmp: 900
When I deployed, there was an error:
Error 400007: `services_contrib_gateway/0' is not running after update
and bosh vms shows:
| services_contrib_gateway/0 | failing | small | 80.80.80.217 |
and no plans are offered for cf create-service:
root@ubuntu:~/bosh-workspace/deployments/cf-services# cf services -m
Getting services... OK
service version provider plans description
blob 0.51 core Blob store
elasticsearch 0.20 core Elasticsearch search service
memcached 1.4 core Memcached in-memory cache
mongodb 2.2 core MongoDB NoSQL database
postgresql 9.2 core PostgreSQL database (vFabric)
rabbitmq 3.0 core RabbitMQ message queue
redis 2.6 core Redis key-value store
No other service puts service_bin_dir underneath warden.
$ bosh sync blobs
Syncing blobs...
sqlite/sqlite-autoconf-3070500.tar.gz downloading 1.5M (0%).../Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/blobstore_client-1.5.0.pre.740/lib/blobstore_client/simple_blobstore_client.rb:43:in `get_file': Could not fetch object, 403/ (Bosh::Blobstore::BlobstoreError)
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/blobstore_client-1.5.0.pre.740/lib/blobstore_client/s3_blobstore_client.rb:88:in `get_file'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/blobstore_client-1.5.0.pre.740/lib/blobstore_client/base.rb:50:in `get'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/bosh_cli-1.5.0.pre.740/lib/cli/blob_manager.rb:295:in `download_blob'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/bosh_cli-1.5.0.pre.740/lib/cli/blob_manager.rb:230:in `block in process_index'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/bosh_cli-1.5.0.pre.740/lib/cli/blob_manager.rb:211:in `each_pair'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/bosh_cli-1.5.0.pre.740/lib/cli/blob_manager.rb:211:in `process_index'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/bosh_cli-1.5.0.pre.740/lib/cli/blob_manager.rb:155:in `sync'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/bosh_cli-1.5.0.pre.740/lib/cli/commands/blob_management.rb:50:in `sync'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/bosh_cli-1.5.0.pre.740/lib/cli/command_handler.rb:57:in `run'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/bosh_cli-1.5.0.pre.740/lib/cli/runner.rb:59:in `run'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/bosh_cli-1.5.0.pre.740/lib/cli/runner.rb:18:in `run'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/gems/bosh_cli-1.5.0.pre.740/bin/bosh:7:in `<top (required)>'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/bin/bosh:23:in `load'
from /Users/drnic/.rvm/gems/ruby-1.9.3-p429/bin/bosh:23:in `<main>'
I used the latest cf-services-contrib-release to deploy postgresql:
https://github.com/cloudfoundry/cf-services-contrib-release
I used the example yml:
https://github.com/cloudfoundry/cf-services-contrib-release/blob/master/examples/dns-postgresql.yml
but when I run cf create-service, there is no plan to choose:
root@ubuntu:~/bosh-workspace/deployments/cf-openstack# cf create-service
1:postgresql 9.2
What kind?> 1
Name?> postgresql-e4269
Which plan?>
Which plan?> default
Unknown answer, please try again!
When trying to build the package using the following steps:
git clone https://github.com/cloudfoundry/cf-services-contrib-release.git
./update
bosh create release --force
I get:
Package `vblob_node_ng' has a glob that resolves to an empty file list: `services/ng/vblob/**/*`
Is there any way to fix this?
While doing a clean bosh deploy I get:
Error 450001: Failed to get updated incarnation from Monit:
["/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/bosh_agent-1.5.0.pre.3/lib/bosh_agent/monit.rb:147:in `reload'",
"/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/bosh_agent-1.5.0.pre.3/lib/bosh_agent/message/apply.rb:180:in `reload_monit'",
"/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/bosh_agent-1.5.0.pre.3/lib/bosh_agent/message/apply.rb:79:in `apply'",
"/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/bosh_agent-1.5.0.pre.3/lib/bosh_agent/message/apply.rb:10:in `process'",
"/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/bosh_agent-1.5.0.pre.3/lib/bosh_agent/handler.rb:277:in `process'",
"/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/bosh_agent-1.5.0.pre.3/lib/bosh_agent/handler.rb:262:in `process_long_running'",
"/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/bosh_agent-1.5.0.pre.3/lib/bosh_agent/handler.rb:189:in `block in process_in_thread'",
"<internal:prelude>:10:in `synchronize'",
"/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/bosh_agent-1.5.0.pre.3/lib/bosh_agent/handler.rb:187:in `process_in_thread'",
"/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/bosh_agent-1.5.0.pre.3/lib/bosh_agent/handler.rb:167:in `block in handle_message'",
"/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:1060:in `call'",
"/var/vcap/bosh/lib/ruby/gems/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:1060:in `block in spawn_threadpool'"]
To reproduce, add the mongodb_node job to your manifest and try to deploy.
Stack trace:
+ set -e
+ cp -a services/tools/mongodb_proxy/README.md services/tools/mongodb_proxy/build.sh services/tools/mongodb_proxy/config services/tools/mongodb_proxy/src /var/vcap/packages/mongodb_proxy
+ cp -a services/govendor /var/vcap/packages/mongodb_proxy
++ readlink -nf /var/vcap/packages/golang
+ GOLANG_PATH=/var/vcap/data/packages/golang/0.1-dev.1
+ PATH=/var/vcap/data/packages/golang/0.1-dev.1/bin:/var/vcap/bosh/bin:/usr/local/bin:/usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R6/bin
++ readlink -nf /var/vcap/packages/golang
+ export GOROOT=/var/vcap/data/packages/golang/0.1-dev.1
+ GOROOT=/var/vcap/data/packages/golang/0.1-dev.1
++ readlink -nf /var/vcap/packages/mongodb_proxy/govendor
+ GOVENDOR=/var/vcap/data/packages/mongodb_proxy/0.1-dev.1/govendor
++ readlink -nf /var/vcap/packages/mongodb_proxy
+ export GOPATH=/var/vcap/data/packages/mongodb_proxy/0.1-dev.1:/var/vcap/data/packages/mongodb_proxy/0.1-dev.1/govendor
+ GOPATH=/var/vcap/data/packages/mongodb_proxy/0.1-dev.1:/var/vcap/data/packages/mongodb_proxy/0.1-dev.1/govendor
+ cd /var/vcap/packages/mongodb_proxy/src
+ go install proxyctl
# launchpad.net/goyaml
decode.go: In function '_cgo_bec77eaae85a_Cfunc_event_alias':
decode.go:34: warning: assignment from incompatible pointer type
decode.go: In function '_cgo_bec77eaae85a_Cfunc_event_mapping_start':
decode.go:44: warning: assignment from incompatible pointer type
decode.go: In function '_cgo_bec77eaae85a_Cfunc_event_scalar':
decode.go:54: warning: assignment from incompatible pointer type
decode.go: In function '_cgo_bec77eaae85a_Cfunc_event_sequence_start':
decode.go:64: warning: assignment from incompatible pointer type
# go-mongo-proxy/proxy
go-mongo-proxy/proxy/server.go:144: cannot convert nil to type steno.Logger
Possible cause: cloudfoundry/gosteno@1e6f4ee
Currently we support a schema that looks like:
service_bin_dir:
'2.4': /var/vcap/packages/rabbitmq-2.4
'2.8': /var/vcap/packages/rabbitmq-2.8
'3.0': /var/vcap/packages/rabbitmq-3.0
erlang: /var/vcap/packages/erlang
This means we have to find a version of erlang that works with all supported service versions. Over time, either a future rabbitmq will not support an old erlang, or an old service will not be able to use a newer erlang.
The loading of the above 'X.Y' bin_dir is done in vcap-services-base, but the extra packages are loaded in a bespoke way within the service subclasses. This is icky.
I'd propose the following:
service_bin_dir:
'2.8':
- /var/vcap/packages/rabbitmq-2.8
- /var/vcap/packages/erlangX
'3.0':
- /var/vcap/packages/rabbitmq-3.0
- /var/vcap/packages/erlangY
Now vcap-services-base can quickly and accurately load the correct packages.
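Under the proposed schema, loading becomes a single lookup (the method name here is an assumption; the package paths are the ones from the example above):

```ruby
# Resolve every package dir a given service version needs. Each version
# maps to its own full list, so erlang can differ per rabbitmq version.
def resolve_bin_dirs(service_bin_dir, version)
  Array(service_bin_dir.fetch(version) { raise "unsupported version: #{version}" })
end

service_bin_dir = {
  "2.8" => ["/var/vcap/packages/rabbitmq-2.8", "/var/vcap/packages/erlangX"],
  "3.0" => ["/var/vcap/packages/rabbitmq-3.0", "/var/vcap/packages/erlangY"],
}
```

The bespoke per-subclass package handling collapses into this one generic lookup in vcap-services-base.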
For the mongo service, some scenarios need MongoDB server-side scripting. By default this is disabled in mongodb.conf as follows:
noscripting = true
This parameter is not configurable in the deployment yaml file, so the only option today is to modify mongodb.conf on the mongo node after deployment.
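A sketch of how the node template could expose this (the property name `noscripting` is an assumption, not an existing entry in the job spec):

```ruby
# Render the mongodb.conf line from a manifest properties hash, keeping
# the current safe default (noscripting = true) when the property is absent.
def noscripting_line(props)
  value = props.fetch("noscripting", true)
  "noscripting = #{value}"
end
```

An operator who needs server-side scripting would then set `noscripting: false` under the mongodb_node properties instead of hand-editing the node after deploy.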