cloudve / cloudbridge

A consistent interface to multiple IaaS clouds; in Python.

Home Page: https://cloudbridge.cloudve.org

License: MIT License

Python 100.00%
multi-cloud aws azure openstack gcp hacktoberfest

cloudbridge's Introduction

CloudBridge provides a consistent layer of abstraction over different Infrastructure-as-a-Service cloud providers, reducing or eliminating the need to write conditional code for each cloud.

Documentation

Detailed documentation can be found at http://cloudbridge.cloudve.org.

Build Status
Provider/Environment Python 3.8
Amazon Web Services aws-py38
Google Cloud Platform gcp-py38
Microsoft Azure azure-py38
OpenStack os-py38
Mock Provider mock-py38

Installation

Install the latest release from PyPI:

pip install cloudbridge[full]

For other installation options, see the installation page in the documentation.

Usage example

To get started with CloudBridge, export your cloud access credentials (e.g., AWS_ACCESS_KEY and AWS_SECRET_KEY for AWS) and start exploring the API:

from cloudbridge.factory import CloudProviderFactory, ProviderList

provider = CloudProviderFactory().create_provider(ProviderList.AWS, {})
print(provider.security.key_pairs.list())

The exact same command (as well as any other CloudBridge method) will run with any of the supported providers: ProviderList.[AWS | AZURE | GCP | OPENSTACK]!

Citation

N. Goonasekera, A. Lonie, J. Taylor, and E. Afgan, "CloudBridge: a Simple Cross-Cloud Python Library," presented at the Proceedings of the XSEDE16 Conference on Diversity, Big Data, and Science at Scale, Miami, USA, 2016. DOI: http://dx.doi.org/10.1145/2949550.2949648

Quick Reference

The following object graph shows how to access various provider services, and the resource that they return.

CloudBridge Quick Reference

Design Goals

  1. Create a cloud abstraction layer which minimises or eliminates the need for cloud-specific special casing (i.e., not requiring clients to write if EC2 do x else if OPENSTACK do y).
  2. Have a suite of conformance tests which are comprehensive enough that goal 1 can be achieved. This would also mean that clients need not manually test against each provider to make sure their application is compatible.
  3. Opt for a minimum set of features that a cloud provider will support, instead of a lowest common denominator approach. This means that reasonably mature clouds like Amazon and OpenStack are used as the benchmark against which functionality & features are determined. Therefore, there is a definite expectation that the cloud infrastructure will support a compute service with support for images and snapshots and various machine sizes. The cloud infrastructure will very likely support block storage, although this is currently optional. It may optionally support object storage.
  4. Make the CloudBridge layer as thin as possible without compromising goal 1. By wrapping the cloud provider's native SDK and doing the minimal work necessary to adapt the interface, we can achieve greater development speed and reliability since the native provider SDK is most likely to have both properties.

Contributing

Community contributions for any part of the project are welcome. If you have a completely new idea or would like to bounce your idea before moving forward with the implementation, feel free to create an issue to start a discussion.

Contributions should come in the form of a pull request. We strive for 100% test coverage, so code will only be accepted if it comes with appropriate tests and does not break existing functionality. Further, the code needs to be well documented and all methods must have docstrings. We largely adhere to the PEP8 style guide with 80-character lines, 4-space indentation (spaces instead of tabs), and explicit, one-per-line imports, among other conventions. Please keep the style consistent with the rest of the project.

Conceptually, the library is laid out such that there is a factory used to create a reference to a cloud provider. Each provider offers a set of services and resources. Services typically perform actions while resources offer information (and can act on themselves, when appropriate). The structure of each object is defined via an abstract interface (see cloudbridge/providers/interfaces) and any object should implement the defined interface. If adding a completely new provider, take a look at the provider development page in the documentation.

cloudbridge's People

Contributors

01000101 abhi005 afgane almahmoud ankit-bhambhani baizhang chandramsekhar chiniforooshan dyex719 fabiorosado jatinkinra jatinkinraais machristie madhugilla madhukargilla malloryfreeberg martinpaulo moshefriedland nsoranzo nuwang patchkez ryansiu1995 sheer-lore vikramais vikramdoda vjalili vveeraais waffle-iron


cloudbridge's Issues

Non-portable bucket.create()

The code here: https://github.com/galaxyproject/galaxy/pull/4487/files#r135651386 suggests that the process for creating a bucket is not portable across clouds. For example, if we test this code on OpenStack, it might work in one go, and one might not think of writing a loop to retry till the bucket is available. However, when run on AWS, it would fall over due to the lack of a loop.

We should probably handle this within cloudbridge, so that the user experience is consistent. Some issues/solutions come to mind:

  1. Why is this happening? I can't recall this being a problem during tests for example.
  2. However, the Go code here has a WaitUntilBucketExists(): http://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/s3-example-basic-bucket-operations.html
  3. We could have a similar waitTillReady() after the bucket is created before returning.
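A minimal sketch of such a polling helper, assuming nothing about the CloudBridge API (the wait_till_exists name and the check_exists callable are hypothetical stand-ins for a HEAD request on the newly created bucket):

```python
import time

def wait_till_exists(check_exists, timeout=60, interval=2):
    """Poll ``check_exists`` until it returns True or ``timeout`` elapses.

    ``check_exists`` is any zero-argument callable, e.g. a closure that
    performs a HEAD request on the newly created bucket.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check_exists():
            return True
        time.sleep(interval)
    raise TimeoutError("bucket did not become available in %s seconds" % timeout)
```

Calling this right after bucket creation (before returning to the user) would make the eventual-consistency behaviour of AWS invisible to clients.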

GCECloudProvider: which config options are necessary?

I'm looking at the config options for GCECloudProvider and have noticed the following:

  • 'gce_client_email' appears to not be used anywhere in the code. Regardless, it is also present in the credentials file so it seems redundant to specify it as a config option too.
  • 'gce_project_name' is used but likewise the project id is available in the credentials file.

It would be better to just configure the credentials file and get the client email and project id from the credentials file instead of having this redundancy.

There also seems to be some inconsistency with the default_zone and region_name config options:

  • The GCEInstanceService.create method accepts a zone argument but effectively the code ignores this and uses the default_zone config option instead.
  • Even if GCEInstanceService.create used the zone argument, it seems like the API allows creating an instance in one zone but then not being able to, for example, list it later since GCEInstanceService.list only uses the default_zone config option.
  • region_name is used more like a "default region name" (floating_ips, create_floating_ips), so it would seem a better name for region_name is default_region_name

@afgane @nuwang @mbookman

Network management TODOs

Network management needs some attention so here's a running list of TODOs.

  • Add a library configuration option (as a .cloudbridge file) to specify a default private network to be used by CloudBridge. This will hopefully simplify the process of deploying instances into private networks (e.g., _resolve_launch_options method)
  • Update docs to focus on launch examples using private networking
  • Add functionality to create/delete a router + link it
  • Figure out how to resolve differences between provider network IDs within the library vs. requiring the client to do it (see galaxyproject/cloudlaunch@be1d4f0#commitcomment-17950007 for an example)
  • Add methods to adding and removing static IPs, along with tests (see 93ee031)

Use boto3

The AWS implementation uses the 'old' boto library. Replace it with boto3. Besides boto3 being the actively developed version of the library, this change gets CloudBridge closer to v0.2 and will allow much better filtering of returned results than the current implementation.

test_crud_snapshot delete reverting

OpenStack tests run on Jetstream keep timing out lately, and here is where it happens, although it's not at all clear why: https://github.com/gvlproject/cloudbridge/blob/master/test/test_block_store_service.py#L149

Adding a few print statements to that code:

print("\nDeleting snap %s (%s)" % (snap.name, snap.id))
snap.delete()
snap.refresh()
print("Deleted said snap (status: %s); waiting until gone" % snap.state)
snap.wait_for(
  [SnapshotState.UNKNOWN],
  terminal_states=[SnapshotState.ERROR])
print("cleanup snap gone")

the following is shown. The OSS and <Response> bits are ones I added in the OpenStack snap delete implementation (https://github.com/gvlproject/cloudbridge/blob/master/cloudbridge/cloud/providers/openstack/resources.py#L665).

Deleting snap cb_snap-cb_crudsnap-c937cd55-4cac-41e8-a80b-681959c740d3 (3a47f71c-0364-4592-94ac-573e768a6572)
OSS deleting cb_snap-cb_crudsnap-c937cd55-4cac-41e8-a80b-681959c740d3
OSS deleted snap
<Response [202]>
Deleted said snap; waiting until gone
Will wait for cb_snap-cb_crudsnap-c937cd55-4cac-41e8-a80b-681959c740d3 (configuring) to reach ['unknown']
Object cb_snap-cb_crudsnap-c937cd55-4cac-41e8-a80b-681959c740d3 in state configuring. Waiting for ['unknown']...
Object cb_snap-cb_crudsnap-c937cd55-4cac-41e8-a80b-681959c740d3 in state available. Waiting for ['unknown']...
Object cb_snap-cb_crudsnap-c937cd55-4cac-41e8-a80b-681959c740d3 in state available. Waiting for ['unknown']...
Object cb_snap-cb_crudsnap-c937cd55-4cac-41e8-a80b-681959c740d3 in state available. Waiting for ['unknown']...
Object cb_snap-cb_crudsnap-c937cd55-4cac-41e8-a80b-681959c740d3 in state available. Waiting for ['unknown']...
... REPEATS INDEFINITELY

configuring state is a mapped from OS's deleting state (https://github.com/gvlproject/cloudbridge/blob/master/cloudbridge/cloud/providers/openstack/resources.py#L591), so it looks like for some reason it reverts back to available. The silly part is that afterwards, the snapshot can be manually deleted (using CloudBridge or the dashboard).

The only discovery is that after not attempting to delete the snap during the test run and doing so manually afterwards, the snapshot gets stuck in the deleting state (tried it 3 times). Given all of this, it is most likely an issue with Jetstream, but logging it here for the record.

GCEInstanceService.create doesn't return created instance

The following code, based on the tutorial, doesn't work because GCEInstanceService.create doesn't return the created instance

debian_img_id = 'debian-8-jessie-v20170523'
img = provider.compute.images.get(debian_img_id)
inst_type = sorted([t for t in provider.compute.instance_types.list()
                    if t.vcpus >= 2 and t.ram >= 4],
                   key=lambda x: x.vcpus*x.ram)[0]
inst = provider.compute.instances.create(
    name='cloudbridge-intro', image=img, instance_type=inst_type)
# Wait until ready
inst.wait_till_ready()  # This is a blocking call
# Show instance state
print(inst.state)

Fails with error

Traceback (most recent call last):
  File "test_gce.py", line 29, in <module>
    inst.wait_till_ready()  # This is a blocking call
AttributeError: 'NoneType' object has no attribute 'wait_till_ready'

I confirmed through the Google Cloud Engine dashboard that the instance was in fact created.

The tests expect block storage to be available in all zones

As written, the tests for the block store service expect block storage to be available in the same default zone as the instance created. On our nectar cloud this is not necessarily true: block storage for a given project may only be available in one of the zones. Hence, tests that attempt to create block storage in the same default zone as the instance will most often fail.

Zones don't have an internal representation/implementation

The following code returns a Zone object native to the provider rather than a CloudBridge-native object.

>>> r = provider.compute.regions.list()[0]
>>> r.zones
[<AvailabilityZone: nova>]  # OpenStack native object

A complication related to this is that AWS does not provide a Zone listing endpoint.

Add an import_key_pair option + expose public ssh key

Create key pair doesn't expose public ssh key. We also need a way to import an existing keypair.

Therefore, we can combine both requirements: always generate the public/private key pair locally from within create_key_pair, and then call import_key_pair internally to upload that key pair.

  • Add import_key_pair public method
  • Change create_key_pair to generate keypairs locally and then delegate to import_key_pair

The Azure and GCE providers already have reference implementations.
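A rough sketch of the proposed delegation, with all names hypothetical and a placeholder standing in for real key generation (a real implementation would use a crypto library and only ever upload the public half):

```python
import os

class KeyPairService:
    """Sketch: create_key_pair generates locally, then delegates to import_key_pair."""

    def __init__(self):
        self._stored = {}  # stands in for the provider-side key store

    def import_key_pair(self, name, public_key_material):
        # Upload only the public half to the provider (stubbed as a dict here).
        self._stored[name] = public_key_material
        return name

    def create_key_pair(self, name):
        # PLACEHOLDER key material -- a real implementation would generate
        # an actual RSA/Ed25519 pair with a library such as `cryptography`.
        private_key = os.urandom(32).hex()
        public_key = "ssh-rsa " + os.urandom(32).hex()
        self.import_key_pair(name, public_key)
        # Return both halves so the caller can persist the private key,
        # which the provider never sees.
        return name, private_key, public_key
```

This shape also exposes the public key to the caller, addressing the first half of the issue.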

Update tests and cloudbridge code to restrict resource names

In #44 (comment), @nuwang indicated:

... I was thinking that perhaps we should consider enforcing RFC1035 for all names in cloudbridge and document it at an interface level. The only thing that bugs me a little is the lack of spaces for a descriptive name. Perhaps it can be "RFC1035+spaces allowed" and GCE can replace spaces with underscores where necessary?

From the GCP implementation perspective, this would be great. We don't really have any good alternatives (note that GCE accepts dashes, rather than underscores).

The current set of tests, run against GCP, are failing because resource names in the tests do not follow the above assumptions. Can we get the tests updated, along with any anticipated enforcement elsewhere by the CloudBridge API?

Thanks.

Where to place SecurityGroups?

  1. In OpenStack, SecurityGroups appear under networking.
  2. In AWS, SecurityGroups appear under "Security and Networking" in EC2 and also under "Security" in VPCs (along with Network ACLs).
  3. In Azure, only network security groups are available. I'm not sure that the current implementation of SecurityGroups in CB-Azure is correct - I don't think the NSG is being restricted to a VM.
  4. In GCE, there's only a firewall, with a tag applying it to a VM.

So we have a half-half situation here. The first two providers have a SecurityGroup concept that applies only to individual VMs (or rather, individual network interfaces). The last two providers have network security groups, which in general apply to subnets, and are analogous to Network ACLs and Firewalls in AWS and OpenStack respectively.

So I guess that raises two questions:

  1. Should we move SecurityGroups under networking? Or when we eventually introduce Firewalls, should that be placed under Security instead?
  2. Should SecurityGroups be deprecated in the long run in favour of Firewalls/NSGs instead? Alternatively, we could transparently support both, depending on the target that the firewall rule is applied to. If the target is an instance, we can create a SecurityGroup in AWS/OpenStack. If the target is a subnet, we create a Network ACL/Firewall. On Azure and GCE, this will work in a very straightforward way because only the tag changes.

Find() method is inconsistent

The find method lacks consistency across the interface. The following signatures exist:
find(self, name)
find(self, **kwargs)
find(self, name, limit=None, marker=None)

Currently, we only have a meaningful search by name.
Ideally, it should be something like the following:

  1. Find will allow for filtering by any public attribute on the resource (e.g., img.find(size > 4) or instance_type.find(ram > 3, cpu < 4)).
  2. AWS, for example, supports searching with wildcards (e.g., name="abc*").

This would mean that we would have to define a filter vocabulary like in Django, letting provider implementations optimise some searches where possible (e.g. wildcard search by name), but fall back to a default implementation when not. Needs some discussion.
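A sketch of what the default (in-memory) implementation of such a Django-style filter vocabulary could look like; the `__gt`/`__lt` suffix convention and the `default_find` name are assumptions for illustration, not CloudBridge API:

```python
import operator

# Map Django-style lookup suffixes to comparison operators.
_OPS = {
    "gt": operator.gt,
    "lt": operator.lt,
    "eq": operator.eq,
    "contains": lambda value, arg: arg in value,
}

def default_find(resources, **filters):
    """Fallback find(): filter an in-memory list by attribute lookups.

    Providers could override this to push e.g. wildcard name searches
    down to the cloud API where natively supported.
    """
    results = []
    for res in resources:
        for key, arg in filters.items():
            attr, _, op_name = key.partition("__")  # "ram__gt" -> ("ram", "gt")
            op = _OPS[op_name or "eq"]              # bare attribute means equality
            if not op(getattr(res, attr), arg):
                break
        else:
            results.append(res)
    return results
```

With this split, every provider gets correct (if slow) filtering for free, and only the optimisable lookups need provider-specific code.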

GCE provider issues

In testing PR #31, I came across a few issues with the GCE provider so will summarize those here to keep track of them (@chiniforooshan, @baizhang, @mbookman):

  • KeyError: 'methods' when creating a provider
    Cannot create a GCE provider because getting the following exception:
/Users/eafgan/projects/pprojects/cloudbridge-branches/gce/cloudbridge/cloud/providers/gce/provider.pyc in __init__(self, connection)
     83         # We will not mutate self._desc; it's OK to use items() in Python 2.x.
     84         for resource, resource_desc in desc['resources'].items():
---> 85             methods = resource_desc['methods']
     86             if 'get' not in methods:
     87                 continue

KeyError: 'methods'

This happens on both py27 and py36 with a provider config like this:

gce_config = {'gce_project_name': 'project name',
                       'gce_service_creds_file': 'service acct file.json'}

Simply changing line 85 in gce provider.py to methods = resource_desc.get('methods', "") resolves this but I'm not sure if that's the intended path.

  • TypeError: Missing required parameter "zone" when operating on some compute services
    When trying to run the following commands (possibly others) provider.compute.instances.list() or provider.compute.instance_types.list(), getting the following traceback:
/Users/eafgan/.pyenv/versions/cb-py2/lib/python2.7/site-packages/googleapiclient/discovery.pyc in method(self, **kwargs)
    725     for name in parameters.required_params:
    726       if name not in kwargs:
--> 727         raise TypeError('Missing required parameter "%s"' % name)
    728
    729     for name, regex in six.iteritems(parameters.pattern_params):

TypeError: Missing required parameter "zone"

Explicitly defining gce_default_zone in the provider config solves this, but can't we have a sensible built-in default?

  • instances.list method does not work when no instances are present
    Just running provider.compute.instances.list() comes back with this when there are no live instances currently available:
/Users/eafgan/projects/pprojects/cloudbridge-branches/gce/cloudbridge/cloud/providers/gce/services.pyc in list(self, limit, marker)
    524             pageToken=marker).execute()
    525         instances = [GCEInstance(self.provider, inst)
--> 526                      for inst in response['items']]
    527         return ServerPagedResultList(len(instances) > max_result,
    528                                      response.get('nextPageToken'),

KeyError: 'items'
  • instance.create method does not return an Instance object of the instance that was just created.

  • instance.refresh() or provider.compute.instances.get(inst_id) causes an exception:

/Users/eafgan/.pyenv/versions/cb-py2/lib/python2.7/site-packages/oauth2client/client.pyc in _do_refresh_request(self, http)
    810             except (TypeError, ValueError):
    811                 pass
--> 812             raise HttpAccessTokenRefreshError(error_msg, status=resp.status)
    813
    814     def _revoke(self, http):

HttpAccessTokenRefreshError: invalid_scope: Empty or missing scope not allowed.

  • On an instance.create(name='ea-pr', image=img, instance_type=it, subnet=sn) call, getting

/Users/eafgan/projects/pprojects/cloudbridge-branches/gce/cloudbridge/cloud/providers/gce/services.py in create(self, name, image, instance_type, subnet, zone, key_pair, security_groups, user_data, launch_config, **kwargs)
    495                          .insert(project=self.provider.project_name,
    496                                  zone=self.provider.default_zone,
--> 497                                  body=config)
    498                          .execute())
    499         if 'zone' not in operation:

HttpError: <HttpError 400 when requesting https://www.googleapis.com/compute/v1/projects/isb-cgc-03-0002/zones/us-east1-b/instances?alt=json returned "Invalid value for field 'resource.networkInterfaces[0]': ''. Subnetwork should be specified for custom subnetmode network">

This TODO is probably to blame: https://github.com/gvlproject/cloudbridge/blob/gce/cloudbridge/cloud/providers/gce/services.py#L476
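The defensive lookup suggested for the first item above could be sketched as follows (the function name is hypothetical; defaulting to an empty dict rather than a string keeps the `'get' in methods` membership test type-safe):

```python
def iter_gettable_resources(desc):
    """Yield (name, resource_desc) for resources that expose a 'get' method.

    Skips resource entries without a 'methods' key instead of raising
    KeyError, as happens on line 85 of the GCE provider today.
    """
    for resource, resource_desc in desc.get("resources", {}).items():
        methods = resource_desc.get("methods", {})
        if "get" in methods:
            yield resource, resource_desc
```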

Future release features

A list of features to be added in a future release:

  • Use boto3 for AWS backend (see #22)
  • Increase test coverage to 97%
  • Wrap all exceptions in cloudbridge equivalents
  • Add support for Snapshot sharing
  • Add Metadata service (see #23)

BONUS

  • Add Google Cloud as a new provider (see gce branch)

Does Galaxy need multi-regionality?

An AWS VPC is created in a single region and AWS supports adding one or more subnets in each availability zone. On the other hand, in GCP, a VPC is a global resource, spanning all regions. GCP supports adding subnets in each region. IMO, the simplest way to unify the API in CloudBridge is to restrict everything to a project-default region (and perhaps get rid of the region service/resource altogether). Does this look acceptable to you @nuwang, @afgane?

CC: @mbookman @baizhang

File integrity check

Maybe it would be useful if we could expose means of assessing the integrity of a file. I can think of the following scenarios:

  1. Check if a file is correctly uploaded to S3. For this, first calculate the file's checksum using a hash function (e.g., MD5, SHA256), then compare it with the S3-generated checksum. If the two hash values match, then the file was uploaded correctly; otherwise, retry uploading the file.

  2. Get a file's checksum without downloading the file (e.g., see this). A good scenario for this would be checking if a local copy matches the S3 version; if not, then a collaborator has changed the file, so update the local copy.

It seems boto exposes the MD5 checksum of an object; however, I guess the challenge would be to (1) ensure if/how other providers expose a checksum without downloading a file/object, and (2) whether all providers use a common hash function for checksums.
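For the first scenario, the client-side half of the check might look like the following sketch. One caveat worth noting: an S3 ETag equals the object's MD5 only for single-part, unencrypted uploads; multipart ETags use a different scheme.

```python
import hashlib

def md5_hexdigest(path, chunk_size=8 * 1024 * 1024):
    """Compute a file's MD5 incrementally, without loading it into memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_remote(path, remote_etag):
    # S3 returns ETags wrapped in double quotes; strip them before comparing.
    return md5_hexdigest(path) == remote_etag.strip('"')
```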

OpenStackFloatingIP in_use method doesn't necessarily return the correct result

When run against the nectar cloud the test_crud_network_service method fails as follows.

======================================================================
FAIL: test_crud_network_service (test.test_network_service.CloudNetworkServiceTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/mpaulo/PycharmProjects/cloudbridge/test/helpers.py", line 72, in wrapper
    func(self, *args, **kwargs)
  File "/Users/mpaulo/PycharmProjects/cloudbridge/test/test_network_service.py", line 85, in test_crud_network_service
    "Newly created floating IP address should not be in use.")
AssertionError: Newly created floating IP address should not be in use.

When digging into the code, it is found that the OpenStackFloatingIP in_use method tests the floating IP status and, if it is found to be 'ACTIVE', returns True. However, on our nectar cloud a newly created floating IP is set to 'ACTIVE' by default, even though it is not in use. A quick trawl through the neutron code seems to show that the status of a newly created floating IP depends on the underlying hardware. Hence, ACTIVE is not a good indicator of whether the IP is in use.

Auto-inherited doc strings

It would be nice to have all provider implementations inherit the base class docstrings automagically, in addition to their own docstrings. Boto3 may be a potential point of reference.

Make timeout and interval configurable through a global setting

Currently, the wait_for method in ObjectLifeCycleMixin uses a default timeout of 600 seconds and a default interval of 5 seconds. These values are repeated in subclasses, which increases the likelihood of things drifting out of sync/becoming inconsistent. In addition, there is no way to change these defaults other than by specifying them repeatedly when using the wait_for method (e.g., see test cases).

Therefore, if we move these as properties into the BaseConfiguration object, we can make these defaults globally configurable.
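A minimal sketch of the proposed move, with a hypothetical GlobalConfig standing in for the BaseConfiguration object and the property names assumed:

```python
class GlobalConfig:
    """Single home for wait defaults (names hypothetical)."""
    default_wait_timeout = 600   # seconds
    default_wait_interval = 5    # seconds

class ObjectLifeCycleMixin:
    config = GlobalConfig()  # shared, so changing it affects all resources

    def wait_for(self, is_done, timeout=None, interval=None):
        # Per-call arguments still win; otherwise fall back to the global
        # defaults instead of repeating 600/5 in every subclass.
        timeout = timeout if timeout is not None else self.config.default_wait_timeout
        interval = interval if interval is not None else self.config.default_wait_interval
        return timeout, interval  # a real implementation would poll here
```

Test cases could then shrink the defaults once, globally, instead of passing short timeouts to every wait_for call.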

Remaining featureset for release 0.1

  • Update package organization
  • Remove lib versions from factory in favor of releases
  • Block device mapping
  • Support for launching instances into VPC/Neutron
  • Add iterators/generators
  • 95%+ test coverage
  • Basic docs: getting started (e.g., provider configs, env variables), tutorials (e.g., launch an instance, add block storage), API docs
  • Implement support for networking objects (Networks & Subnets)
  • Explicitly set all versions on dependencies
  • Optional: Paging and limits
  • Optional: Add more __eq__ methods

GCEInstanceService.find returns a list of instances

InstanceService find method documentation says

        Searches for an instance by a given list of attributes.

        :rtype: ``object`` of :class:`.Instance`
        :return: an Instance object

However AWSInstanceService.find also returns a list, so I'm not sure if maybe this is a bug in the interface. Since instance names should be unique it does make sense to me that find would only return a single instance (or None if not found).

Doesn't upload files larger than 5 Gig to Swift

If I use cloudbridge to upload a file larger than 5 Gig in size to Swift it eventually fails.

I've walked through the code and this is what I see happening:
The CloudBridge OpenStackBucketObject upload_from_file method is called. This in turn eventually ends up calling the OpenStack swiftclient.client.py Connection put_object method, which is a low-level method that does not attempt to break files > 5 Gig in size into smaller segments. Hence, when you hit the 5 Gig Swift wall, it ends in tears.

The OpenStack utility class SwiftService has an upload method that will chunk files if you set the segment_size option. Thus this method will handle files > 5 Gig with aplomb. I've tested this with some rough-and-ready sample code. I think this class should be delegated to, rather than the Connection class, when data is being uploaded.
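The segmenting that SwiftService performs under the hood can be illustrated with a generic sketch; the `upload_segmented` helper and the `put_object` callable are hypothetical stand-ins for the provider call, not swiftclient API:

```python
FIVE_GIG = 5 * 1024 ** 3  # Swift's single-object limit

def upload_segmented(fileobj, put_object, segment_size=1024 ** 3):
    """Split a stream into segments no larger than ``segment_size``.

    ``put_object`` stands in for whatever call stores one segment;
    SwiftService does an equivalent split when its ``segment_size``
    option is set, then writes a manifest tying the segments together.
    """
    assert segment_size <= FIVE_GIG
    index = 0
    while True:
        chunk = fileobj.read(segment_size)
        if not chunk:
            break
        put_object("segment-%05d" % index, chunk)
        index += 1
    return index  # number of segments written
```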

Log everything

When debugging or developing, it's useful to get frequent and detailed logging messages. We should thus have log messages (at the debug level) at every method start and as appropriate elsewhere.

This is a rather mechanical task but will be super-useful in the long run!
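One mechanical way to get there is a small decorator applied to every service method; a sketch, with the logger name assumed:

```python
import functools
import logging

log = logging.getLogger("cloudbridge")  # assumed logger name

def trace(func):
    """Log method entry at debug level; meant for every service method."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.debug("entering %s", func.__qualname__)
        return func(*args, **kwargs)
    return wrapper

@trace
def list_instances():
    # stands in for any CloudBridge service method
    return []
```

Because the messages are at DEBUG level, they stay silent unless a developer explicitly raises the logger's verbosity.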

Cannot run tests

I'm not sure if there is anything wrong with my local setup or if this is a bug: when I run "tox -e py27" on master I get the following error:

ERROR: invocation failed, logfile: /home/chiniforooshan/github/chiniforooshan/cloudbridge/.tox/py27/log/py27-1.log
ERROR: actionid=py27
msg=getenv
cmdargs=[local('/home/chiniforooshan/github/chiniforooshan/cloudbridge/.tox/py27/bin/pip'), 'install', '--pre', '-rrequirements.txt', 'coverage']
env={...}
Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Storing debug log for failure in /home/chiniforooshan/github/chiniforooshan/cloudbridge/.tox/py27/tmp/pseudo-home/.pip/pip.log

ERROR: could not install deps [-rrequirements.txt, coverage]
___________________________________________________________________________________ summary ____________________________________________________________________________________
ERROR: py27: could not install deps [-rrequirements.txt, coverage]

cat .tox/py27/tmp/pseudo-home/.pip/pip.log

Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Exception information:
Traceback (most recent call last):
File "/home/chiniforooshan/github/chiniforooshan/cloudbridge/.tox/py27/local/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/home/chiniforooshan/github/chiniforooshan/cloudbridge/.tox/py27/local/lib/python2.7/site-packages/pip/commands/install.py", line 262, in run
for req in parse_requirements(filename, finder=finder, options=options, session=session):
File "/home/chiniforooshan/github/chiniforooshan/cloudbridge/.tox/py27/local/lib/python2.7/site-packages/pip/req.py", line 1546, in parse_requirements
session=session,
File "/home/chiniforooshan/github/chiniforooshan/cloudbridge/.tox/py27/local/lib/python2.7/site-packages/pip/download.py", line 278, in get_file_content
raise InstallationError('Could not open requirements file: %s' % str(e))
InstallationError: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'

ls

CHANGELOG.rst cloudbridge cloudbridge.egg-info docs LICENSE README.rst requirements.txt setup.py temp test tox.ini

KeyError: 'items' when trying to create GCE keypair

Getting an error when following the CloudBridge tutorial at the part where I try to create a keypair:

>>> from cloudbridge.cloud.factory import CloudProviderFactory, ProviderList
>>> config = {
...     'gce_client_email': '[email protected]',
...     'gce_project_name': 'cloudlaunch-169515',
...     'gce_service_creds_file': './CloudLaunch-d9757cca220c.json',
...     'gce_default_zone': 'us-central1-a',
...     'gce_region_name': 'us-central1',
... }
>>> provider = CloudProviderFactory().create_provider(ProviderList.GCE, config)
[aws provider] moto library not available!
>>> kp = provider.security.key_pairs.create('cloudbridge_intro')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/machrist/SGCI/cloudlaunch/cloudbridge/cloudbridge/cloudbridge/cloud/providers/gce/services.py", line 185, in create
    kp = self.find(name=name)
  File "/Users/machrist/SGCI/cloudlaunch/cloudbridge/cloudbridge/cloudbridge/cloud/providers/gce/services.py", line 178, in find
    for kp in self.list():
  File "/Users/machrist/SGCI/cloudlaunch/cloudbridge/cloudbridge/cloudbridge/cloud/providers/gce/services.py", line 166, in list
    for gce_kp in self._iter_gce_key_pairs():
  File "/Users/machrist/SGCI/cloudlaunch/cloudbridge/cloudbridge/cloudbridge/cloud/providers/gce/services.py", line 85, in _iter_gce_key_pairs
    for kpinfo in self._iter_gce_ssh_keys(metadata):
  File "/Users/machrist/SGCI/cloudlaunch/cloudbridge/cloudbridge/cloudbridge/cloud/providers/gce/services.py", line 117, in _iter_gce_ssh_keys
    sshkeys = self._get_or_add_sshkey_entry(metadata)["value"]
  File "/Users/machrist/SGCI/cloudlaunch/cloudbridge/cloudbridge/cloudbridge/cloud/providers/gce/services.py", line 102, in _get_or_add_sshkey_entry
    entries = [item for item in metadata["items"]
KeyError: 'items'

I can successfully list compute images and instance types so I know my setup works.

@afgane @nuwang @mbookman

Should we rename impl.py to something better?

@afgane The current file organisation has impl.py, resources.py and services.py.
However, our conceptual organisation says that we have 3 types of objects: providers, resources and services. (http://cloudbridge.readthedocs.org/en/latest/concepts.html)

impl.py somehow doesn't seem to quite fit. We could rename it to provider.py, which is not bad, since that is the core provider implementation, and it would make things consistent with the conceptual overview. However, if using provider.py muddies the water with the general concept of a provider as the overarching entity, should it be something like core.py instead?

Wrap provider-specific exceptions into CloudBridge ones

While CloudBridge provides great abstraction for manipulating resources across multiple cloud providers through a single interface and shields the developer from discrepancies, this is not the case with exceptions. CloudBridge has minimal custom exceptions, and when an exception originates on the provider side, it simply re-raises the original exception (e.g., https://github.com/gvlproject/cloudbridge/blob/master/cloudbridge/cloud/providers/aws/resources.py#L592). This forces the application to handle different exceptions when deployed on different providers, which undermines CloudBridge's goal.

We should create a set of CloudBridge-specific exceptions and consistently raise those instead of the provider-specific ones.
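A minimal sketch of the proposed approach, assuming a decorator that translates provider-side errors at the service boundary. `CloudBridgeBaseException` mirrors the kind of base class the library would need; the `dispatch_exception` decorator and `ProviderSideError` stand-in are illustrative only, not existing CloudBridge API:

```python
# Sketch: wrap provider-specific exceptions into CloudBridge ones.
# ProviderSideError stands in for an exception raised by a provider SDK
# (e.g. botocore); the decorator and exception names are hypothetical.

import functools


class CloudBridgeBaseException(Exception):
    """Base class for all CloudBridge exceptions."""


class InvalidNameException(CloudBridgeBaseException):
    """Raised when a resource name fails provider-side validation."""


class ProviderSideError(Exception):
    """Stand-in for a provider SDK exception."""


def dispatch_exception(func):
    """Translate provider-specific exceptions into CloudBridge ones."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except CloudBridgeBaseException:
            raise  # already one of ours; pass through unchanged
        except ProviderSideError as e:
            # Preserve the original as the cause for debugging.
            raise InvalidNameException(str(e)) from e
    return wrapper


@dispatch_exception
def rename_resource(name):
    """Example provider operation that may fail provider-side."""
    if "_" in name:
        raise ProviderSideError("names may not contain underscores")
    return name
```

With this pattern, callers only ever catch `CloudBridgeBaseException` subclasses, regardless of which provider raised the underlying error.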

Add services for Security Group rules and Floating IPs

Managing Floating IPs and Security Groups is currently done via methods within the NetworkService (https://github.com/gvlproject/cloudbridge/blob/master/cloudbridge/cloud/interfaces/services.py#L670) and the Security Group resource (https://github.com/gvlproject/cloudbridge/blob/master/cloudbridge/cloud/interfaces/resources.py#L1720), respectively.

Instead, to keep the structure of the library consistent and provide a more intuitive interface, extract operations for those two resource types into their own dedicated services.
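The extraction could start from abstract interfaces like the sketch below. The service names follow CloudBridge's naming conventions but are hypothetical here, not the library's actual classes:

```python
# Sketch: dedicated services for floating IPs and security group rules,
# pulled out of NetworkService and the SecurityGroup resource.
# Class and method names are illustrative, not existing CloudBridge API.

from abc import ABC, abstractmethod


class FloatingIPService(ABC):
    """Standalone service for managing floating IP addresses."""

    @abstractmethod
    def list(self):
        """Return all floating IPs in the current account/project."""

    @abstractmethod
    def create(self):
        """Allocate a new floating IP."""

    @abstractmethod
    def delete(self, fip_id):
        """Release the floating IP with the given id."""


class SecurityGroupRuleService(ABC):
    """Standalone service for managing security group rules."""

    @abstractmethod
    def list(self, group):
        """Return all rules attached to the given security group."""

    @abstractmethod
    def create(self, group, direction, protocol, from_port, to_port,
               cidr=None):
        """Add a rule to the given security group."""
```

Each provider would then supply a concrete subclass, keeping rule and floating IP operations discoverable in one place rather than scattered across resources.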

No return value for instance.terminate()

The library docs say that when the instance.terminate() method is called, we should get a True/False return value (https://github.com/gvlproject/cloudbridge/blob/master/cloudbridge/cloud/interfaces/resources.py#L514). However, the AWS and OpenStack implementations do not adhere to that interface (OS: https://github.com/gvlproject/cloudbridge/blob/master/cloudbridge/cloud/providers/openstack/resources.py#L297; AWS: https://github.com/gvlproject/cloudbridge/blob/master/cloudbridge/cloud/providers/aws/resources.py#L282)

The issue is that, short of fetching the just-deleted instance, there doesn't seem to be a good way of getting that information. So should we fetch the instance or modify the interface?

v0.4 interface changes RFC

After some discussions and based on using CloudBridge for developing CloudLaunch, we're thinking about a few changes to the interface with the intent of providing a more consistent experience and better logical grouping of services.

We've already made changes to the networking side of the library, where multiple services got pushed under provider.networking instead of mixing NetworkService and SubnetService under provider.network: 98eebf2. In the process, we also added RouterService.

We are also in favor of the following changes:

  • Rename provider.compute.instance_types → provider.compute.vm_types and, consequently, the InstanceType resource to VMType
  • Rename the method Instance.terminate → Instance.delete
  • Rename provider.security.security_groups → provider.security.vm_firewall
  • Plan for eventual implementation of provider.security.net_firewall, representing network ACLs
  • Join provider.block_store and provider.object_store into provider.storage. In addition, rename notion of object_store into buckets so provider.storage would have the following three modules: volumes, snapshots, and buckets
  • Make firewall rules standalone (the resultant firewall rules class may potentially be reusable in net_firewalls when they eventually arrive. net_firewalls can't be done at the moment because FwaaS is not widely deployed on OpenStack; by widely, we mean not available on NeCTAR, JetStream, or devstack out of the box)
  • Make floating IPs standalone (also see #83)

(ping @baizhang @chiniforooshan @VJalili)

Add Metadata service

Add implementation for the cloud metadata service across all the providers.

For AWS, the list of available metadata is here. For OpenStack, see these docs. For GCE, the docs are here. The CloudBridge interface should be the intersection of the three.

unit tests failing

Hi Guys,

I cloned the repo on my Windows workstation, ran tox (Python 2.7 only), and saw that 42 out of the 85 tests failed. Is there any extra setup I have to do to get all the tests to pass?

Attached is the testresults file.
testresults.txt

Thanks

Version Conflict when trying to run tox tests.

If I clone the repository, set up the needed environment, and then run tox, it fails under both py27 and py36 (I haven't tried pypy) with the following error message:

running test
Searching for requests!=2.12.2,!=2.13.0,>=2.10.0
# removed lines
Best match: requests 2.12.5
# removed lines
pkg_resources.VersionConflict: (Babel 2.4.0 (/home/travis/build/gvlproject/cloudbridge/.tox/py27-aws/lib/python2.7/site-packages), Requirement.parse('Babel!=2.4.0,>=2.3.4'))

Working through the dependencies in setup.py, it would appear that the version of requests that best fits them is indeed 2.12.5, which is why it is downloaded and installed. However, that version of requests in turn explicitly requires Babel==2.3.4, not the Babel==2.4.0 that has already been installed. Instant end of run :(

Add wait_till_delete and other similar convenience methods

At present, there are no convenience methods for waiting on resource state changes such as deletion. As a result, we have issues like networks being left over because their dependencies aren't fully deleted first. The work should probably be broken down into:

  • Identify list of wait tasks needed (use boto3 as a ref)
  • Implement those common wait tasks (potentially redesigning our current waiting logic)
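The common wait tasks above could share a single polling primitive, in the spirit of boto3's waiters. A hedged sketch, assuming a generic predicate-based helper; the function names are illustrative, not existing CloudBridge API:

```python
# Sketch: generic wait helper that the provider-specific wait tasks
# (wait_till_deleted, wait_till_running, ...) could be built on.
# Names are hypothetical, not current CloudBridge API.

import time


class WaitStateException(Exception):
    """Raised when a resource fails to reach the expected state in time."""


def wait_for(predicate, timeout=60, interval=2):
    """Poll predicate() until it returns True or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise WaitStateException(
        "condition not met within %s seconds" % timeout)


def wait_till_deleted(get_resource, timeout=60, interval=2):
    """Block until get_resource() returns None (i.e. the resource is gone)."""
    return wait_for(lambda: get_resource() is None, timeout, interval)
```

Fixed-interval polling keeps the sketch simple; a redesign of the waiting logic might add exponential backoff or per-resource timeouts.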
