Blog

Using the container abstraction API in 1.0.0-pre1

Background

Containers are the talk of the town; you can't escape an event or meetup without someone talking about them. The lessons we learnt with compute abstraction apply just as widely to containers in 2016: APIs are not consistent between clouds, designs are not standardised, and yet users are trying to consume multiple services.

We introduced Container-as-a-Service support in 1.0.0-pre1, a community pre-release intended to spark feedback from the open-source community about the design and implementation of 4 example drivers:

  • Docker
  • Joyent Triton
  • Amazon EC2 Container Service
  • Google Kubernetes

In this tutorial we're going to explore how to deploy containers across platforms: pulling images from the Docker Hub, deploying them to Docker, Kubernetes and Amazon ECS, then auditing them all with a single query.

Getting Started with 1.0.0-pre1

First off, let's install the new packages. You probably want to do this within a virtualenv if you're using Apache Libcloud for other projects.

Run these commands at a Linux shell to create a virtualenv called 'containers' and install the pre-release packages into that environment:

   virtualenv containers
   cd containers
   source bin/activate
   pip install apache-libcloud==1.0.0-pre1
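
To confirm the pre-release is installed, you can print the package version (an optional sanity check):

   python -c "import libcloud; print(libcloud.__version__)"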

Now you can start using this package with a test script. Let's create one called containers.py:

   touch containers.py

Using your favourite text editor, update that file to import the 1.0.0-pre1 libraries and the factory method for instantiating container drivers.

   from libcloud.container.providers import get_driver
   from libcloud.container.types import Provider

get_driver is a factory method, as with all Libcloud APIs; you call it with the Provider that you want to instantiate. Our options are:

  • Provider.DOCKER - Standalone Docker API
  • Provider.KUBERNETES - Kubernetes Cluster endpoint
  • Provider.JOYENT - Joyent Triton Public API
  • Provider.ECS - Amazon EC2 Container Service

Calling get_driver will return a reference to the driver class that you requested. You can then instantiate that class into an object using the constructor, passing a set of parameters for the host or region, the authentication credentials and any other options.

   driver = get_driver(Provider.DOCKER)

Now we can instantiate that driver class into an object called docker_driver and use it to deploy a container. For Docker you need the key and certificate PEM files, the host (IP or FQDN) and the port.

   docker_driver = driver(host='https://198.61.239.128', port=4243,
             key_file='key.pem', cert_file='cert.pem')

Docker requires that images are available in its local image store before they can be deployed as containers. With Kubernetes and Amazon ECS this step is not required, since the image is pulled automatically when you deploy a container.

   image = docker_driver.install_image('tomcat:8.0')

Now that Docker has the version 8.0 image of Apache Tomcat, you can deploy it as a container called my_tomcat_container. Tomcat listens on TCP/8080 by default, so we want to bind that port for our container using the optional parameter port_bindings:

   bindings = { "8080/tcp": [{ "HostPort": "8080" }] }
   container = docker_driver.deploy_container('my_tomcat_container', image,
                                              port_bindings=bindings)

This will deploy the container and start it up for you; you can disable the automatic startup by passing start=False as a keyword argument. You can now call methods on the container object: start, stop, restart and destroy.
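
For instance, to restart the running container using the container object we just created:

   container.restart()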

Or, to blow away that test container:

   container.destroy()

Crossing the streams: calling Kubernetes and Amazon EC2 Container Service

With Docker we saw that we needed to "pull" the image before deploying it. Kubernetes and Amazon ECS don't have that requirement, but as a safeguard you can query the Docker Hub API using a provided utility class:

   from libcloud.container.utils.docker import HubClient
   hub = HubClient()
   image = hub.get_image('tomcat', '8.0')

Now image can be used to deploy to any driver instance that you create. Let's try that against Kubernetes and ECS.

Amazon ECS

Before you run this example, you will need an API key whose permissions include the AmazonEC2ContainerServiceFullAccess policy. ap-southeast-2 is my nearest region, but you can swap it out for any of the Amazon public regions where the ECS service is available.

   e_cls = get_driver(Provider.ECS)
   ecs = e_cls(access_id='SDHFISJDIFJSIDFJ',
               secret='THIS_IS)+_MY_SECRET_KEY+I6TVkv68o4H',
               region='ap-southeast-2')

ECS and Kubernetes both support some form of grouping or clustering for your containers. This is available through create_cluster, list_clusters and destroy_cluster.

   cluster = ecs.create_cluster('default')
   container = ecs.deploy_container(
            cluster=cluster,
            name='hello-world',
            image=image,
            start=False,
            ex_container_port=8080, ex_host_port=8080)

This will have deployed a task definition in Amazon ECS with a single container inside, created a cluster called 'default' and deployed the tomcat:8.0 image from the Docker Hub to that region.
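
Because we passed start=False, the container is registered but not running. A minimal sketch to start it later, using the base container API:

   container.start()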

Check out the ECS Documentation for more details.

Kubernetes

Kubernetes authentication is currently only implemented for None (off) and Basic HTTP authentication. Let's use the basic HTTP authentication method to connect.

   k_cls = get_driver(Provider.KUBERNETES)

   kubernetes = k_cls(key='my_username',
                      secret='THIS_IS)+_MY_SECRET_KEY+I6TVkv68o4H',
                      host='126.32.21.4')

   cluster2 = kubernetes.create_cluster('default')
   container2 = kubernetes.deploy_container(
               cluster=cluster2,
               name='hello-world',
               image=image,
               start=False)

Wrapping it up

Now, let's wrap that all up with a list comprehension across the 3 drivers to get a list of all containers, print their IDs and names, then delete them.

   containers = [container
                 for conn in [docker_driver, ecs, kubernetes]
                 for container in conn.list_containers()]
   for container in containers:
       print("%s : %s" % (container.id, container.name))
       container.destroy()

About the Author

Anthony Shaw is on the PMC for Apache Libcloud, you can follow Anthony on Twitter at @anthonypjshaw.

Libcloud 1.0.0-pre1 released

We are pleased to announce the release of Libcloud 1.0.0-pre1.

This is the first pre-release in the 1.0.0 series, which means it brings many new features, improvements, bug fixes and new DNS drivers.

Release highlights

A full blog post on the new features in 1.0.0 can be found here.

A full change log can be found here.

Download

The release can be downloaded from https://libcloud.apache.org/downloads.html or installed using pip:

   pip install apache-libcloud==1.0.0-pre1

Upgrading

If you have installed Libcloud using pip, you can also use pip to upgrade it:

   pip install --upgrade apache-libcloud==1.0.0-pre1

Upgrade notes

A page which describes backward incompatible or semi-incompatible changes and how to preserve the old behavior when this is possible can be found at https://libcloud.readthedocs.org/en/latest/upgrade_notes.html

Documentation

Regular and API documentation is available at https://libcloud.readthedocs.org/en/latest/

Bugs / Issues

If you find any bug or issue, please report it on our issue tracker https://issues.apache.org/jira/browse/LIBCLOUD. Don't forget to attach an example and / or test which reproduces your problem.

Thanks

Thanks to everyone who contributed and made this release possible! Full list of people who contributed to this release can be found in the CHANGES file.

Libcloud 1.0-pre1 open for feedback

We are pleased to announce that the version 1.0-pre1 vote thread is open and the release is ready for community feedback.

1.0-pre1 marks the first pre-release of the 1.0 major release. Some years ago, Tomaz Muraus spoke on the FLOSS Weekly podcast about how much of a huge challenge porting the project to Python 3.x would be(!), as well as about the 1.0 milestone.

It is worth listening to the podcast to see how far things have come: we now average 2 pull requests a day and have 156 contributors.

As the project has matured over the last 5 years, one of the most remarkable changes has been the adoption by the community and the continued support from our contributors, who add new drivers, patch strange API issues and keep the project alive.

Anthony Shaw will be speaking on the FLOSS Weekly podcast on February 2nd, discussing our community and the project, so please tune in.

The cloud market, as I'm sure you're all aware, is thriving. The purpose of Libcloud was originally:

  • To help prevent lock-in to a particular vendor
  • To abstract the complexity of vendor APIs
  • To give a simple way for deploying to and managing multiple cloud vendors

Since then we have had (at the last count) 2,118,539 downloads. The project continues to grow in popularity with each new release.

So with the 1.0 major release we would like to announce 2 new driver types: container and backup.

History of our drivers

The compute (IaaS) API is what Libcloud is best known for, but there is a range of drivers available for many other capabilities.

There is a presentation on the value of using Libcloud to avoid lock-in on SlideShare.

This is a history of the different driver types in the libcloud project.

  • Compute (v0.1.0)
    • Support for nodes, node images, locations, states
    • 52 providers including every major cloud provider in the market, plus local services like VMware, OpenStack and libvirt
  • DNS (v0.6.0)
    • Support for zones, records and record types
    • 19 providers including CloudFlare, DigitalOcean, DNSimple, GoDaddy, Google DNS, Linode, Rackspace, Amazon R53, Zerigo
  • Object Storage (v0.5.0)
    • Support for containers and objects
    • 11 providers including Amazon S3, Azure Blobs, Google Storage, CloudFiles, OpenStack Swift
  • Load Balancer (v0.5.0)
    • Support for nodes, balancers, listeners and algorithms
    • 11 providers including CloudStack, Dimension Data, Amazon ELB, Google GCE LB, SoftLayer LB
  • Backup (v0.20.0)
    • Support for backup targets, recovery points and jobs
    • 3 providers: Dimension Data, Amazon EBS snapshots, Google Compute Engine snapshots

Introducing Backup Drivers

With 1.0-pre1 we have introduced a new driver type for backup: libcloud.backup.

The Backup API allows you to manage Backup-as-a-Service offerings such as EBS snapshots, GCE volume snapshots and Dimension Data backup; a short sketch of the API follows the terminology below.

Terminology

  • libcloud.backup.base.BackupTarget - Represents a backup target, like a Virtual Machine, a folder or a database.
  • libcloud.backup.base.BackupTargetRecoveryPoint - Represents a copy of the data in the target; a recovery point can be recovered to a backup target. An in-place restore is where you recover to the same target and an out-of-place restore is where you recover to another target.
  • libcloud.backup.base.BackupTargetJob - Represents a backup job running on a backup target.
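
Below is a minimal sketch of how these pieces fit together, assuming a Dimension Data account with hypothetical credentials and region; the calls come from the libcloud.backup.base.BackupDriver API:

   from libcloud.backup.providers import get_driver
   from libcloud.backup.types import Provider

   # Hypothetical credentials and region
   cls = get_driver(Provider.DIMENSIONDATA)
   backup = cls('my_username', 'my_password', region='dd-au')

   # Enumerate existing backup targets
   targets = backup.list_targets()
   for target in targets:
       print(target.name)

   # Kick off a backup job on the first target, then restore it
   # in-place from one of its recovery points
   job = backup.create_target_job(targets[0])
   points = backup.list_recovery_points(targets[0])
   backup.recover_target(targets[0], points[0])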

Introducing Container-as-a-Service Drivers

The API is for Container-as-a-Service providers: new types of cloud services that offer container management and hosting as a service. These services already provide proprietary APIs, creating the need for a tool like Libcloud if you want to provision to any cloud provider.

Google, Amazon and Joyent have all announced container cloud services, and Microsoft has also launched a beta service, so we are getting on the front foot with an abstraction API for people wishing to gain benefits similar to those of the compute, load balancer and storage APIs.

A presentation on this topic is available on SlideShare.

Isn't Docker a standard? Well, yes and no.

Docker has been the main technology adopted by these providers, both as the host system for the containers and as the specification of the containers themselves. But Docker is not a provisioning system; it is a virtualization host. There are also alternatives, like CoreOS rkt.

Container API design

Container-as-a-Service providers will implement the ContainerDriver class to provide functionality for:

  • Listing deployed containers
  • Starting, stopping and restarting containers (where supported)
  • Destroying containers
  • Creating/deploying containers
  • Listing container images
  • Installing container images (pulling an image from a local copy or remote repository)

Simple Container Support

  • libcloud.container.base.ContainerImage - Represents an image that can be deployed, like an application or an operating system
  • libcloud.container.base.Container - Represents a deployed container image running on a container host

Cluster Support

Cluster support extends the basic driver functions: where a driver sets the class-level attribute supports_clusters to True, clusters may be listed, created and destroyed, and the target cluster can be specified when containers are deployed. A sketch follows the list below.

  • libcloud.container.base.ContainerCluster - Represents a group of containers, such as a cluster of container hosts
  • libcloud.container.base.ClusterLocation - Represents a location for clusters to be deployed
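
As an illustration, here is a minimal sketch of the cluster calls, assuming driver is an instance of a cluster-capable driver (such as ECS) and image is a ContainerImage:

   # Only drivers that set supports_clusters = True implement these calls
   if driver.supports_clusters:
       cluster = driver.create_cluster('web-tier')   # hypothetical cluster name
       for c in driver.list_clusters():
           print(c.name)
       # Deploy into the new cluster, then clean it up
       container = driver.deploy_container('hello-world', image,
                                           cluster=cluster)
       driver.destroy_cluster(cluster)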

Using the container drivers

The container drivers have been designed around principles similar to the compute driver: they are simple to use and have a flat class design.

   from libcloud.container.providers import get_driver
   from libcloud.container.types import Provider

   Cls = get_driver(Provider.DOCKER)
   driver = Cls('user', 'api key')

   image = driver.install_image('tomcat:8.0')
   container = driver.deploy_container('tomcat', image)

   container.restart()

Container Registries

The Docker Registry API is used by services like Amazon ECR, the Docker Hub website and anyone hosting their own Docker registry. It doesn't belong to a particular driver, so it is provided as a utility class. Some providers, like Amazon ECR, have a factory method to provide a registry client. Images from a Docker registry can be passed to the deploy_container method of any driver.

   from libcloud.container.utils.docker import HubClient
   hub = HubClient()
   image = hub.get_image('ubuntu', 'latest')
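
The returned image can then be handed straight to any driver; for example, reusing the driver instance from the earlier snippet with a hypothetical container name:

   container = driver.deploy_container('my-ubuntu', image)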

When other container registry services become available, they can be supported in a similar way.

Prototype drivers in libcloud.container

Drivers have been provided to show example implementations of the API. These drivers are experimental and need to go through more thorough community testing before they are ready for a stable release.

The driver with the most contentious implementation is Kubernetes. We would like users of Amazon ECS, Google Containers and the Kubernetes project to provide feedback on how they would like to map clusters, pods and namespaces to the low-level concepts in the driver.

Providing feedback

The voting thread is open; please use this as your opportunity to give feedback.

Thanks

Thanks to everyone who contributed and made this release possible! Full list of people who contributed to this release can be found in the CHANGES file.

Libcloud 0.20.1 released

We are pleased to announce the release of Libcloud 0.20.1.

This is a bug-fix release in the 0.20 series.

Release highlights

  • Allow for old and new style service accounts for GCE driver
  • Fix syntax error with DimensionDataStatus object
  • Fix bug in public IP addition command for DimensionData driver
  • Fix an error with proxy_url in the vCloud compute driver.
  • Fix a hasattr issue in the Rackspace DNS driver.

A full change log can be found here.

Download

The release can be downloaded from https://libcloud.apache.org/downloads.html or installed using pip:

   pip install apache-libcloud==0.20.1

Upgrading

If you have installed Libcloud using pip, you can also use pip to upgrade it:

   pip install --upgrade apache-libcloud==0.20.1

Upgrade notes

A page which describes backward incompatible or semi-incompatible changes and how to preserve the old behavior when this is possible can be found at https://libcloud.readthedocs.org/en/latest/upgrade_notes.html

Documentation

Regular and API documentation is available at https://libcloud.readthedocs.org/en/latest/

Bugs / Issues

If you find any bug or issue, please report it on our issue tracker https://issues.apache.org/jira/browse/LIBCLOUD. Don't forget to attach an example and / or test which reproduces your problem.

Thanks

Thanks to everyone who contributed and made this release possible! Full list of people who contributed to this release can be found in the CHANGES file.

Notice for Linode users

This is an announcement for users of the Linode driver for Libcloud who might have started experiencing issues recently.

Background

A couple of Libcloud users have reported that they have recently started experiencing issues when talking to the Linode API using Libcloud. They have received messages similar to the one shown below.

   socket.error: [Errno 104] Connection reset by peer

It turns out that the issue is related to the SSL / TLS version being used. For compatibility and security reasons (Libcloud also supports older Python versions), Libcloud uses TLS v1.0 by default.

Linode recently dropped support for TLS v1.0 and now only supports TLS >= v1.1. This means Libcloud won't work out of the box anymore.

Solution

If you are experiencing this issue, you should update your code to use TLS v1.2 or TLS v1.1 as shown below.

   import ssl

   import libcloud.security
   libcloud.security.SSL_VERSION = ssl.PROTOCOL_TLSv1_1
   # or even better, if your system and Python version support TLS v1.2
   libcloud.security.SSL_VERSION = ssl.PROTOCOL_TLSv1_2

   # Instantiate and work with the Linode driver here...

Keep in mind that for this to work you need to have a recent version of OpenSSL installed on your system, and you need to use Python >= 2.7.9 or Python >= 3.4.

For more details, please see the recently updated documentation. If you are still experiencing issues or have any questions, please feel free to reach us via the mailing list or IRC.

Note: Even if you are not experiencing any issues, it's generally a good idea to use the highest version of TLS supported by your system and the provider you use.

Quick note on ssl.PROTOCOL_SSLv23

Python uses the ssl.PROTOCOL_SSLv23 constant by default. When this constant is used, the client will pick the highest protocol version that both the client and the server support (selecting between SSL v3.0, TLS v1.0, TLS v1.1 and TLS v1.2).

We use ssl.PROTOCOL_TLSv1 instead of ssl.PROTOCOL_SSLv23 for security and compatibility reasons. SSL v3.0 is considered broken and unsafe, and using ssl.PROTOCOL_SSLv23 can result in an increased risk of a downgrade attack.

Thanks

Special thanks to Jacob Riley, Steve V, Heath Naylor and everyone from LIBCLOUD-791 who helped debug and track down the root cause of this issue.