CloudFoundry

All Things Pivotal Podcast Episode #15: What is Pivotal Web Services?

10 Feb, 2015

You have a great idea for an app—and need a quick and easy place to deploy it.

You are interested in Platform as a Service—but you want a low risk and free way to try it out.

You are constrained on dev/test capacity and need somewhere your developers can work effectively and quickly—NOW.

Whilst you might be familiar with Pivotal CF as a Platform as a Service that you can deploy on-premises or in the cloud provider of your choice—you may not know that Pivotal CF is also available in a hosted form as Pivotal Web Services.

In this episode we take a closer look at Pivotal Web Services—what it is used for, and how you can take advantage of it.


Transcript

Speaker 1:

Welcome to the All Things Pivotal podcast, the podcast at the intersection of agile, cloud, and big data. Stay tuned for regular updates, technical deep dives, architecture discussions, and interviews. Please share your feedback with us by e-mailing podcast@pivotal.io.

Simon Elisha:

Hello everyone and welcome back to the All Things Pivotal podcast. Fantastic to have you back. My name is Simon Elisha. Good to have you with me again today. A quick and punchy podcast this week, but hopefully an interesting one nonetheless, answering a question that comes up quite commonly. Well, it’s really 2 questions. The first question is, how can I try out Pivotal Cloud Foundry really, really quickly without any set-up time? Which often leads to my answer being, ‘Well, have you heard of Pivotal Web Services?’ To which people say, ‘What is Pivotal Web Services?’ Also known as PWS. Also known as P-Dubs.

Pivotal Web Services is a service available on the Web, funnily enough, at run.pivotal.io. It is a hosted version of Pivotal Cloud Foundry running on Amazon Web Services in the US. It provides a platform that you, as a developer, can push applications to, organize your workspaces on, and really use as a development platform or even as a production location for your applications. It is a fully-featured running version of Pivotal CF in the Cloud. Not surprisingly, that’s what Pivotal CF can do, but this provides a hosted version for you.

Let’s unpack this a little bit and have a look at what it is and why you might want to use it. The first thing that’s good to know is you can connect to run.pivotal.io straight away. You don’t need a credit card to start and you get a 60-day free trial. I’ll talk about what you get in that 60-day free trial shortly, but the good thing to know is you can go and try it straight away. Often when I’m talking to customers and they’re dipping their toe in the water with Platform as a Service and they’re trying to understand what it is, they say, ‘Oh, where can I just try and push an app or test something out?’ I say, ‘Hey, go to Pivotal Web Services, it’s free, you can try it out, you can grab an application you’ve got on the shelf and just see what it’s like.’ They go, ‘Well, that’s pretty cool, I can do it straight away.’ No friction in that happening.

In terms of what you can use on the platform, we currently support apps written in Java, Grails, Play, Spring, Node.js, Ruby on Rails, Sinatra, Go, Python, or PHP. Any of those will automatically be discovered and, hey presto, cf push and away we go. If you’ve been listening to previous episodes you’ll know the magic of the cf push process. If, however, you need another language, you can use a Community Buildpack or you can even write a custom one yourself that will run on the platform as well. Obviously, if you’re running an application you may want to consume some services. You can choose from a variety of third-party databases, e-mail services, and monitoring services that exist in the Marketplace on Pivotal Cloud Foundry. I’ll run you through what some of those services are because there really is a nice selection available for you.
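To make that concrete, here is a minimal sketch of what a Python app pushed with cf push could look like. The only platform-specific detail is reading the PORT environment variable that Cloud Foundry assigns to each app instance; the file name and greeting are purely illustrative.

```python
# app.py - a minimal WSGI app suitable for pushing to the platform (illustrative sketch)
import os
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Respond to every request with a short greeting.
    body = b"Hello from Pivotal Web Services!\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # Cloud Foundry tells each app instance which port to listen on via
    # the PORT environment variable; default to 8080 when running locally.
    port = int(os.environ.get("PORT", "8080"))
    make_server("0.0.0.0", port, application).serve_forever()
```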

What you then do is bind those services to your application, and Pivotal Cloud Foundry, and P-Dubs in particular, takes care of all the connection criteria, the binding process, the credentials, etc., which makes it nice and easy. You may be saying, ‘Well, hmm, what does this cost me to get access to this kind of platform?’ Well, it’s a really simple, simple model. It’s application centric and you pay 3 cents US per gig per hour. That per-hour cost is for the amount of memory used by the application to run. Now, with that 3 cents you get your routing included, so your traffic routing and your load balancing, and you get up to 1 gig of ephemeral disk space on your app instances. You get free storage for your application files when they get pushed to the platform. You don’t pay for that storage cost at all. You get bandwidth both in and out, up to 2 terabytes of bandwidth. You get unified log streaming, which we’ll talk about, and health management, which we’ll also talk about.
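For a sense of what binding looks like from inside the app: the platform injects the credentials of bound services into the environment as a JSON document named VCAP_SERVICES. Below is a hedged sketch of reading those credentials; the "rediscloud" label and the field names are assumptions and will vary by service provider.

```python
# read_services.py - read bound-service credentials from VCAP_SERVICES (illustrative sketch)
import json
import os

def bound_credentials(service_label):
    """Return the credentials dict of the first bound instance of a service,
    or None if nothing is bound under that label."""
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    instances = vcap.get(service_label, [])
    return instances[0].get("credentials") if instances else None

# Hypothetical example: a bound Redis Cloud instance (label and fields vary by provider).
creds = bound_credentials("rediscloud")
if creds:
    print("Redis endpoint:", creds.get("hostname"), creds.get("port"))
```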

As you can imagine, this could be a very cost-effective platform for dev, test, and production workloads because you’re only paying for what you use when you use it, and you’re only paying at the application layer on a per-memory basis. Now, there’s a really handy pricing tab on the Pivotal Web Services page that lets you put in how many app instances you’d need for your application and will punch out that cost for you on a per-month basis for the hosting, which is really, really nice.
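As a rough worked example of that pricing model (assuming the advertised US$0.03 per GB of application memory per hour and a 30-day month):

```python
# Back-of-the-envelope estimate of the memory-based pricing (illustrative only).
RATE_PER_GB_HOUR = 0.03      # advertised rate: US$0.03 per GB of app memory per hour
HOURS_PER_MONTH = 24 * 30    # assume a 30-day month

def monthly_cost(gb_per_instance, instances):
    """Estimated monthly cost for a number of app instances of a given memory size."""
    return gb_per_instance * instances * RATE_PER_GB_HOUR * HOURS_PER_MONTH

# Three 512 MB instances: 0.5 GB x 3 x 0.03 x 720 hours = US$32.40 per month.
print(f"${monthly_cost(0.5, 3):.2f} per month")
```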

What are some of the things that we allow you to do with this platform? What are some of the benefits? As I mentioned, you get the 60-day free trial, and in that trial you get 2 gig of application memory, so you can run applications that consume up to 2 gig of aggregate memory. You can have up to 10 application services from the free tier of the Marketplace. This means you get to play with quite a lot of capability at very low cost, very, very easily.

Aside from pushing your app, which is, yeah, nice and easy and something you want to do, what else do we do with this? Well, we can elect to have performance monitoring. In the developer console, which you can log into, you can see all your spaces, your applications and their status, how many services are bound to them, etc. You can drill into them in more detail to see what they’re actually consuming. If you want even more detailed monitoring, so inside-the-application type monitoring, you can use New Relic for that, and that’s a service that’s offered in the Marketplace. It has a zero-touch configuration. For Java applications, you can basically create and bind your New Relic service to your app very, very simply with basically no configuration. It’s amazing. For other languages like Ruby or JavaScript, you have to have the New Relic agent running, but it’s still a pretty trivial process to get it up and going.

Now, once your application is running, you probably want to make sure it keeps running. A normal desire to have. We have this thing called the Health Manager. This is an automated system that monitors your application for you, and if your application instances exit due to an error, or something happens where the number of instances is less than the number you actually specified when you did your cf push or cf scale, the platform will automatically recover those particular instances for you. Obviously, the log will be updated to indicate that that took place. If you set up an application and you have 3 instances running, it will run them for you. If one of them fails, it will spin up another one for you and you’re good to go.

Another capability is, of course, the Unified Log Streaming. One of the features of Pivotal CF is the ability to bring logs together from multiple application instances into the one place. In PWS, we do the same thing. We have this streaming log API that will send all the information from all the components of your application to the one location. You can tail this interactively yourself, or you can use a syslog drain to a third-party tool you may like, tools like Splunk or Logstash, etc. The logs are all scoped by a unique application ID and an instance index, so you can correlate across multiple events and see how they all fit together, which is nice.

The system also has a really nice Web console, which is built for really agile developers to use. You jump in, you can see what applications are running, where, who started them, what’s going on. You can even connect your spaces with your CI pipeline to make sure that builds are going into the correct lifecycle stage and being deployed appropriately as well. You can also see quotas and billing across your spaces because you have access to organizations and spaces as well. We’ll talk about organizations and spaces in another episode.

What about from a services perspective? What are some of the services that we have available in the Marketplace? Well, it’s growing all the time. It’s a movable feast, as I like to say. We have a number; I’ll just call out a few highlight ones. Things like Searchify for search, BlazeMeter for load testing, and Redis Cloud, which is an enterprise-class cache. We talked about caches a little while ago. ClearDB, which is a MySQL database service. We have Searchly for Elasticsearch. We have Memcached Cloud. We have SendGrid for sending e-mail, MongoLab for MongoDB as a service, New Relic obviously for access to performance metrics, RabbitMQ through CloudAMQP, ElephantSQL for PostgreSQL as a service, etc., etc. A good selection of services is available for you to use.

It’s interesting seeing what people use this for. Often, customers use this for a dev and test experience or to get their developers up to speed with using Platform as a Service. A company called Synapse, which is a small, or young I should say, Boston-based company that builds software-as-a-service web and mobile apps for consumer startups, decided to use Pivotal Web Services for their platform because they wanted to have the same developer experience through dev, test, and production, and it completely suited their needs. It gave them the flexibility in terms of how they built the application, it gave them the sizing requirements they needed, etc. The other nice thing that they got out of it was the ability to deploy their particular application both in the public Cloud or in private Clouds that customers wanted to run. What they realized is that if they had customers who said, ‘Hey, we really like your particular application, we like your service, but we want to run it in-house for whatever reason,’ they had a very simple and easy way to say, ‘Hey, you just run Pivotal CF internally, we bring our code across, and it will work fine.’ A really interesting example there.

If you’ve ever wanted to have a play with Pivotal CF, wondered how it looks, and what the experience is like from a developer perspective, then Pivotal Web Services, or PWS, is the place to go. That’s run.pivotal.io. There’s a 60-day free trial. You don’t have to enter your credit card when you sign up for the free trial. You can have a bit of an experiment and see how you go. Hopefully you’ll be able to make something pretty cool, and until then, talk to you later, and keep on building.

Speaker 1:

Thanks for listening to the All Things Pivotal podcast. If you enjoyed it, please share it with others. We love hearing your feedback, so please send any comments or suggestions to podcast@pivotal.io.


CloudFoundry

Why Services are Essential to Your Platform as a Service

27 Jan, 2015

For most organizations, there is a constant battle between the need to rapidly develop and deploy software while effectively managing the environment and deployment process. As a developer, you struggle with the ability to move new applications to production, and regular provisioning of support services can take weeks, if not months. IT operations, on the other hand, is balancing the backlog of new services requests with the need to keep all the existing (growing) services up and patched. Each side is challenged with meeting the needs of an ever-changing business.

What are Services?

Service is defined as “an act of helpful activity; help, aid.” A Service should make your life easier. Pivotal believes that Platform as a Service (PaaS) should make administrators’ and developers’ lives easier, not harder. Services available through the Pivotal Cloud Foundry platform allow resources to be easily provisioned on-demand. These services are typically middleware, frameworks, and other “components” used by developers when creating their applications.

Services extend a PaaS to become a flexible platform for all types of applications. Services can be as unique as an organization or an application requires. They can bind applications to databases or allow the integration of continuous delivery tools into a platform. Services, especially user-provided services, can also wrap other applications, like a company’s ERP back-end or a package tracking API. The accessibility of Services through a single platform ensures developers and IT operators can truly be agile.

Extensibility Through a Services-Enabled Platform

The availability of Services within the platform is one of the most powerful and extensible features of Pivotal Cloud Foundry. A broad ecosystem of software can run and be managed from within the Pivotal Cloud Foundry platform, and this ensures that enterprises get the functionality they need.

Robust functionality from a single source reduces the time spent on configuration and monitoring. It also has the added benefit of improving scalability and time-to-production. Services allow administrators to provide pre-defined database and middleware services, and this gives developers the ability to rapidly deploy a software product from a menu of options without the typical slow and manual provisioning process. This is also done in a consistent and supportable way.

Managed Services Ensure Simple Provisioning

One of the features that sets Pivotal Cloud Foundry apart from other platforms is the extent of the integration of Managed Services. These Services are managed and operated ‘as a Service,’ and this means they are automatically configured upon request. The provisioning process also incorporates full lifecycle management support, like software updates and patching.

Automation removes the overhead from developers, who are often saddled with service configuration responsibility. It makes administrators’ lives easier and addresses security risks by standardizing how services are configured and used—no more one-off weirdness in configuration. The result is true self-provisioning.

A few of the Pivotal Services, like RabbitMQ, are provided in a highly available capacity. This means that when the Service is provisioned it is automatically clustered to provide high availability. This relieves much of the administrative overhead of deploying and managing database and middleware Services, as well as the significant effort of correctly configuring a cluster.

Broad Accessibility With User-Provided Services

In addition to the integrated and Managed Services, Pivotal Cloud Foundry supports a broad range of User-Provided Services. User-Provided Services are services that are currently not available through the Services Marketplace, meaning the Services are managed externally from the platform, but are still accessible by the applications.

The User-Provided Services are completely accessible by the application, but are created outside of the platform. This Service extension enables database Services, like Oracle and DB2 Mainframe, to be easily bound to an application, guaranteeing access to all the Services needed by applications.

Flexible Integration Model

Access to all services, both managed and user-provided, is handled via the Service Broker API within the Pivotal Cloud Foundry platform. This module provides a flexible, RESTful API and allows service authors (those that create the services) to provide self-provisioning services to developers.
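As an illustration only, and not Pivotal's implementation, a broker is essentially a web app answering a handful of REST endpoints. The sketch below shows a hypothetical catalog endpoint, the first thing the platform calls to learn what a broker offers; the framework choice (Flask) and all IDs and names are made up for the example.

```python
# broker.py - skeletal service broker exposing only a catalog endpoint (illustrative sketch)
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v2/catalog")
def catalog():
    # The platform calls this endpoint to discover what the broker can provision.
    return jsonify({
        "services": [{
            "id": "example-service-id",            # hypothetical identifier
            "name": "example-db",
            "description": "Illustrative database service",
            "bindable": True,
            "plans": [{
                "id": "example-plan-id",
                "name": "small",
                "description": "Single small instance"
            }]
        }]
    })

if __name__ == "__main__":
    app.run(port=8080)
```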

The Service Broker is not opinionated. It can be crafted to suit the unique needs of the environment and organization. The Service Broker functionality ensures the extensibility of the platform and also allows administrators to create a framework that developers can operate within, supporting agile deployments. This framework provides consistency and reproducibility within the platform. It also has the added benefit of limiting code changes required by applications as they move through the development lifecycle.

As an example of the customization capabilities, a customer created a Service Broker that not only adjusts the network topology when an application is deployed to an environment, it also adjusts the security attributes. An application could have fairly open access to an organization’s raw market material, but access to a core billing system would be limited and privileged.

Security Integrated into Each Service

The Service Broker gives administrators the ability to define access control of services. Service-level access control ensures developers and applications only have access to the environment and service necessary. When a Service is fully managed, credentials are encapsulated in the Service Broker. The result is that passwords no longer need to be passed across different teams and resources, but instead are managed by a single administrator.

Finally, the Service Broker provides full auditing of all services. Auditing simply keeps track of what Services have been created or changed, all through the API. This type of audit trail is great if you are an administrator and trying to figure out who last made changes to a critical database service.

Self-Service for the Masses

Managed Services are available for download from Pivotal Network, and are added by an administrator to the platform. All Services available within the platform are accessible to developers via the Marketplace. The Marketplace allows self-provisioning of software as it is needed by developers and applications.

Services from Pivotal like RabbitMQ, Redis, and Pivotal Tracker, as well as popular third-party software, like Jenkins Enterprise by CloudBees and Datastax Enterprise Cassandra, are available immediately. The Marketplace provides a complete self-service catalog, speeding up the development cycle.

Services View in Pivotal Network

The breadth and availability of Services ensures that operators provide development teams access to the resources that they need, when they need them. A developer, who is writing a new application that requires a MySQL database, can easily select and provision MySQL from the Marketplace. The platform then creates the unique credentials for the database and applies those to the application.
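A hedged sketch of that last step from the application's point of view: the injected credentials arrive in the VCAP_SERVICES environment variable, so the app can assemble its database connection without anyone handling a password by hand. The "cleardb" label and credential field names are assumptions and depend on the actual MySQL broker in use.

```python
# db_config.py - build a MySQL connection URL from platform-injected credentials (sketch)
import json
import os

def mysql_url(label="cleardb"):
    """Return a connection URL for the first MySQL instance bound under `label`."""
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    instances = vcap.get(label, [])
    if not instances:
        raise RuntimeError(f"No service bound under label {label!r}")
    creds = instances[0]["credentials"]
    # Many MySQL brokers hand back a ready-made URI; otherwise assemble one ourselves.
    if "uri" in creds:
        return creds["uri"]
    return "mysql://{username}:{password}@{hostname}:{port}/{name}".format(**creds)

print(mysql_url())
```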

Rapid Time to Market with Mobile Services

The expansive Services catalog extends to the Pivotal Mobile Services, announced in August 2014. These mobile backend services allow organizations to rapidly develop and deploy mobile applications. The accessibility of mobile backend services through the Pivotal Cloud Foundry platform ensures that developers are able to easily build new mobile applications leveraging capabilities such as push notifications and data sync.

Essentials of a Services-Enabled Platform

Developers want to quickly deploy a database or middleware service, without a slow and manual provisioning process. IT operators want to be able to quickly meet the growing requests for new services, while also securely managing a complex environment. Provisioning Services through a PaaS is the natural solution to balancing the needs of the developers and IT operators, all while meeting the needs of the business.

A PaaS should provide simple access to full lifecycle management for services—from click-through provisioning to patch management and high availability. At Pivotal we have seen tremendous success with the release of services based on CloudBees and DataStax. The Pivotal Services ecosystem continues to grow, as do the capabilities of the Service Broker. This growth ensures the Pivotal Cloud Foundry platform will continue to meet the needs of organizations.


CloudFoundry

Case Study: How shrebo, the Sharing Economy Platform, Innovates with Cloud Foundry

3 Sep, 2014

In this post, former banking technology architect and current shrebo CTO, Patrick Senti, allows us to pick his brain and learn about his experiences with Cloud Foundry and Python/Django.

As background, Senti is an award-winning architect and programmer who is also certified in machine learning. He brings over 20 years of software experience from working on high profile projects at companies like IBM, SAS, and Credit Suisse.

Recently Senti founded shrebo and is currently CTO. Much like Uber, AirBnB, and Zipcar, his company uses cloud platforms to create new ways of sharing resources.  shrebo recently won an innovation award from Swiss Federal Railways (who is also a customer) and provides an entire web-services-based SaaS platform—a set of APIs for application developers to build with. Shrebo is the cloud platform for the sharing economy—allowing you to build your own AirBnB or Lyft on shrebo.

The functionality includes search, booking, scheduling, provisioning, payment, and social functionality endpoints surrounding the entire process of sharing resources. With their services, third parties can cost effectively and quickly create sharing applications in mobile, social, local and other contexts to help automate logistics, optimize machinery, increase asset and fleet utilization, and run services for sharing any type of resource like cars, bikes, or parking spaces.

In three short sentences, what defined your needs for a platform like Cloud Foundry?

  1. Architecting a start-up with a big vision means we needed stability at the ground level.
  2. We need developers to focus on solving problems and coding instead of technical details.
  3. While stacks like Amazon were interesting, we needed to be higher up the stack to make service distribution easier.

If you could summarize, what would you say the outcomes have been so far?
This PaaS platform reduces risk and creates a lot of efficiency and effectiveness. Cloud Foundry is a real blast of an experience as a developer. We essentially package the application along with the services manifest and press a button. Everything else is done by the platform. This type of thing used to take days or weeks. Now, it takes minutes.

OK, let’s cover some background. How did you, as an entrepreneur, decide to start this company?
Well, it was really three ideas that came together.

A few years back, I shared a sailboat with other people. It was always difficult to manage the logistics because smartphones were not as common. However, the idea of making the process easier…that stuck with me. I thought, “It would be really useful to have a sharing service for any private resource.”

Two, it’s easy to see how more and more resources are shared between people across companies. If you work at the same company, everyone is on a shared calendar; so, it’s not needed. When people come together across companies, like in a start-up environment or if you share some large asset like trains, you need a better way to share. It seemed like a good idea to have an API-based platform to help build solutions like this.

Lastly, I have done a lot with analytics. I realized there could be process improvement and optimization if people and companies could successfully share information and results. For example, we cooperate with Swiss Federal Railways, who named us a winner for innovation as a start-up, and they operate tons of trains and cars each day. The trains are very overused in many city areas. When they are full, commuters don’t know where to sit. In fact, this is a sharable resource. So, my “aha” moment here was about connecting people via a smartphone app—we could allow them to see and choose travel based on load and occupancy. They could make a conscious decision to board a certain train. We could really help improve this if we use predictive analytics. So, we do that too—this is much like managing meeting rooms but on a really large scale.

Could you tell us more about shrebo as a business and how Pivotal and Cloud Foundry fit?
In a nutshell, companies use our stuff to build their own end-user sharing applications.

We are a software services platform for any sharing solution. If you are familiar with car sharing, like a renting process or a smartphone app for scheduling a car like Zipcar or Uber in the U.S., our platform supports all the functionality you would need to build such an app through a set of developer APIs. The shrebo platform allows companies of any size to more easily and cost effectively develop their own sharing applications and online services and mix them with other mobile, social, and local apps.

In our initial architecture process, we evaluated several PaaS providers. Ultimately, we chose AppFog, who is now owned by CenturyLink. They run Cloud Foundry with Python, Java, Node.js, PHP, and Ruby. Our technology framework allows us to elastically provision services through AppFog’s services API. Also, Pivotal provides us with Redis as a data structure service and RabbitMQ as a message broker service.

How will your customer experience be improved through our technologies?
Well, our app is not directly visible through some user interface for end users; it is more of a white-label solution. However, we provide app and data services that enable others to provide end-user applications. So, those companies can rely on our API to meet SLAs cost effectively. Eventually, they need their apps to perform or users will go somewhere else. If we fail, so do they.

In the context of us powering other technologies, we use Cloud Foundry and AppFog to provision services much faster, at a lower cost, and with greater scale as real-time demand shifts. This PaaS is certainly more cost effective than Amazon, and it’s less complex and risky than traditional hosting providers who make us build the stack by ourselves. It’s also very automated. All of these things allow us to deliver code, deploy updates, and fix issues on our platform quickly.

Do you have any examples where Cloud Foundry’s Elastic Runtime Service delivered something unexpected?
Yes, absolutely. At one point in time, the AppFog infrastructure needed an upgrade. They had some planned downtime and planned to make changes to one data center at a time across Singapore, Ireland, and the U.S. With Cloud Foundry, we had zero down time even though they did. As they were bringing down a data center to make updates, we set up new instances on the fly and shifted workloads. As they moved around the world, we automatically migrated across data centers. We didn’t have any downtime even though they did! Before Cloud Foundry, this was really not possible. We could increase and decrease instances on the fly. Quite amazing.

How does Cloud Foundry compare to other platforms you have worked with?

Provisioning is so different. So, I spent a lot of time in the financial services industry—for about 10 years. We primarily worked with traditional hosting platforms. You get a Linux machine, then you do an install. Along the process, a group of experts all touch it before it is ready to deploy code. Cloud Foundry is totally different.

Like I said before, Cloud Foundry is a real blast of an experience as a developer. This type of thing used to take days or weeks. Now, it takes minutes. As a CTO, this is the experience every developer has been looking for over the past 10 years. We get the power of scripting—an event can trigger just about anything we need.

What other overarching Cloud Foundry capabilities are top of mind?
Flexibility and agility are top of the list. I see extensibility as a major benefit too. The whole architecture is built with open source contributions in mind. Adding new stacks is encouraged.

How do Cloud Foundry and AppFog support your stack?
Cloud Foundry is the PaaS and AppFog integrates and operates the PaaS and IaaS. They also provide a Django/Python stack on Cloud Foundry. We didn’t have to do anything to get this set up. It was provided when we turned on our account. This is the way development should work!

Could you tell us a bit about the other services you use on Cloud Foundry?

We use Redis and RabbitMQ—these are both very powerful services for data structure and messaging middleware, and we have two main use cases. We use MySQL for persistence, also provisioned and managed by Cloud Foundry.

First, we provide elements of a web app and APIs as a service to the outside world. The runtime is a traditional Python process intended for short-lived, synchronous workloads. Second, there are many asynchronous processes that trigger over time. For example, the website generates a lot of events. If someone is booking a resource, confirming a booking, or cancelling, these are all events. Workflow workloads run asynchronously in the background and do various things like send an SMS or email. There is a separation of concerns here between the two types of workloads, short-lived web requests and longer-lived background tasks, and we use RabbitMQ to decouple these sub-systems; it is used on both sides.

Our RabbitMQ setup uses Redis as an in-memory data store to transport the messages between the two areas—the web APIs and the messaging, which is based on Celery as well. In addition, we use Redis as a distributed lock—as soon as a reservation is made, we want to ensure no one else can grab the resource. Since we have four or five instances on the web API side, we run into a distributed lock problem. Redis solves this problem for us. So, RabbitMQ is the broker, Redis powers the state of notification events, and it also acts as a data store to log transaction events—for all events and messages across the two workloads.
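A stripped-down sketch of that split, with illustrative hostnames and task names rather than shrebo's actual code: Celery carries work from the web tier to background workers over RabbitMQ, Redis stores results, and a Redis SET with NX plus an expiry acts as a simple distributed lock so only one instance can claim a resource at a time.

```python
# tasks.py - web/worker split with RabbitMQ as broker and Redis as backend + lock (sketch)
import redis
from celery import Celery

app = Celery(
    "sharing",
    broker="amqp://guest:guest@localhost:5672//",   # RabbitMQ (illustrative URL)
    backend="redis://localhost:6379/0",             # Redis result store (illustrative URL)
)
locks = redis.Redis(host="localhost", port=6379, db=1)

@app.task
def confirm_booking(resource_id, user_id):
    # SET with nx=True and an expiry gives a basic distributed lock across instances,
    # so two users cannot reserve the same resource at the same moment.
    if not locks.set(f"lock:{resource_id}", user_id, nx=True, ex=30):
        return {"status": "conflict", "resource": resource_id}
    try:
        # ... persist the booking, then send SMS/email notifications, etc.
        return {"status": "confirmed", "resource": resource_id}
    finally:
        locks.delete(f"lock:{resource_id}")

# The web tier would enqueue this asynchronously with:
#   confirm_booking.delay(resource_id, user_id)
```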

The workflows, events, and messages store data in an analytics warehouse. At this point, we are using Hadoop, but we haven’t deployed it at a large scale quite yet. We are also looking at Storm.


While you are a CTO, you were a developer at one point and probably get your hands dirty at times. What makes Cloud Foundry so cool for developers?
Cloud Foundry has a command line interface, and AppFog has built an API. As I mentioned earlier, this gives developers the power of scripting. We spend more time coding and less time dealing with the foundational platform. So, all deployment tasks—infrastructure, new instances, deploying code, unit testing, integration testing, screen tests—all of this can be scripted. We push a button to trigger these processes. While this is possible with any modern infrastructure, Cloud Foundry really gives us the ability to stay at a high level in the stack and avoid getting into details.

We also really like the separation between applications and code. When we write the deployment manifest and need services like MySQL or Redis, we just tell the system we need an instance of it in a few lines of configuration without having to worry about the config of the underlying service. The underlying stuff is all handled by the platform.

We have gotten off the ground very quickly. In fact, Facebook has this concept of hiring developers who deploy code on the same day they start. Now, we have the same type of environment—one of our new developers can deploy code the same day they start with shrebo. This is one of the most exciting facts about Cloud Foundry—we can have a small team of 4 people work on the app and run the infrastructure, and we can scale people this way too. Five years ago, it would have required 10 or more people to run this same infrastructure and do development.

In terms of monitoring, are you using any of Cloud Foundry’s built in capabilities?

Well, we aren’t using the Cloud Foundry health check services API directly. We decided to use New Relic as the application-level performance metrics monitor. From outside, we use Pingdom to verify uptimes. New Relic is extremely easy to plug into Cloud Foundry applications—they have a Python extension that works out of the box with Cloud Foundry. As described, we use RabbitMQ to do app-level event processing, but we also use it to send New Relic custom events to their services infrastructure.

As a CTO/CIO level person with a financial services background, you have a clear handle on ROI, C-level metrics, and risk management. What about Cloud Foundry is powerful from this perspective?
If I talk about this from my banking background, I can easily see how Cloud Foundry produces efficiency and effectiveness—this helps reduce costs and risk. Let me explain.

When we release code without Cloud Foundry, it might take 4-6 months at a traditional bank with more complexity and legacy systems. With Cloud Foundry, you can reduce that time to days or weeks. There is an order of magnitude level of improvement in terms of these process metrics. There are fewer people involved, it is easier for developers to fix problems, we can scale dynamically without emailing anyone—this all reduces cost and time, and as a result increases the productivity of the whole organization.

In terms of risk, Cloud Foundry clearly reduces operational risk. This is largely related to the lower number of people that need to be involved. For example, if an app shows signs of crashing, we don’t have to involve a team to resolve. A developer can fix the code, and an automated, daily build can update the problems or a button can redeploy without five people having to coordinate on a conference call or group chat. As well, I’ve described how the scale and uptime scenarios work—this reduces downtime and ensures SLA support. This is particularly important for mission-critical, revenue generating apps. So, Cloud Foundry has a direct impact on the risk of our top and bottom line.

Would you mind sharing an architecture slide on your stack?
Sure, see below. We use HTML5/Bootstrap, JSON/REST, Redis, MongoDB, RabbitMQ, MySQL, and Python/Django for the online services or real-time runtime. There is some overlap here with our batch or periodic workloads. This part of the architecture is another conversation altogether. However, we use R, Elasticsearch, Storm, Hadoop, and much more. The infrastructure includes Nginx, Varnish, Gunicorn, Cloud Foundry, Cloudinary, Searchly, Amazon AWS, Mailgun, and GlobalSMS.

Thank you so much, is there anything you’d like to add from a personal perspective? What do you like to do in your off time? Anything you’d like to accomplish before you leave this little rock we call Earth?
Sure. In my spare time I like to spend time with friends and family. I also very much enjoy riding my bike in the woods, or sailing that very boat that we used to share, which is based on a small lake in Switzerland. Beyond that, I plan to do a lot more sailing at sea; in fact, one of my dreams is to become a certified skipper for maritime sailing yachts.

, , , , , , , , , , , , , , , , , , ,

CloudFoundry

Case Study: How shrebo, the Sharing Economy Platform, Innovates with Cloud Foundry

3 Set , 2014  

featured-time-to-innovateIn this post, former banking technology architect and current CTO, Patrick Senti, allows us to pick his brain and learn about his experiences with Cloud Foundry and Python/Django.

As background, Senti is an award-winning architect and programmer who is also certified in machine learning. He brings over 20 years of software experience from working on high profile projects at companies like IBM, SAS, and Credit Suisse.

Recently Senti founded shrebo and is currently CTO. Much like Uber, AirBnB, and Zipcar, his company uses cloud platforms to create new ways of sharing resources.  shrebo recently won an innovation award from Swiss Federal Railways (who is also a customer) and provides an entire web-services-based SaaS platform—a set of APIs for application developers to build with. Shrebo is the cloud platform for the sharing economy—allowing you to build your own AirBnB or Lyft on shrebo.

The functionality includes search, booking, scheduling, provisioning, payment, and social functionality endpoints surrounding the entire process of sharing resources. With their services, third parties can cost effectively and quickly create sharing applications in mobile, social, local and other contexts to help automate logistics, optimize machinery, increase asset and fleet utilization, and run services for sharing any type of resource like cars, bikes, or parking spaces.

In three short sentences, what defined your needs for a platform like Cloud Foundry?

  1. Architecting a start-up with a big vision means we needed stability at the ground level.
  2. We need developers to focus on solving problems and coding instead of technical details.
  3. While stacks like Amazon were interesting, we needed to be higher up the stack to make service distribution easier.

If you could summarize, what would you say the outcomes have been so far?
This PaaS platform reduces risk and creates a lot of efficiency and effectiveness. Cloud Foundry is a real blast of an experience as a developer. We essentially package the application along with the services manifest and press a button. Everything else is done by the platform. This type of thing used to take days or weeks. Now, it takes minutes.

OK, let’s cover some background. How did you, as an entrepreneur, decide to start this company?
Well, it was really three ideas that came together.

A few years back, I shared a sailboat with other people. It was always difficult to manage the logistics because smartphones were not as common. However, the idea of making the process easier…that stuck with me. I thought, “It would be really useful to have a sharing service for any private resource.”

Two, it’s easy to see how more and more resources are shared between people across companies. If you work at the same company, everyone is on a shared calendar; so, it’s not needed. When people come together across companies, like in a start-up environment or if you share some large asset like trains, you need a better way to share. It seemed like a good idea to have an API-based platform to help build solutions like this.

Lastly, I have done a lot with analytics. I realized there could be process improvement and optimization if people and companies could successfully share information and results. For example, we cooperate with Swiss Federal Railways, who awarded us a winner for innovation as a start-up, and they operate tons of trains and cars each day. The trains are very overused in many city areas. When they are full, commuters don’t know where to sit. In fact, this is a sharable resource. So, my “aha” moment here was about connecting people via a smartphone app—we could allow them to see and chose travel based on load and occupancy. They could make a conscious decision to board a certain train. We could really help improve this if we use predictive analytics. So, we do that too—this is much like managing meeting rooms but on a really large scale.

Could you tell us more about shrebo as a business and how Pivotal and Cloud Foundry fit?
In a nutshell, companies use our stuff to build their own end-user sharing applications.

We are a platform software services for any sharing solution. If you are familiar with car sharing, like a renting process or a smartphone app for scheduling a car like Zipcar or Uber in the U.S., our platform supports all the functionality you would need to build such an app through a set of developer APIs. The shrebo platform allows companies of any size to more easily and cost effectively develop their own sharing applications and online services and mix them with other mobile, social, and local apps.

In our initial architecture process, we evaluated several PaaS providers. Ultimately, we chose App Fog, who is now owned by CenturyLink. They run Cloud Foundry with Python, Java, Node.js, PHP, and Ruby. Our technology framework allows us to elastically provision services through App Fog’s services API. Also, Pivotal provides us with Redis as a data structure service and RabbitMQ as a message broker service.

How will your customer experience be improved through our technologies?
Well, our app is not directly visible through some user interface for end users, it is more of a white label solution. However, we provide app and data services that enable others to provide end-user applications. So, those companies can rely on our API to meet SLAs cost effectively. Eventually, they need their apps to perform or users will go somewhere else. If we fail, so do they.

In the context of us powering other technologies, we use Cloud Foundry and AppFog to provision services much faster, at a lower cost, and with greater scale as real-time demand shifts. This PaaS is certainly more cost effective than Amazon, and it’s less complex and risky than traditional hosting providers who make us build the stack by ourselves. It’s also very automated. All of these things allow us to deliver code, deploy updates, and fix issues on our platform quickly.

Do you have any examples where Cloud Foundry’s Elastic Runtime Service delivered something unexpected?
Yes, absolutely. At one point in time, the AppFog infrastructure needed an upgrade. They had some planned downtime and planned to make changes to one data center at a time across Singapore, Ireland, and the U.S. With Cloud Foundry, we had zero down time even though they did. As they were bringing down a data center to make updates, we set up new instances on the fly and shifted workloads. As they moved around the world, we automatically migrated across data centers. We didn’t have any downtime even though they did! Before Cloud Foundry, this was really not possible. We could increase and decrease instances on the fly. Quite amazing.

How does Cloud Foundry compare to other platforms you have worked with?

Provisioning is so different. So, I spent a lot of time in the financial services industry—for about 10 years. We primarily worked with traditional hosting platforms. You get a Linux machine, then you do an install. Along the process, a group of experts all touch it before it is ready to deploy code. Cloud Foundry it totally different.

Like I said before, Cloud Foundry is a real blast of an experience as a developer. This type of thing used to take days or weeks. Now, it takes minutes. As a CTO, this is the experience every developer has been looking for over the past 10 years. We get the power of scripting—an event can trigger just about anything we need.

What other overarching Cloud Foundry capabilities are top of mind?
Flexibility and agility are top of the list. I see extensibility as a major benefit too. The whole architecture is built with open source contributions in mind. Adding new stacks is encouraged.

How does Cloud Foundry and AppFog support your stack?
Cloud Foundry is the PaaS and AppFog integrates and operates the PaaS and IaaS. They also provide a Django/Python stack on Cloud Foundry. We didn’t have to do anything to get this set up. It was provided when we turned on our account. This is the way development should work!

Could you tell us a bit about the other services you use on Cloud Foundry?

We use Redis and RabbitMQ—these are both very powerful services for data structure and messaging middleware, and we have two main use cases. We use MySQL for persistence, also provisioned and managed by Cloud Foundry.

First, we provide elements of a web app and APIs as a service to the outside world. The runtime is a traditional Python process intended for short-lived, synchronous workloads. Second, there are many asynchronous processes that trigger over time. For example, the website generates a lot of events. If someone is booking a resource, confirming a booking, or cancels, these are all events. Workflow workloads run asynchronously in the background and do various things like send an SMS or email. There is a separation of concerns here between the two types of workloads, short-lived web requests, and longer-lived  background tasks, and we use RabbitMQ to decouple these sub-systems, and it is used on both sides.

Our RabbitMQ uses Redis as an in-memory data store to transport the messages between the two areas—the web APIs and the messaging, which is based on Celery as well. In addition, we use Redis as a distributed log instance—as soon as a reservation is made, we want to ensure no one else can grab the resource. Since we have four or five instances on the web API side, we run into a distributed lock problem. Redis solves this problem for us. So, RabbitMQ is the broker, Redis powers the state of notification events, and it also acts as a data store to log transaction events—for all events and messages across the two workloads.

The workflows, events, and messages store data in an analytics warehouse. At this point, we are using Hadoop, but we haven’t deployed it at a large scale quite yet. We are also looking at Storm.


While you are a CTO, you were a developer at one point and probably get your hands dirty at times. What makes Cloud Foundry so cool for developers?
Cloud Foundry has a command line interface, and AppFog has built an API. As I mentioned earlier, this gives developers the power of scripting. We spend more time coding and less time dealing with the foundational platform. So, all deployment tasks—infrastructure, new instances, deploying code, unit testing, integration testing, screen tests—all of this can be scripted. We push a button to trigger these processes. While this is possible with any modern infrastructure, Cloud Foundry really gives us the ability to stay at a high level in the stack and avoid getting into details.

We also really like the separation between applications and code. When we write the deployment manifest and need services like MySQL or Redis, we just tell the system we need an instance of it in a few lines of configuration without having to worry about the config of the underlying service. The underlying stuff is all handled by the platform.

We have gotten off the ground very quickly. In fact, Facebook has this concept of having new developers deploy code on the same day they start. Now, we have the same type of environment—one of our new developers can deploy code the same day they start with shrebo. This is one of the most exciting facts about Cloud Foundry—we can have a small team of 4 people work on the app and run the infrastructure, and we can scale people this way too. Five years ago, it would have required 10 or more people to run this same infrastructure and do development.

In terms of monitoring, are you using any of Cloud Foundry’s built in capabilities?

Well, we aren’t using the Cloud Foundry health check services API directly. We decided to use New Relic as the application-level performance monitor. From the outside, we use Pingdom to verify uptime. New Relic is extremely easy to plug into Cloud Foundry applications—they have a Python extension that works out of the box with Cloud Foundry. As described, we use RabbitMQ to do app-level event processing, but we also use it to send New Relic custom events to their services infrastructure.

As a CTO/CIO level person with a financial services background, you have a clear handle on ROI, C-level metrics, and risk management. What about Cloud Foundry is powerful from this perspective?
If I talk about this from my banking background, I can easily see how Cloud Foundry produces efficiency and effectiveness—this helps reduce costs and risk. Let me explain.

When we release code without Cloud Foundry, it might take 4-6 months at a traditional bank with more complexity and legacy systems. With Cloud Foundry, you can reduce that time to days or weeks. That is an order-of-magnitude improvement in these process metrics. There are fewer people involved, it is easier for developers to fix problems, and we can scale dynamically without emailing anyone—this all reduces cost and time, and as a result increases the productivity of the whole organization.

In terms of risk, Cloud Foundry clearly reduces operational risk. This is largely related to the lower number of people that need to be involved. For example, if an app shows signs of crashing, we don’t have to involve a team to resolve it. A developer can fix the code, and an automated, daily build can roll out the fix, or a button can redeploy, without five people having to coordinate on a conference call or group chat. As well, I’ve described how the scale and uptime scenarios work—this reduces downtime and helps us meet our SLAs. This is particularly important for mission-critical, revenue-generating apps. So, Cloud Foundry has a direct impact on the risk to our top and bottom line.

Would you mind sharing an architecture slide on your stack?
Sure, see below. We use HTML5/Bootstrap, JSON/REST, Redis, MongoDB, RabbitMQ, MySQL, and Python/Django for the online services or real-time runtime. There is some overlap here with our batch or periodic workloads. That part of the architecture is another conversation altogether. However, we use R, Elasticsearch, Storm, Hadoop, and much more. The infrastructure includes Nginx, Varnish, Gunicorn, Cloud Foundry, Cloudinary, Searchly, Amazon AWS, Mailgun, and GlobalSMS.

Thank you so much. Is there anything you’d like to add from a personal perspective? What do you like to do in your off time? Anything you’d like to accomplish before you leave this little rock we call Earth?
Sure. In my spare time I like to spend time with friends and family. I also very much enjoy riding my bike in the woods and sailing the very boat that we used to share, which is kept on a small lake in Switzerland. Building on this, I plan to do a lot more sailing at sea; in fact, one of my dreams is to become a certified skipper for maritime sailing yachts.



CloudFoundry

Docker Service Broker for Cloud Foundry

22 Aug , 2014

Editor’s Note (Update): Today (8/25/2014), VMware announced joint initiatives with Docker, Google, and Pivotal to help enterprises run and manage container-based applications on a common platform, at scale, in private, public, and hybrid clouds. In cooperation with Docker, Google, and Pivotal, VMware will enable enterprises to run and manage their containerized applications on their VMware infrastructure or on the VMware vCloud Air hybrid service, while minimizing complexity by reducing the need to build out new and separate infrastructure silos for their container initiatives. By offering a common platform, developers gain the speed and agility they need while IT teams retain the control they require. Read the full press release here.

There is no doubt that Docker has seen explosive growth since debuting last year.

The Pivotal CF team is a fan of Docker.

We firmly believe PaaS and containers are a good match, and Docker makes it super simple to build and share a consistent container image. A quick look at the Docker Hub Registry already turns up some 14,000 Docker images ready to use.

Previously, I have talked about integration points between Cloud Foundry and Docker. Recently, we introduced a Cloud Foundry BOSH release to manage stateful Docker containers. We are also replacing our Warden Linux backend to support libcontainer. If you attended the last CF-Summit, I’m pretty sure you heard about our plans with Diego and Docker.

But, our curiosity about how we can provide better integration points never ends!

Today, we take a step forward. In this article I will explain how, using Docker, you can easily add a full catalog of development and testing services to your Cloud Foundry deployment in just minutes, like this:

Docker Services Marketplace

Cloud Foundry Service Brokers

Applications typically depend on services (for example, databases, messaging, third-party SaaS providers) to run, store data, push mobile notifications, etc. In addition to providing an easy, fast way for developers to deploy applications, Cloud Foundry also provisions new service instances and binds credentials on demand without the need to wait for service administrators to provide them.

When a Cloud Foundry developer provisions and binds a service to an application, the component responsible for providing the service instance is the “Service Broker.” This broker advertises a catalog of service offerings and service plans to Cloud Foundry, and receives calls from Cloud Foundry for four lifecycle functions: create, delete, bind, and unbind. The broker passes these calls to the service itself. Service providers determine how a service is implemented, and Cloud Foundry only requires that the service provider implements the “Service broker API”.
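
For readers who have never written one, the sketch below shows the rough shape of those endpoints as a tiny Flask application. It is only an illustration of the v2 Service Broker API surface, not the code of the broker introduced below, and the service and plan IDs are placeholders:

# Minimal, hypothetical v2 service broker: catalog plus the four lifecycle calls.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/v2/catalog')
def catalog():
    # Advertise service offerings and plans to Cloud Foundry.
    return jsonify(services=[{
        'id': 'example-service-id', 'name': 'example-db', 'bindable': True,
        'description': 'Example service for development and testing',
        'plans': [{'id': 'example-plan-id', 'name': 'free',
                   'description': 'Free Trial'}]}])

@app.route('/v2/service_instances/<instance_id>', methods=['PUT'])
def provision(instance_id):
    # Create: spin up whatever backs the service (e.g. start a container).
    return jsonify({}), 201

@app.route('/v2/service_instances/<instance_id>', methods=['DELETE'])
def deprovision(instance_id):
    # Delete: tear the service instance down again.
    return jsonify({}), 200

@app.route('/v2/service_instances/<instance_id>/service_bindings/<binding_id>',
           methods=['PUT'])
def bind(instance_id, binding_id):
    # Bind: return credentials; Cloud Foundry exposes them via VCAP_SERVICES.
    return jsonify(credentials={'uri': 'mongodb://user:pass@host:27017/db'}), 201

@app.route('/v2/service_instances/<instance_id>/service_bindings/<binding_id>',
           methods=['DELETE'])
def unbind(instance_id, binding_id):
    # Unbind: revoke the credentials handed out above.
    return jsonify({}), 200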

The task for a Cloud Foundry operator to add a new service offering usually involves two steps:

  • Deploy the service nodes. This could be a pre-existing, shared service node or a specific node to be used only by Cloud Foundry (deployable or not via CF-BOSH).
  • Create a specific service broker for that type of service. The service broker must adhere to the “Service broker API”, must implement the necessary actions to provision/unprovision a service instance, and, optionally, can implement a mechanism to grant/revoke credentials for a specific, bound application.

This task is not trivial. It adds a fair amount of complexity, which is especially hard to justify for development and testing services, where there is no need to guarantee that the data can be recovered if it is destroyed or corrupted.

One of the main topics that arises in our conversations with prospects and customers is how to improve this. We have talked a lot about making it easier to offer a full catalog of development services. Lately, with all the interest in Docker, we are also asked why teams have not been able to use Docker to run their development and testing services.

Docker Service Broker

Today, I am proud to announce an experimental project. We are providing the “Containers Service Broker for Cloud Foundry,” a generic containers service broker for the Cloud Foundry v2 services API.

This service broker allows Cloud Foundry operators to do a few things. They can expose and provision service offerings that run inside a compatible container backend (currently only Docker is supported) and bind applications to those services. The management tasks that the broker can perform are:

  • Provision a service container with random credentials
  • Bind a service container to an application:
    • Expose the credentials to access the provisioned service
    • Provide a syslog drain service for your application events and logs
  • Unbind a service container from an application
  • Unprovision a service container
  • Expose a service container management dashboard

The service broker can be started as a standalone Docker container (from a pre-built public Docker image, or by creating your custom image from a Dockerfile) or be deployed alongside the Docker CF-BOSH release (which will also monitor the state of the service broker).

The configuration of the service broker is also pretty simple. You will need to expose your service catalog offering (name, description, metadata, plans, …), set the backend to run your containers (Docker), configure the properties of the service containers to start, and define the credentials to expose. There is more documentation on how to configure the service broker in the GitHub repository README, but let’s see a service offering properties sample:

services:
  - id: '23eefc93-6814-4db8-bbf0-eaa6c3c5fecc'
    name: 'mongodb26'
    description: 'MongoDB 2.6 service for application development and testing'
    bindable: true
    tags:
      - 'mongodb26'
      - 'mongodb'
      - 'document'
    metadata:
      displayName: 'MongoDB 2.6'
      longDescription: 'A MongoDB 2.6 service for development and testing running inside a Docker container'
      providerDisplayName: 'Pivotal Software'
      documentationUrl: 'http://docs.run.pivotal.io'
      supportUrl: 'http://support.run.pivotal.io/home'
    dashboard_client:
      id: 'p-mongodb26-client'
      secret: 'p-mongodb26-secret'
      redirect_uri: "http://cf-containers-broker.run.pivotal.io/manage/auth/cloudfoundry/callback"
    plans:
      - id: '6d8024b1-7deb-4dc7-a72a-33ec3f7a1bfc'
        name: 'free'
        container:
          backend: 'docker'
          image: 'frodenas/mongodb'
          tag: '2.6'
          command: '--smallfiles --httpinterface'
          persistent_volumes:
            - '/data'
        credentials:
          username:
            key: 'MONGODB_USERNAME'
          password:
            key: 'MONGODB_PASSWORD'
          dbname:
            key: 'MONGODB_DBNAME'
          uri:
            prefix: 'mongodb'
            port: '27017/tcp'
        description: 'Free Trial'
        metadata:
          costs:
            - amount:
                usd: 0.0
              unit: 'MONTHLY'
          bullets:
            - 'Dedicated 2.6 MongoDB server'
            - 'MongoDB 2.6 running inside a Docker container'

Now, let’s see this in action. I will start the service broker on a Linux VM as a Docker container from the public image hosted at the Docker Hub Registry. Once started, I will register the service broker at our Cloud Foundry deployment and make all services in the broker’s catalog publicly visible. The Cloud Foundry marketplace will then show all available services, so we can start provisioning them. You will see how running the Cloud Foundry service provisioning command from our laptop creates a Docker container on the service broker host running our “Dockerized” service.

Learn more in “Docker Service Broker for Cloud Foundry – Part 1″ at youtube.com/watch?v=cxBKN_nV59g.

Where are My Credentials?

Having the ability to provide a single-tenant service on demand allows developers to interact with the service as if they were administrators. This is very powerful: they can create and destroy databases, configure the engine, and more, without interfering with other users.

On the other hand, most applications expect that a database and a user with enough privileges to access the database have been created previously to start the application. Cloud Foundry Services API allows services to provide “bindable” service instances. This means that binding a service to an application will add credentials for the service instance through the VCAP_SERVICES environment variable.
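
As an illustration, an application could read those credentials with a few lines like the sketch below; the lookup key matches the 'mongodb26' sample catalog shown above, so adapt it to your own service names:

# Hypothetical sketch: read the credentials a bound service exposes via VCAP_SERVICES.
import json
import os

def bound_credentials(service_name='mongodb26'):
    vcap = json.loads(os.environ.get('VCAP_SERVICES', '{}'))
    for instances in vcap.values():
        for instance in instances:
            if instance.get('label') == service_name or service_name in instance.get('tags', []):
                return instance['credentials']  # e.g. username, password, dbname, uri
    return None

creds = bound_credentials()
if creds:
    print('connect to', creds['uri'])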

But how can we make a service container bindable and set up credentials when binding an application? Let’s see an example. In the properties shown previously, there is a section inside the plan named “credentials.” This is where you define the set of credentials (user, password and dbname) for the container:

credentials:
  username:
    key: 'MONGODB_USERNAME'
  password:
    key: 'MONGODB_PASSWORD'
  dbname:
    key: 'MONGODB_DBNAME'
  uri:
    prefix: 'mongodb'
    port: '27017/tcp'

Each container backend is responsible for injecting those credential options into the container. For example, the Docker backend injects the credentials using environment variables: the “username” is injected via the environment variable “MONGODB_USERNAME,” and its value is generated randomly by the service broker.

To make this really work, the container must support creating a username and a database at startup based on the injected environment variables. Take a quick look at how the MongoDB Docker image uses the injected environment variables to create a username and a database. There is also more documentation on how to set the credentials in the CREDENTIALS document at the service broker GitHub repository.
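
For illustration only, the idea behind such a startup hook looks roughly like the following Python sketch, run inside the container against the local server before it accepts outside connections; the real image implements this in its own startup scripts, so treat the details here as assumptions:

# Read the variables injected by the Docker backend and create the database and user.
import os
from pymongo import MongoClient

username = os.environ['MONGODB_USERNAME']
password = os.environ['MONGODB_PASSWORD']
dbname = os.environ['MONGODB_DBNAME']

client = MongoClient('mongodb://localhost:27017')
db = client[dbname]
# Grant the generated user read/write access to the freshly created database.
db.command('createUser', username,
           pwd=password,
           roles=[{'role': 'readWrite', 'db': dbname}])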

Let’s see the above example in action. I will provision a Docker MongoDB service and bind it to a demo application. I will show how the credentials are injected into the Docker service container at boot time, and then how the application receives the credentials via the VCAP_SERVICES environment variable, which can also be inspected with the “cf env APPNAME” command.

Visit youtu.be/AaWguQi_18g for the “Docker Service Broker for Cloud Foundry – Part 2″ video.

Use Cases and Multi-Database Engine Tests

A nice use case for the Docker service broker is when you want to test your application on several database engines. Enabling multiple database services with the service broker is now really easy. You just need to find the appropriate Docker image. As well, developers can switch from one service to another by simply binding the first service to the application, performing the tests, unbinding the first service, binding the second service, and testing again. This provides developers with an outstanding and fast way to test their applications on multiple database services.

Let’s see how this is possible using the “Containers Service Broker for Cloud Foundry”.

The Spring-Music application has been built to store the same domain objects in one of a variety of different persistence technologies – relational, document, and key-value stores. The application uses Spring Java configuration and bean profiles to configure the connection objects needed to use the persistence stores. It also uses the Spring Cloud library to inspect the environment when running on Cloud Foundry and configure the appropriate Spring profile. First, I will bind a MySQL service to the application. Once the application is restarted, the application is reconfigured automatically to use the MySQL instance as the persistence layer. Later, I will switch from a MySQL to PostgreSQL instance, and I will show again how the application, after restart, is reconfigured to use the new persistence backend.

Learn more with Part 3 of the video series at youtu.be/192ogfmJPPc.

Draining logs, ‘cause Operational Problems Happen

Cloud Foundry keeps a limited amount of application logging information in memory. When you want to persist more log information than this, you must drain logs to a third-party log management service. You can then use the third-party service to analyze the logs. For example, you could produce metrics for application response times or error rates.

The “Containers Service Broker for Cloud Foundry” allows services to declare a syslog drain port. When the service container is provisioned and bound to an application, the service broker exposes the syslog drain URL to Cloud Foundry, which then automatically starts emitting the bound application’s events and logs to the service.
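
To picture what the drain target receives, here is a minimal, hypothetical sketch of a TCP listener that accepts the drained log lines and prints them; a real drain service, such as the Logstash example below, would parse and index them instead:

# Tiny syslog-drain endpoint: accept TCP connections and print each log line.
import socketserver

class DrainHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:
            # Each line is one event or log message from the bound application.
            print(line.decode('utf-8', errors='replace').rstrip())

if __name__ == '__main__':
    # Listen on the port declared as the syslog drain port in the service plan.
    server = socketserver.TCPServer(('0.0.0.0', 5140), DrainHandler)
    server.serve_forever()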

Again, let’s see a demo example. I will provision a “Dockerized” Logstash service that enables syslog draining. Once I bind the provisioned service to an application, I will check the service logs via the service broker management dashboard to see how the bound application’s events and logs start being drained to the Logstash service.

Watch the demo in Part 4 of the video series at youtu.be/9hTo6Vk_cWk.

Known Limitations

The “Containers Service Broker for Cloud Foundry” is not presently a production-ready service broker. It is suitable for experimentation and may not be supported in the future.

There are also some caveats:

  • The service broker, when using Docker as a backend, can only be configured to connect to a single Docker daemon. That means each service broker manages a single node; if you want to provide more than one node, you will need to deploy one service broker per node.
  • The service broker does not provide any monitoring or self-healing for containers. If a container started by the service broker fails or is stopped, the service broker will not restart it.
  • The service broker allows you to set up persistent data for your services by defining a host volume mount point for the container. The broker automatically creates a host directory, named after the service instance ID and the volume name, and binds it to the container’s volume mount point. If the container dies or is stopped, since there is no self-healing mechanism, a manual action is required to start the container again and mount the correct host directory so that the persistent data is available again for the service.
  • When defining and mounting a container’s volumes using a host directory, there is no disk quota, so heavy applications can quickly fill the host storage disk and affect all containers running on the same host.

Summary & Links to Get Started

In this post, we explained how easy the “Containers Service Broker for Cloud Foundry” is to use. We can create a full catalog of “Dockerized” services for development and testing purposes and expose them to our Cloud Foundry users, without deploying a multi-tenant service node or carrying the overhead of a separate service broker for each service type.

We also explained how easy it is to run the service broker as a single Docker container or to deploy it using the Docker CF-BOSH release.

Lastly, all Docker service examples shown in this post are available at the Docker Hub Registry: ArangoDB, CouchDB, Elasticsearch, Logstash, Memcached, MongoDB, MySQL, Neo4j, PostgreSQL, RabbitMQ and Redis. An example configuration for each service offering is available in the Docker CF-BOSH release examples directory.

So, what are you waiting for? Expose a full set of ready-to-consume Docker services to your developers.

Engaging the Community on Additional Work

As the project is open source, we hope the community will contribute ideas to make this Cloud Foundry service broker even better. Please also feel free to share your Docker services with the whole, vibrant Cloud Foundry community so that everyone can use them within their environments (hmm, we will need a “Docker services for Cloud Foundry” web page 🙂 ).

Some ideas to improve the service broker experience: enabling developers to start/restart/stop containers from the service broker management dashboard; adding monitoring and self-healing capabilities for containers; adding quotas for persistent data; and so on.

Also, the service broker lets you add new container backends. How about supporting Warden? Or, better, having multiple nodes behind a single service broker (Diego?). Other ideas?

Learn More:


Uncategorized

Strengthening Apache Hadoop in the Enterprise with Apache Ambari

28 Jul , 2014

At Pivotal, we strongly support open source software. This, of course, includes significant contributions to important technologies like Cloud Foundry, Redis, Spring XD and RabbitMQ, just to name a few. We also have a deep commitment to Apache Hadoop. We invest heavily in Hadoop and complementary modules like HAWQ, GemFire XD and Pivotal Command Center. It is essential to our core, allows us to serve our customers in profound ways, and it is a fundamental part of our innovative Big Data Suite. Today we are excited to announce that Pivotal and Hortonworks will collaborate on the Apache Ambari project to help strengthen Hadoop as an enterprise offering and to further advance the Apache Hadoop ecosystem.

Apache Hadoop projects are central to our efforts to drive the most value for the enterprise. An open source, extensible and vendor neutral application to manage services in a standardized way benefits the entire ecosystem. It increases customer agility and reduces operational costs and can ultimately help drive Hadoop adoption.

We have great respect for the progress that has been made by Hortonworks in this area, specifically around the Ambari project.

Our Pivotal HD and Big Data Suite customers who favor Ambari—and the broader Apache Hadoop community—will benefit from additional development resources and functionality contributions Pivotal will make to the Ambari project, working closely with Hortonworks and others.


“Pivotal has a strong record of contribution to open source and has proven their commitment with projects such as Cloud Foundry, Spring, Redis and more. Collaborating with Hortonworks and others in the Apache Hadoop ecosystem to further invest in Apache Ambari as the standard management tool for Hadoop will be quite powerful,” says Shaun Connolly, VP Strategy at Hortonworks. “Pivotal’s track record in open source overall and the breadth of skills they bring will go a long way towards helping enterprises be successful, faster, with Hadoop.”

We are expanding our open source investment by dedicating Pivotal engineers to contribute installation, configuration and management capabilities to Ambari.  We will collaborate closely with Hortonworks, ASF, and the broader Apache Hadoop community in this effort. At the same time, we will continue to deliver on our commitments to existing customers and work closely with them to benefit from this collaboration.

Pivotal will continue to drive technology forward, to further the goal of enterprise-standard Hadoop. We are looking forward to our Apache Ambari contributions, along with further innovations in our Big Data Suite.


Cumulogic,PaaS

CumuLogic’s Eagle Release is Here

12 Feb , 2014  


The CumuLogic team just announced the Eagle release of the CumuLogic Cloud Services Platform, adding some significant new features to the product: Oracle DB support for Elastic WebStack; advanced DB clustering and multi-zone replication for our DBaaS offering using the Percona engine; CouchDB support in our NoSQL-as-a-Service feature; and a new Queue-as-a-Service feature with support for both RabbitMQ and IBM WebSphere MQ.

I’ve been talking with early adopters about this release for a couple of months now, and three things stand out from these conversations:

  • Application development teams that adopt the CumuLogic platform are able to develop and deploy applications at a faster pace than ever before
  • IT infrastructure teams are able to use the platform to operationalize newer technologies (like MongoDB and Couchbase) faster, and to reduce the cost of operations for known technologies, by jumping to “as a Service” delivery models
  • Cloud operators (both public and private) are seeing that their users truly are looking for more than just VMs-as-a-service. They want IaaS+

Feel free to dig into the links above to find out more about each of the new features in our Eagle release, or take a peek at our How It Works page to understand the larger platform’s approach to delivering “anything as a service”.

Now that our Eagle release is officially out, we’re excited about the potential for our customers (and partners), but we’re not done yet…  The team is actively working on our next release: Falcon. There are some big new features coming in Falcon that are going to give CumuLogic users even more choice in how they design, build and deploy applications.

Interested in learning how CumuLogic can help your organization deploy apps faster, reduce the operational risks of manual configuration, or build a cloud platform that offers what your users have come to expect? Get in touch with us here: http://www.cumulogic.com/get-started/contact/

-chip

