AWS

Amazon ElastiCache Launches Enhanced Redis Backup and Restore with Cluster Resizing

16 Mar 2017

We are excited to announce that Amazon ElastiCache now supports enhanced Redis Backup and Restore with Cluster Resizing. In October 2016, we launched support for Redis Cluster with Redis 3.2.4. In addition to letting you scale Redis workloads across up to 15 shards holding 3.5 TiB of data, that launch allowed creating cluster-level backups, which contain snapshots of each of the cluster’s shards. With this launch, we are adding the capability to restore a backup into a Redis Cluster with a different number of shards and a different slot distribution, allowing you to resize your Redis workload. ElastiCache parses the Redis key space across the backup’s individual snapshots and redistributes the keys in the new cluster according to the requested number of shards and hash slots. Your new cluster can be either larger or smaller, as long as the data fits in the selected configuration.
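The redistribution works because Redis Cluster assigns every key to one of 16,384 fixed hash slots using a CRC16 (XMODEM) checksum of the key; resizing only changes which shard owns each slot, never which slot a key hashes to. The sketch below shows the slot math; the `shard_for` even split is illustrative, not ElastiCache's exact placement logic (which also supports custom slot distributions and key hash tags):

```python
# Redis Cluster maps each key to one of 16,384 hash slots using the
# CRC16/XMODEM checksum of the key, modulo 16384.
def crc16(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot for a key (ignores {hash tag} handling for brevity)."""
    return crc16(key.encode()) % 16384

def shard_for(key: str, num_shards: int) -> int:
    # Illustrative even split of the slot range across shards.
    slots_per_shard = 16384 / num_shards
    return min(int(key_slot(key) // slots_per_shard), num_shards - 1)
```

For example, `key_slot("foo")` is 12182 regardless of cluster size; going from 3 to 5 shards simply moves slot 12182 to a different shard's range.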


AWS

Announcing availability of Amazon EC2 R3 Instances in the AWS South America (Sao Paulo) region

26 Jun 2015

You can now launch R3 instances, the latest generation of Amazon EC2 memory-optimized instances, in the South America (Sao Paulo) AWS region. R3 offers the best price point per GiB of RAM and high memory performance. R3 instances are recommended for in-memory analytics such as SAP HANA, high-performance databases (both relational databases and NoSQL databases such as MongoDB), and Memcached/Redis applications. R3 instances support Hardware Virtual Machine (HVM) Amazon Machine Images (AMIs) only.

R3 instances, powered by Intel Xeon Ivy Bridge processors, offer up to 32 vCPUs and 244 GiB of memory, and can deliver up to 150,000 4 KB random reads per second. R3 instances support Enhanced Networking for higher packet-per-second (PPS) performance, lower network jitter, and lower network latencies. Please refer to the R3 technical documentation for additional reference. R3 instances are available in two instance sizes in the South America (Sao Paulo) AWS region, with the following specifications:


CloudFoundry

All Things Pivotal Podcast Episode #15: What is Pivotal Web Services?

10 Feb 2015

You have a great idea for an app—and need a quick and easy place to deploy it.

You are interested in Platform as a Service—but you want a low risk and free way to try it out.

You are constrained on dev/test capacity and need somewhere your developers can work effectively and quickly—NOW.

Whilst you might be familiar with Pivotal CF as a Platform as a Service that you can deploy on-premises or in the cloud provider of your choice—you may not know that Pivotal CF is also available in a hosted form as Pivotal Web Services.

In this episode we take a closer look at Pivotal Web Services—what is it used for, and how you can take advantage of it.

PLAY EPISODE #15

 

RESOURCES:

Transcript

Speaker 1:

Welcome to the All Things Pivotal podcast, the podcast at the intersection of agile, cloud, and big data. Stay tuned for regular updates, technical deep dives, architecture discussions, and interviews. Please share your feedback with us by e-mailing podcast@pivotal.io.

Simon Elisha:

Hello everyone and welcome back to the All Things Pivotal podcast. Fantastic to have you back. My name is Simon Elisha. Good to have you with me again today. A quick and punchy podcast this week, but an interesting one nonetheless hopefully and answering a question that comes up quite commonly. Well, it’s really 2 questions. The first question is, how can I try out Pivotal Cloud Foundry really, really quickly without any set up time? Which often relates to my answer being, ‘Well have you heard of Pivotal Web Services?’ To which people say, ‘What is Pivotal Web Services?’ Also known as, PWS. Also known as P-Dubs.

Pivotal Web Services is a service available on the Web, funnily enough, at run.pivotal.io. It is a hosted version of Pivotal Cloud Foundry running on Amazon Web Services in the US. It provides a platform that you, as a developer, can use: push applications to it, organize your workspaces, and really use it as a development platform or even as a production location for your applications. It is a fully-featured running version of Pivotal CF in the Cloud. Not surprisingly, that’s what Pivotal CF can do, but this provides a hosted version for you.

Let’s unpack this a little bit and have a look at what it is and why you might want to use it. The first thing that’s good to know is you can connect to run.pivotal.io straight away. You don’t need a credit card to start and you get a 60-day free trial. I’ll talk about what you get in that 60-day free trial shortly, but the good thing to know is you can go and try it straight away. Often when I’m talking to customers and they’re dipping their toe in the water with platform as a service and they’re trying to understand what it is, they say, ‘Oh where can I just try and push an app or test something out?’ I say, ‘Hey go to Pivotal Web Services, it’s free, you can try it out, you can grab an application you’ve got on the shelf and just see what it’s like.’ They go, ‘Well that’s pretty cool, I can do it straight away.’ No friction in that happening.

In terms of what you can use on the platform, we currently support apps written in Java, Grails, Play, Spring, Node.js, Ruby on Rails, Sinatra, Go, Python, or PHP. Any of those will automatically be discovered and [hey presto 02:37] CF push and away we go. If you’ve been listening to previous episodes you’ll know the magic of the CF push process. If however you need another language, you can use a Community Buildpack or you can even write a custom one yourself that will run on the platform as well. Obviously, if you’re running an application you may want to consume some services. You can choose from a variety of third-party databases, e-mail services, and monitoring services that exist in the Marketplace on Pivotal Cloud Foundry. I’ll run you through what some of those services are because there really is a nice selection available for you.
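That push process is usually driven by a small manifest checked in next to the code. A hypothetical minimal example (the app and service names are illustrative, not from the episode):

```yaml
# manifest.yml — read by `cf push`; all names here are illustrative.
applications:
- name: my-sample-app
  memory: 512M
  instances: 2
  buildpack: java_buildpack   # omit to let the platform auto-detect
  services:
  - my-database               # a Marketplace service instance to bind
```

With this in place, `cf push` on its own stages the app with the named buildpack, starts two instances, and binds the listed service.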

What you then do is bind those services to your application, and Pivotal Cloud Foundry, and P-Dubs in particular, takes care of all the connection criteria, the binding process, the credentials etc, which makes it nice and easy. You may be saying, ‘Well hmm, what does this cost me to get access to this kind of platform?’ Well, it’s a really simple model. It’s application centric and you pay 3 cents US per gig per hour. That per-hour cost is for the amount of memory used by the application to run. Now, with that 3 cents you get included your routing, so your traffic routing, your load balancing, and you get up to 1 gig of ephemeral disk space on your app instances. You get free storage for your application files when they get pushed to the platform. You don’t pay for that storage cost at all. You get bandwidth both in and out, up to 2 terabytes of bandwidth. You get unified log streaming, which we’ll talk about, and health management, which we’ll also talk about.

As you can imagine, this could be a very cost-effective platform for dev, test and production workloads because you’re only paying for what you use when you use it, and you’re only paying at the application layer on a per-memory basis. Now, there’s a really handy pricing tab on the Pivotal Web Services page that lets you put in how many app instances you’d need for your application and will punch out that cost for you on a per-month basis for the hosting, which is really, really nice.
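At the quoted 3 cents US per gig per hour, the monthly estimate that pricing tab produces is simple arithmetic. A sketch, using the rate as quoted in the episode:

```python
RATE_PER_GB_HOUR = 0.03  # USD per GB of app memory per hour, as quoted for PWS

def monthly_cost(instances: int, gb_per_instance: float, hours: float = 730.0) -> float:
    """Approximate monthly hosting cost for an app's memory footprint."""
    return instances * gb_per_instance * RATE_PER_GB_HOUR * hours

# Two 1 GB instances running around the clock for a month:
# 2 * 1 * 0.03 * 730 ≈ 43.80 USD
```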

What are some of the things that we allow you to do with this platform? What are some of the benefits? As I mentioned, you get the 60-day free trial, and in the 60-day free trial you get 2 gig of application memory, so it can run applications that consume up to 2 gig of aggregate memory. You can have up to 10 application services from the free tier of the Marketplace. This means you get to play with quite a lot of capability at very low cost, very, very easily.

Aside from pushing your app, which is, yeah, nice and easy and something you want to do, what else do we do with this? Well, we can [elect 05:17] to have performance monitoring. In the developer console, which you can log into, you can see all your spaces, your applications and their status, how many services are bound to them etc. You can drill into them in more detail to see what they’re actually consuming. If you want even more detailed monitoring, so inside-the-application type monitoring, you can use New Relic for that, and that’s a service that’s offered in the Marketplace. It has a zero-touch configuration. For Java applications, you can basically [crank and bind 05:47] your New Relic service to your app very, very simply with basically no configuration. It’s amazing. For other languages like Ruby or JavaScript, you have to have the New Relic [agent 05:56] running, but it’s still a pretty trivial process to get it up and going.

Now, once your application is running, you probably want to make sure it keeps running. A normal desire to have. We have this thing called The Health Manager. This is an automated system that monitors your application for you, and if your application instances exit due to an error or something happens where the number of instances is less than the number you actually created when you did your CF push or CF scale, the platform will automatically recover those particular instances for you. Obviously, the log will be updated to indicate that that took place. If you set up an application and you have 3 instances running, it will run them for you. If one of them fails, it will spin up another one for you and you’re good to go.
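The behaviour described here, comparing a desired instance count against what is actually running and starting replacements to close the gap, is a reconciliation loop. A purely illustrative sketch of the idea (not PWS internals):

```python
from typing import Callable, List

def reconcile(desired: int, running: List[str],
              start_instance: Callable[[], str]) -> List[str]:
    """Bring the running-instance list up to the desired count.

    `start_instance` stands in for the platform launching one new
    application instance and returning its id; in a real health
    manager this loop runs continuously against observed state.
    """
    instances = list(running)
    while len(instances) < desired:
        instances.append(start_instance())
    return instances
```

So with 3 instances desired and one crashed (2 still running), a single replacement gets started and the app is back at full strength.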

Another capability is, of course, the Unified Log Streaming. One of the features of Pivotal CF is the ability to bring logs together from multiple application instances into the one place. In PWS, we do the same thing. We have this streaming log API that will send all the information, from all the components, for your application to the one location. You can tail this interactively yourself or you can use a syslog drain to a third-party tool you may like. Tools like Splunk or Logstash etc. The logs are all scoped by a unique application ID and an instance index, so you can correlate across multiple events and see how they all fit together, which is nice.

The system also has a really nice Web console, which is built for really agile developers to use. You jump in, you can see what applications are running, where, who started them, what’s going on. You can even connect your spaces with your CI pipeline to make sure that builds are going into the correct life cycle stage and being deployed appropriately as well. You can also see quotas and billing across your spaces because you have access to organizations and spaces as well. We’ll talk about organizations and spaces in another episode.

What about from a services perspective? What are some of the services that we have available in the Marketplace? Well, it’s growing all the time. It’s a movable feast, as I like to say. We have a number. I’ll just call out a few highlight ones. Things like Searchify for search, BlazeMeter for load testing, Redis Cloud, which is an enterprise-class cache. We talked about caches a little while ago. ClearDB, which is a MySQL database service. We have Searchly for ElasticSearch. We have Memcached Cloud. We have SendGrid for sending e-mail, MongoLab for MongoDB as a service, New Relic obviously for access to performance criteria. RabbitMQ through CloudAMQP, ElephantSQL for PostgreSQL as a service, etc, etc. A good selection of services is available there for you to use.

It’s interesting seeing what people use this for. Often, customers use this for a dev and test experience or to get their developers up to speed with using platform as a service. A company called [Synapse 08:49], a small, or young I should say, Boston-based company that builds software-as-a-service web and mobile apps for consumer startups, decided to use Pivotal Web Services for their platform because they wanted to have the same developer experience through dev, test and production, and it completely suited their needs. It gave them the flexibility in terms of how they built the application, it gave them the sizing they needed etc. The other nice thing that they got out of it was the ability to deploy their particular application both in the public Cloud or in the private Clouds that customers wanted to run. What they realized is that if they had customers who said, ‘Hey we really like your particular application, we like your service, but we want to run it in-house for whatever reason,’ they had a very simple and easy way to say, ‘Hey, you just run Pivotal CF internally, we bring our code across, and it will work fine.’ A really interesting example there.

If you’ve ever wanted to have a play with Pivotal CF, you wondered how it looks, and what the experience is from a developer perspective, then Pivotal Web Services or PWS is the place to go. That’s run.pivotal.io. There’s a 60-day free trial. You don’t have to enter your credit card when you sign up for the free trial. You can have a bit of an experiment and see how you go. Hopefully you’ll be able to make something pretty cool and until then, talk to you later, and keep on building.

Speaker 1:

Thanks for listening to the All Things Pivotal podcast. If you enjoyed it, please share it with others. We love hearing your feedback, so please send any comments or suggestions to podcast@pivotal.io.



CloudFoundry

Why Services are Essential to Your Platform as a Service

27 Jan 2015

For most organizations, there is a constant battle between the need to rapidly develop and deploy software while effectively managing the environment and deployment process. As a developer, you struggle with the ability to move new applications to production, and regular provisioning of support services can take weeks, if not months. IT operations, on the other hand, is balancing the backlog of new services requests with the need to keep all the existing (growing) services up and patched. Each side is challenged with meeting the needs of an ever-changing business.

What are Services?

A service is defined as “an act of helpful activity; help, aid.” A service should make your life easier. Pivotal believes that Platform as a Service (PaaS) should make administrators’ and developers’ lives easier, not harder. Services available through the Pivotal Cloud Foundry platform allow resources to be easily provisioned on demand. These services are typically middleware, frameworks, and other “components” used by developers when creating their applications.

Services extend a PaaS to become a flexible platform for all types of applications. Services can be as unique as an organization or an application requires. They can bind applications to databases or allow the integration of continuous delivery tools into a platform. Services, especially user-provided services, can also wrap other applications, like a company’s ERP back-end or a package tracking API. The accessibility of Services through a single platform ensures developers and IT operators can truly be agile.

Extensibility Through a Services-Enabled Platform

The availability of Services within the platform is one of the most powerful and extensible features of Pivotal Cloud Foundry. A broad ecosystem of software can run and be managed from within the Pivotal Cloud Foundry platform, and this ensures that enterprises get the functionality they need.

Robust functionality from a single source reduces the time spent on configuration and monitoring. It also has the added benefit of improving scalability and time-to-production. Services allow administrators to provide pre-defined database and middleware services, and this gives developers the ability to rapidly deploy a software product from a menu of options without the typical slow and manual provisioning process. This is also done in a consistent and supportable way.

Managed Services Ensure Simple Provisioning

One of the features that sets Pivotal Cloud Foundry apart from other platforms is the extent of the integration of Managed Services. These Services are managed and operated ‘as a Service,’ and this means they are automatically configured upon request. The provisioning process also incorporates full lifecycle management support, like software updates and patching.

Automation removes the overhead from developers, who are often saddled with service configuration responsibility. It makes administrators’ lives easier and addresses security risks by standardizing how services are configured and used—no more one-off weirdness in configuration. The result is true self-provisioning.

A few of the Pivotal Services, like RabbitMQ, are provided in a highly available capacity. This means that when the Service is provisioned it is automatically clustered to provide high availability. This relieves much of the administrative overhead of deploying and managing database and middleware Services, as well as the significant effort of correctly configuring a cluster.

Broad Accessibility With User-Provided Services

In addition to the integrated and Managed Services, Pivotal Cloud Foundry supports a broad range of User-Provided Services. User-Provided Services are services that are currently not available through the Services Marketplace, meaning the Services are managed externally from the platform, but are still accessible by the applications.

The User-Provided Services are completely accessible by the application, but are created outside of the platform. This Service extension enables database Services, like Oracle and DB2 Mainframe, to be easily bound to an application, guaranteeing access to all the Services needed by applications.

Flexible Integration Model

Access to all services, both managed and user-provided, is handled via the Service Broker API within the Pivotal Cloud Foundry platform. This module provides a flexible, RESTful API and allows service authors (those that create the services) to provide self-provisioning services to developers.

The Service Broker is not opinionated. It can be crafted to suit the unique needs of the environment and organization. The Service Broker functionality ensures the extensibility of the platform and also allows administrators to create a framework that developers can operate within, supporting agile deployments. This framework provides consistency and reproducibility within the platform. It also has the added benefit of limiting the code changes required by applications as they move through the development lifecycle.
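Concretely, a v2 service broker is a small REST service: the platform calls `GET /v2/catalog` to discover what the broker offers, then provisioning and binding endpoints to create instances and credentials. A minimal sketch of the catalog document such a broker might serve; the service and plan names here are hypothetical, not a real Pivotal offering:

```python
import json
import uuid

def catalog() -> dict:
    """Shape of a GET /v2/catalog response in the Cloud Foundry
    Service Broker API (v2). Ids and names are hypothetical."""
    return {
        "services": [{
            "id": str(uuid.uuid5(uuid.NAMESPACE_DNS, "example-db")),
            "name": "example-db",
            "description": "A hypothetical database service",
            "bindable": True,   # apps may bind and receive credentials
            "plans": [{
                "id": str(uuid.uuid5(uuid.NAMESPACE_DNS, "example-db-small")),
                "name": "small",
                "description": "A single small instance",
            }],
        }]
    }

# The platform lists everything in this document in the Marketplace;
# developers then self-provision by choosing a service and a plan.
catalog_json = json.dumps(catalog())
```

Because the broker is just HTTP endpoints returning documents like this, an author is free to put any logic behind provision and bind calls, which is exactly the flexibility described above.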

As an example of the customization capabilities, a customer created a Service Broker that not only adjusts the network topology when an application is deployed to an environment, it also adjusts the security attributes. An application could have fairly open access to an organization’s raw market material, but access to a core billing system would be limited and privileged.

Security Integrated into Each Service

The Service Broker gives administrators the ability to define access control of services. Service-level access control ensures developers and applications only have access to the environment and service necessary. When a Service is fully managed, credentials are encapsulated in the Service Broker. The result is that passwords no longer need to be passed across different teams and resources, but instead are managed by a single administrator.

Finally, the Service Broker provides full auditing of all services. Auditing simply keeps track of what Services have been created or changed, all through the API. This type of audit trail is great if you are an administrator and trying to figure out who last made changes to a critical database service.

Self-Service for the Masses

Managed Services are available for download from Pivotal Network, and are added by an administrator to the platform. All Services available within the platform are accessible to developers via the Marketplace. The Marketplace allows self-provisioning of software as it is needed by developers and applications.

Services from Pivotal like RabbitMQ, Redis, and Pivotal Tracker, as well as popular third-party software, like Jenkins Enterprise by CloudBees and Datastax Enterprise Cassandra, are available immediately. The Marketplace provides a complete self-service catalog, speeding up the development cycle.

Services

Services View in Pivotal Network

The breadth and availability of Services ensures that operators provide development teams access to the resources that they need, when they need them. A developer, who is writing a new application that requires a MySQL database, can easily select and provision MySQL from the Marketplace. The platform then creates the unique credentials for the database and applies those to the application.
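As a sketch of how such generated credentials typically reach an application on Cloud Foundry, bound-service credentials are exposed to the app through the VCAP_SERVICES environment variable as JSON. The payload below is illustrative only; the service label and credential field names vary by service broker.

```python
import json

# Illustrative VCAP_SERVICES payload; on the platform this JSON arrives in
# the VCAP_SERVICES environment variable. Labels and fields vary by broker.
SAMPLE_VCAP_SERVICES = """
{
  "p-mysql": [{
    "name": "orders-db",
    "credentials": {
      "hostname": "10.0.0.17",
      "username": "generated-user",
      "password": "generated-pass",
      "name": "orders"
    }
  }]
}
"""

def mysql_credentials(vcap_json):
    """Return the first MySQL-style credentials block, or None."""
    for label, instances in json.loads(vcap_json).items():
        if "mysql" in label.lower():
            return instances[0]["credentials"]
    return None

creds = mysql_credentials(SAMPLE_VCAP_SERVICES)
print(creds["hostname"])  # 10.0.0.17
```

The application never sees a shared, long-lived password; it reads whatever the platform generated for its own binding.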

Rapid Time to Market with Mobile Services

The expansive Services catalog extends to the Pivotal Mobile Services, announced in August 2014. These mobile backend services allow organizations to rapidly develop and deploy mobile applications. The accessibility of mobile backend services through the Pivotal Cloud Foundry platform ensures that developers are able to easily build new mobile applications leveraging capabilities such as push notifications and data sync.

Essentials of a Services-Enabled Platform

Developers want to quickly deploy a database or middleware service, without a slow and manual provisioning process. IT operators want to be able to quickly meet the growing requests for new services, while also securely managing a complex environment. Provisioning Services through a PaaS is the natural solution to balancing the needs of the developers and IT operators, all while meeting the needs of the business.

A PaaS should provide simple access to full lifecycle management for services—from click-through provisioning to patch management and high availability. At Pivotal we have seen tremendous success with the release of services based on CloudBees and DataStax. The Pivotal Services ecosystem continues to grow, as does the growing capabilities of the Service Broker. This growth ensures the Pivotal Cloud Foundry platform will continue to meet the needs of organizations.


PaaS

Azure Resource Manager 2.5 for Visual Studio

20 Nov , 2014  

Previously known as Azure Resource Manager Tooling Preview

The Azure Resource Manager 2.5 for Visual Studio enables you to:

  • Create an application using the Azure Gallery templates.
  • Create and edit Azure Resource Manager deployment templates (for example a web site with a database) and parameter files (for example you can have different settings for development, staging and production).
  • Create resource groups and deploy templates into these to simplify the creation of resources.

The Azure Resource Manager enables you to create reusable deployment templates that declaratively describe the resources that make up your application (for example an Azure Website and a SQL Azure database). This simplifies the process of creating complex environments for development, testing and production in a repeatable manner. And it provides a unified way to manage and monitor the resources that make up an application from the Azure preview portal.
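To make the declarative idea concrete, here is a minimal sketch of what such a deployment template might look like. The schema URL, apiVersion, and resource names are illustrative, not taken from the generated project:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "webSiteName": { "type": "string" }
  },
  "resources": [
    {
      "apiVersion": "2014-06-01",
      "type": "Microsoft.Web/sites",
      "name": "[parameters('webSiteName')]",
      "location": "[resourceGroup().location]"
    }
  ]
}
```

The template describes *what* should exist; the Azure Resource Manager service works out how to create or update it.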

You are able to create an application using the Azure Gallery Templates and define and manage your Azure resources using JSON templates. This makes it easier for you to quickly set up the environment you need to dev/test your application in Azure. The two key features are the Visual Studio integration with the Azure Gallery and the ability to create and edit Azure Resource Manager deployment templates.

We’ll get started using this tooling by walking through a scenario. First we’ll create a web site based on a Cloud Deployment project and we’ll look at what artifacts are added to your solution when you create your project. Then we are going to create and deploy the Azure resource group and resources we need for our application, which will include publishing of our application.

This tooling is available in the Azure SDK 2.5 for .NET [download for VS 2013 | VS 2015 Preview]

Create a website with a Cloud Deployment Project

With the Azure Resource Manager Tooling we’ve made it possible to create Visual Studio applications using the Azure Gallery templates. You can find these templates by navigating to File -> New Project -> Cloud -> “Cloud Deployment Project” after installing the SDK. In the screenshot below I’ve done that and called my project “MyAzureCloudApp”.

CreateCloudProject

Once you create a Cloud Deployment Project, you will find a list of the available templates. We’ve made a couple of the more popular Azure Gallery templates available.

SelectAZTemplates

  • Website – This template will create an Azure Website (with automatic publishing and Application Insights monitoring).
  • Website + SQL – This template will create an Azure Website (with automatic publishing and Application Insights monitoring) and a SQL Azure Server with database.
  • Website + SQL + Redis – This template will create an Azure Website (with automatic publishing and Application Insights monitoring), a SQL Azure Server with database, and a Redis Cache.

While this release only has three templates available, going forward, more templates will be added for common application scenarios that use other Azure features including networking, storage, virtual machines and more.

For this walkthrough, we will select the Website + SQL template.

Your Cloud App Solution

After selecting an ASP.NET application in the wizard, you will find the ASP.NET Website Project and a new project type – an Azure Resource Manager Deployment project called MyAzureCloudApp.Deployment. The Deployment project includes: a reference to the web project, a deployment template file (WebSiteDeploySQL.json), a template parameter definition (WebSiteDeploySQL.param.dev.json), and a PowerShell script (Publish-AzureResourceGroup.ps1) that can be used to deploy your resources to Azure.

SolnExpExpanded

Let’s take a look at each of these artifacts in your solution:
The reference MyAzureCloudApp connects the web project to the deployment project and has properties for the package location, web deploy package file and actions to Build and Package.

The WebSiteDeploySQL.json file is the deployment template file where your resources are defined. This file contains all the resources that we are going to deploy later. As you might imagine, since we selected the Website + SQL template, this file contains the definitions needed to create a website and a SQL Azure database. We’ll take a closer look at this later.

The WebSiteDeploySQL.param.dev.json contains the values for the non-default parameters needed by the deployment template file. For example, the name of the website is a parameter and that value would go in this file.
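As a rough sketch (the exact schema and property names in the generated file may differ), a parameter file pairs each template parameter with a concrete value:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2014-04-01-preview/deploymentParameters.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "webSiteName": { "value": "MyAzureCloudApp" },
    "webSiteLocation": { "value": "West US" }
  }
}
```

Because the values live outside the template, you can keep one template and swap in different parameter files for development, staging, and production.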

Create My Azure Resources – Using the Dialog

There are a couple of ways to deploy your resources and resource group to Azure. The simplest method is to right click on the Deployment Project and select Deploy / New Deployment…

2014-11-13_13h08_13

You may not see the Select a deployment type dialog, as it only appears with the proper configuration. I’ll talk about the Environment option later in this walkthrough; for now, select Resource group, which will bring up the Deploy to Resource Group dialog.

EnvironmentvsRG

DeploytoRg

We need to create an Azure Resource Group, which will contain the logical grouping of all of the resources we need for our web application. To do that, click the Resource group combo box and select “Create New”.

CreateNewRG

Name your Azure Resource Group whatever you want (I’ve used the default which is based on the solution name “MyAzureCloudApp”) and give it a location. Click the Create button when you are ready and your Azure Resource Group will automatically be provisioned for you (but with no resources yet).

Make sure you’ve selected a Deployment template (WebSiteDeploySQL.json), Template Parameter file (WebSiteDeploySQL.param.dev.json), and a storage account as I’ve done above. If you don’t already have a storage account, you will need to create a storage account before continuing.

Next click the “Edit Parameters” button. We are going to define our website name, web hosting plan name and website location so that it looks something like I’ve done here:

ParamtersBlur

The red exclamation marks indicate required parameters that we’ll have to add. Below is a table with information on each parameter.

  • dropLocation: An auto-generated location for the web deployment package. The dropLocation is a folder in the Azure storage account that was entered in the “Deploy to Resource Group” dialog, where the webSitePackage will be copied.
  • dropLocationSasToken: An auto-generated security key.
  • webSitePackage: The name of the web deployment package. The deployment project’s reference properties have more information on the dropLocation and the webSitePackage.
  • webSiteName: The name of your website.
  • webSiteHostingPlanName: The Web Hosting Plan name. A hosting plan represents features and capacity settings that you can share across more than one website.
  • webSiteLocation: The region where your website will reside, such as “West US”, “Central US”, or any of the valid website regions.
  • webSiteHostingPlanSku: Defaults to “Free”; this is the website’s pricing tier (other options are Shared, Basic, and Standard).
  • webSiteHostingPlanWorkerSize: Defaults to zero. This setting describes the size of the virtual machine that runs your website (0=small, 1=medium, 2=large). In this example, workerSize has no effect, since we chose the “Free” sku; for it to apply you would have to pick the Basic or Standard sku and then select the appropriate workerSize.
  • sqlServerName: The name of the Azure SQL Server.
  • sqlServerLocation: The location of the Azure SQL Server.
  • sqlServerAdminLogin: The administrator name for the SQL server.
  • sqlServerAdminPassword: The password for the administrator.
  • sqlDbName: The name of the database created in the server.
  • sqlDbCollation: The SQL Server collation for the database.
  • sqlDbEdition: The Azure SQL Database service tier.
  • sqlDbMaxSizeBytes: The database size limit, in bytes.
  • sqlDbServiceObjectiveId: The edition performance level.

The “Save Passwords” check box will store the password into the JSON file, but it will be stored as plain text so you need to be extra diligent with this option.

After you fill out these parameters, click the “Deploy” button (or, if you have edited the parameters as in this example, “Save” and then “Deploy”), and your resource group and resources will be deployed to Azure! In this case we have deployed a website with your custom web application inside an Azure resource group (check it out in the new and enhanced Azure Portal):

AzurePortal
After you have deployed your resources, you’ll see that it has written the parameter values back to the WebSiteDeploySQL.param.dev.json file like so:

ParamDevFull

Save the JSON file so that the changes are persisted. The WebSiteDeploySQL.param.dev.json will have “null” for the parameter values if you haven’t deployed the resource group or edited the parameters.

ParamDevEmpty
The deployment output is sent to the Azure Provisioning window.

DeployOutputWindow

So now that we’ve published, let’s take a look at another way that you can create an Azure Resource Group and deploy Azure resources.

Deploy My Azure Resources – Using PowerShell

The second way to create a resource group in Azure is to run the PowerShell script provided as part of the Deployment project (Publish-AzureResourceGroup.ps1). The script requires Azure PowerShell version 0.8.3 or later.

The script we give you uses the Azure PowerShell cmdlets to create an Azure Resource Group if one does not exist (which is specified by the –Name parameter in the script). The script passes along the WebSiteDeploySQL.json and WebSiteDeploySQL.param.dev.json files to the Azure Resource Manager service which figures out exactly what Azure resources need to be deployed.
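A rough sketch of the core of such a script is shown below. This is not the generated script itself; the cmdlet names are from the 0.8.x-era Azure PowerShell module, and the resource group name and paths are illustrative:

```powershell
# Switch the session into Resource Manager mode (0.8.x-era Azure PowerShell).
Switch-AzureMode AzureResourceManager

# Create the resource group if needed and deploy the template with its
# parameter file; Azure Resource Manager works out what to provision.
New-AzureResourceGroup -Name "MyAzureCloudApp" `
    -Location "West US" `
    -TemplateFile ".\WebSiteDeploySQL.json" `
    -TemplateParameterFile ".\WebSiteDeploySQL.param.dev.json" `
    -Verbose
```

The -Verbose flag is what produces the detailed provisioning output mentioned later in this walkthrough.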

Before running, you’ll need to ensure that the WebsiteDeploySQL.param.dev.json file contains the correct name of your website, hosting plan, web site location, and SQL parameters. Make sure you save your changes.

Notice below I’ve changed the website name to “mattsAwesomeSite”. You’ll want to change your website name as well; I did this to make sure that it creates a new website for me.

ParamDevFull

Just to show something off here, go ahead and double click on the Publish-AzureResourceGroup.ps1 file to bring it up in the document window in Visual Studio (in case you haven’t seen it yet, we’ve added coloring for PowerShell scripts). After that, you can right click on the Publish-AzureResourceGroup.ps1 file and select “Open with PowerShell ISE”.

OpenPSISE

This will launch the PowerShell ISE. You can run the PowerShell script at this point. But just to make sure you have an error free experience, double check a few things:

WPIPowerShell

  • If you haven’t done much with PowerShell before, then you probably need to set the execution policy to allow PowerShell scripts to run. To do this you need to run “Set-ExecutionPolicy RemoteSigned” from the PowerShell ISE to allow remote signed scripts like ours to execute (note: you must run this command as an administrator). You’ll be prompted with a dialog to confirm that you want to change your policy settings.
  • Make sure you’ve run the Azure PowerShell command “Add-AzureAccount” to login your Azure account to the current PowerShell session. A dialog will display and you’ll be prompted to enter in your Azure credentials.

Go ahead and run the PowerShell script now: “> .\Publish-AzureResourceGroup.ps1”
You will be prompted for the storage location and the location of the resource group, which is any valid region (I’ve picked “West US”).
EnterPSLocation

After a few seconds, the PowerShell script should complete. The script also provides some great verbose output, so you can clearly see what resources were made, and what errors (if any) were encountered:

When the script is complete, you’ll find that your Azure resource group and the Azure resources you specified have been created for you (just as we did in the previous example) and you can view these in the Azure Portal.

Deploying to Environment

EnvironmentvsRG2

For more information, see the Azure documentation on Environment deployments.

Wrapping it up

We’ve now introduced you to the Cloud Deployment Project, shown how you can create projects in Visual Studio based on the Azure Gallery Templates, and how you can define and deploy your Azure resource groups and resources using our tooling. Azure Resource Groups really are a great way to set up the environment you need to host your application. We have a survey to help gather feedback for current and future improvements at http://www.instant.ly/s/TQnbZ.

The post Azure Resource Manager 2.5 for Visual Studio appeared first on Platform as a Service Magazine.


AWS

Amazon ElastiCache Now Supports T2 Cache Nodes

11 Set , 2014  

You can now use T2 cache nodes with Amazon ElastiCache for both Memcached and Redis engines. T2 cache nodes are a new low-cost cache node family that is ideal for test and development environments, as well as small workloads. They are up to 70% cheaper per-GB than comparable T1/M1 cache nodes. Customers eligible for the AWS Free Usage tier can start for free and use up to 750 hours per month of a t2.micro cache node for a year.

Pricing for T2 cache nodes starts at $0.008/hour (effective price) for 3 year Heavy Utilization Reserved Instance and $0.017/hour for On-Demand usage. These pricing examples are for the US East (N. Virginia) AWS Region. For more information on pricing, visit the Amazon ElastiCache pricing page.

To learn more about T2 cache nodes, please see the Amazon ElastiCache Cache Node Types table.

CloudFoundry

Case Study: How shrebo, the Sharing Economy Platform, Innovates with Cloud Foundry

3 Set , 2014  

In this post, former banking technology architect and current CTO, Patrick Senti, allows us to pick his brain and learn about his experiences with Cloud Foundry and Python/Django.

As background, Senti is an award-winning architect and programmer who is also certified in machine learning. He brings over 20 years of software experience from working on high profile projects at companies like IBM, SAS, and Credit Suisse.

Recently Senti founded shrebo and is currently CTO. Much like Uber, AirBnB, and Zipcar, his company uses cloud platforms to create new ways of sharing resources.  shrebo recently won an innovation award from Swiss Federal Railways (who is also a customer) and provides an entire web-services-based SaaS platform—a set of APIs for application developers to build with. Shrebo is the cloud platform for the sharing economy—allowing you to build your own AirBnB or Lyft on shrebo.

The functionality includes search, booking, scheduling, provisioning, payment, and social functionality endpoints surrounding the entire process of sharing resources. With their services, third parties can cost effectively and quickly create sharing applications in mobile, social, local and other contexts to help automate logistics, optimize machinery, increase asset and fleet utilization, and run services for sharing any type of resource like cars, bikes, or parking spaces.

In three short sentences, what defined your needs for a platform like Cloud Foundry?

  1. Architecting a start-up with a big vision means we needed stability at the ground level.
  2. We need developers to focus on solving problems and coding instead of technical details.
  3. While stacks like Amazon were interesting, we needed to be higher up the stack to make service distribution easier.

If you could summarize, what would you say the outcomes have been so far?
This PaaS platform reduces risk and creates a lot of efficiency and effectiveness. Cloud Foundry is a real blast of an experience as a developer. We essentially package the application along with the services manifest and press a button. Everything else is done by the platform. This type of thing used to take days or weeks. Now, it takes minutes.

OK, let’s cover some background. How did you, as an entrepreneur, decide to start this company?
Well, it was really three ideas that came together.

A few years back, I shared a sailboat with other people. It was always difficult to manage the logistics because smartphones were not as common. However, the idea of making the process easier…that stuck with me. I thought, “It would be really useful to have a sharing service for any private resource.”

Two, it’s easy to see how more and more resources are shared between people across companies. If you work at the same company, everyone is on a shared calendar; so, it’s not needed. When people come together across companies, like in a start-up environment or if you share some large asset like trains, you need a better way to share. It seemed like a good idea to have an API-based platform to help build solutions like this.

Lastly, I have done a lot with analytics. I realized there could be process improvement and optimization if people and companies could successfully share information and results. For example, we cooperate with Swiss Federal Railways, who awarded us a winner for innovation as a start-up, and they operate tons of trains and cars each day. The trains are very overused in many city areas. When they are full, commuters don’t know where to sit. In fact, this is a sharable resource. So, my “aha” moment here was about connecting people via a smartphone app—we could allow them to see and chose travel based on load and occupancy. They could make a conscious decision to board a certain train. We could really help improve this if we use predictive analytics. So, we do that too—this is much like managing meeting rooms but on a really large scale.

Could you tell us more about shrebo as a business and how Pivotal and Cloud Foundry fit?
In a nutshell, companies use our stuff to build their own end-user sharing applications.

We provide platform software services for any sharing solution. If you are familiar with car sharing, like a renting process or a smartphone app for scheduling a car like Zipcar or Uber in the U.S., our platform supports all the functionality you would need to build such an app through a set of developer APIs. The shrebo platform allows companies of any size to more easily and cost effectively develop their own sharing applications and online services and mix them with other mobile, social, and local apps.

In our initial architecture process, we evaluated several PaaS providers. Ultimately, we chose App Fog, who is now owned by CenturyLink. They run Cloud Foundry with Python, Java, Node.js, PHP, and Ruby. Our technology framework allows us to elastically provision services through App Fog’s services API. Also, Pivotal provides us with Redis as a data structure service and RabbitMQ as a message broker service.

How will your customer experience be improved through our technologies?
Well, our app is not directly visible through some user interface for end users; it is more of a white-label solution. However, we provide app and data services that enable others to provide end-user applications. So, those companies can rely on our API to meet SLAs cost effectively. Eventually, they need their apps to perform or users will go somewhere else. If we fail, so do they.

In the context of us powering other technologies, we use Cloud Foundry and AppFog to provision services much faster, at a lower cost, and with greater scale as real-time demand shifts. This PaaS is certainly more cost effective than Amazon, and it’s less complex and risky than traditional hosting providers who make us build the stack by ourselves. It’s also very automated. All of these things allow us to deliver code, deploy updates, and fix issues on our platform quickly.

Do you have any examples where Cloud Foundry’s Elastic Runtime Service delivered something unexpected?
Yes, absolutely. At one point in time, the AppFog infrastructure needed an upgrade. They had some planned downtime and planned to make changes to one data center at a time across Singapore, Ireland, and the U.S. With Cloud Foundry, we had zero down time even though they did. As they were bringing down a data center to make updates, we set up new instances on the fly and shifted workloads. As they moved around the world, we automatically migrated across data centers. We didn’t have any downtime even though they did! Before Cloud Foundry, this was really not possible. We could increase and decrease instances on the fly. Quite amazing.

How does Cloud Foundry compare to other platforms you have worked with?

Provisioning is so different. So, I spent a lot of time in the financial services industry—for about 10 years. We primarily worked with traditional hosting platforms. You get a Linux machine, then you do an install. Along the process, a group of experts all touch it before it is ready to deploy code. Cloud Foundry is totally different.

Like I said before, Cloud Foundry is a real blast of an experience as a developer. This type of thing used to take days or weeks. Now, it takes minutes. As a CTO, this is the experience every developer has been looking for over the past 10 years. We get the power of scripting—an event can trigger just about anything we need.

What other overarching Cloud Foundry capabilities are top of mind?
Flexibility and agility are top of the list. I see extensibility as a major benefit too. The whole architecture is built with open source contributions in mind. Adding new stacks is encouraged.

How does Cloud Foundry and AppFog support your stack?
Cloud Foundry is the PaaS and AppFog integrates and operates the PaaS and IaaS. They also provide a Django/Python stack on Cloud Foundry. We didn’t have to do anything to get this set up. It was provided when we turned on our account. This is the way development should work!

Could you tell us a bit about the other services you use on Cloud Foundry?

We use Redis and RabbitMQ—these are both very powerful services for data structure and messaging middleware, and we have two main use cases. We use MySQL for persistence, also provisioned and managed by Cloud Foundry.

First, we provide elements of a web app and APIs as a service to the outside world. The runtime is a traditional Python process intended for short-lived, synchronous workloads. Second, there are many asynchronous processes that trigger over time. For example, the website generates a lot of events. If someone books a resource, confirms a booking, or cancels, these are all events. Workflow workloads run asynchronously in the background and do various things like send an SMS or email. There is a separation of concerns between the two types of workloads: short-lived web requests and longer-lived background tasks. We use RabbitMQ to decouple these sub-systems, and it is used on both sides.

Our RabbitMQ uses Redis as an in-memory data store to transport the messages between the two areas—the web APIs and the messaging, which is based on Celery as well. In addition, we use Redis as a distributed lock instance—as soon as a reservation is made, we want to ensure no one else can grab the resource. Since we have four or five instances on the web API side, we run into a distributed lock problem. Redis solves this problem for us. So, RabbitMQ is the broker, Redis powers the state of notification events, and it also acts as a data store to log transaction events—for all events and messages across the two workloads.
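As a rough illustration of that distributed-lock pattern, the sketch below implements the acquire/release logic against a minimal in-memory stand-in for Redis. A real deployment would issue the equivalent SET key value NX PX command through a Redis client; all names, keys, and TTLs here are illustrative.

```python
import time
import uuid

# Minimal in-memory stand-in for the two Redis operations the pattern needs:
# SET key value NX PX <ttl> to acquire, and a check-owner-then-delete to
# release. Against a real Redis you would issue the same commands through a
# client library.
class FakeRedis:
    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def set_nx_px(self, key, value, ttl_ms):
        now = time.monotonic()
        entry = self._data.get(key)
        if entry is not None and entry[1] > now:
            return False  # lock is already held and has not expired
        self._data[key] = (value, now + ttl_ms / 1000.0)
        return True

    def release(self, key, value):
        entry = self._data.get(key)
        if entry is not None and entry[0] == value:  # only the owner releases
            del self._data[key]
            return True
        return False

def try_reserve(store, resource_id):
    """Try to take the reservation lock; return an owner token, or None."""
    token = str(uuid.uuid4())
    if store.set_nx_px("lock:" + resource_id, token, ttl_ms=5000):
        return token
    return None

store = FakeRedis()
first = try_reserve(store, "boat-42")   # this API instance wins the race
second = try_reserve(store, "boat-42")  # a concurrent instance is refused
print(first is not None, second)        # True None
```

The TTL matters: if an API instance crashes while holding the lock, the reservation key expires on its own instead of blocking the resource forever.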

The workflows, events, and messages store data in an analytics warehouse. At this point, we are using Hadoop, but we haven’t deployed it at a large scale quite yet. We are also looking at Storm.

cta-casestudy-shrebo

While you are a CTO, you were a developer at one point and probably get your hands dirty at times. What makes Cloud Foundry so cool for developers?
Cloud Foundry has a command line interface, and AppFog has built an API. As I mentioned earlier, this gives developers the power of scripting. We spend more time coding and less time dealing with the foundational platform. So, all deployment tasks—infrastructure, new instances, deploying code, unit testing, integration testing, screen tests—all of this can be scripted. We push a button to trigger these processes. While this is possible with any modern infrastructure, Cloud Foundry really gives us the ability to stay at a high level in the stack and avoid getting into details.

We also really like the separation between applications and code. When we write the deployment manifest and need services like MySQL or Redis, we just tell the system we need an instance of it in a few lines of configuration without having to worry about the config of the underlying service. The underlying stuff is all handled by the platform.
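A hypothetical manifest along those lines might look like the following; the application and service instance names are made up, and the services listed would be whatever instances the team has provisioned from the Marketplace:

```yaml
# Hypothetical Cloud Foundry manifest.yml: the entries under "services" are
# service instance names, not service configurations.
applications:
- name: shrebo-api
  memory: 512M
  instances: 4
  services:
  - shrebo-mysql
  - shrebo-redis
  - shrebo-rabbitmq
```

Binding is just a list of names; the configuration and credentials of each underlying service are handled by the platform.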

We have gotten off the ground very quickly. In fact, Facebook has this concept of having new hires deploy code the same day they start. Now, we have the same type of environment—one of our new developers can deploy code the same day they start with shrebo. This is one of the most exciting facts about Cloud Foundry—we can have a small team of 4 people work on the app and run the infrastructure, and we can scale people this way too. Five years ago, it would have required 10 or more people to run this same infrastructure and do development.

In terms of monitoring, are you using any of Cloud Foundry’s built in capabilities?

Well, we aren’t using the Cloud Foundry health check services API directly. We decided to use New Relic as the application-level performance metrics monitor. From outside, we use Pingdom to verify uptimes. New Relic is extremely easy to plug into Cloud Foundry applications—they have a Python extension that works out of the box with Cloud Foundry. As described, we use RabbitMQ to do app-level event processing, but we also use it to send New Relic custom events to their services infrastructure.

As a CTO/CIO level person with a financial services background, you have a clear handle on ROI, C-level metrics, and risk management. What about Cloud Foundry is powerful from this perspective?
If I talk about this from my banking background, I can easily see how Cloud Foundry produces efficiency and effectiveness—this helps reduce costs and risk. Let me explain.

When we release code without Cloud Foundry, it might take 4-6 months at a traditional bank with more complexity and legacy systems. With Cloud Foundry, you can reduce that time to days or weeks. There is an order of magnitude level of improvement in terms of these process metrics. There are fewer people involved, it is easier for developers to fix problems, we can scale dynamically without emailing anyone—this all reduces cost and time, and as a result increases the productivity of the whole organization.

In terms of risk, Cloud Foundry clearly reduces operational risk. This is largely related to the lower number of people that need to be involved. For example, if an app shows signs of crashing, we don’t have to involve a team to resolve. A developer can fix the code, and an automated, daily build can update the problems or a button can redeploy without five people having to coordinate on a conference call or group chat. As well, I’ve described how the scale and uptime scenarios work—this reduces downtime and ensures SLA support. This is particularly important for mission-critical, revenue generating apps. So, Cloud Foundry has a direct impact on the risk of our top and bottom line.

Would you mind sharing an architecture slide on your stack?
Sure, see below. We use HTML5/Bootstrap, JSON/REST, Redis, MongoDB, RabbitMQ, MySQL, and Python/Django for the online services or real-time runtime. There is some overlap here with our batch or periodic workloads. This part of the architecture is another conversation altogether. However, we use R, Elasticsearch, Storm, Hadoop, and much more. The infrastructure includes Nginx, Varnish, Gunicorn, Cloud Foundry, Cloudinary, Searchly, Amazon AWS, Mailgun, and GlobalSMS.

Thank you so much, is there anything you’d like to add from a personal perspective? What do you like to do in your off time? Anything you’d like to accomplish before you leave this little rock we call Earth?
Sure. In my spare time I like to spend time with friends and family. I also very much enjoy riding my bike in the woods, or sailing that very boat we used to share, which is kept on a small lake in Switzerland. Beyond that, I plan to do a lot more sailing at sea; in fact, one of my dreams is to become a certified skipper for maritime sailing yachts.



Lastly, I have done a lot with analytics. I realized there could be process improvement and optimization if people and companies could successfully share information and results. For example, we cooperate with Swiss Federal Railways, who awarded us an innovation prize as a start-up, and they operate a huge number of trains and cars each day. The trains are overcrowded in many city areas. When they are full, commuters don’t know where to sit. In fact, a seat is a sharable resource. So, my “aha” moment here was about connecting people via a smartphone app—we could allow them to see and choose travel based on load and occupancy. They could make a conscious decision to board a certain train. We could really help improve this if we use predictive analytics. So, we do that too—this is much like managing meeting rooms, but on a really large scale.

Could you tell us more about shrebo as a business and how Pivotal and Cloud Foundry fit?
In a nutshell, companies use our stuff to build their own end-user sharing applications.

We provide platform software services for any sharing solution. If you are familiar with car sharing, such as a rental process or a smartphone app for scheduling a car like Zipcar or Uber in the U.S., our platform supports all the functionality you would need to build such an app through a set of developer APIs. The shrebo platform allows companies of any size to more easily and cost effectively develop their own sharing applications and online services and mix them with other mobile, social, and local apps.

In our initial architecture process, we evaluated several PaaS providers. Ultimately, we chose AppFog, now owned by CenturyLink. They run Cloud Foundry with Python, Java, Node.js, PHP, and Ruby. Our technology framework allows us to elastically provision services through AppFog’s services API. Also, Pivotal provides us with Redis as a data structure service and RabbitMQ as a message broker service.

How will your customer experience be improved through our technologies?
Well, our app is not directly visible through some user interface for end users, it is more of a white label solution. However, we provide app and data services that enable others to provide end-user applications. So, those companies can rely on our API to meet SLAs cost effectively. Eventually, they need their apps to perform or users will go somewhere else. If we fail, so do they.

In the context of us powering other technologies, we use Cloud Foundry and AppFog to provision services much faster, at a lower cost, and with greater scale as real-time demand shifts. This PaaS is certainly more cost effective than Amazon, and it’s less complex and risky than traditional hosting providers who make us build the stack by ourselves. It’s also very automated. All of these things allow us to deliver code, deploy updates, and fix issues on our platform quickly.

Do you have any examples where Cloud Foundry’s Elastic Runtime Service delivered something unexpected?
Yes, absolutely. At one point in time, the AppFog infrastructure needed an upgrade. They had some planned downtime and planned to make changes to one data center at a time across Singapore, Ireland, and the U.S. With Cloud Foundry, we had zero downtime even though they did. As they were bringing down a data center to make updates, we set up new instances on the fly and shifted workloads. As they moved around the world, we automatically migrated across data centers. Before Cloud Foundry, this was really not possible. We could increase and decrease instances on the fly. Quite amazing.

How does Cloud Foundry compare to other platforms you have worked with?

Provisioning is so different. I spent a lot of time in the financial services industry, about 10 years. We primarily worked with traditional hosting platforms. You get a Linux machine, then you do an install. Along the way, a group of experts all touch it before it is ready to deploy code. Cloud Foundry is totally different.

Like I said before, Cloud Foundry is a real blast of an experience as a developer. This type of thing used to take days or weeks. Now, it takes minutes. As a CTO, this is the experience every developer has been looking for over the past 10 years. We get the power of scripting—an event can trigger just about anything we need.

What other overarching Cloud Foundry capabilities are top of mind?
Flexibility and agility are top of the list. I see extensibility as a major benefit too. The whole architecture is built with open source contributions in mind. Adding new stacks is encouraged.

How does Cloud Foundry and AppFog support your stack?
Cloud Foundry is the PaaS and AppFog integrates and operates the PaaS and IaaS. They also provide a Django/Python stack on Cloud Foundry. We didn’t have to do anything to get this set up. It was provided when we turned on our account. This is the way development should work!

Could you tell us a bit about the other services you use on Cloud Foundry?

We use Redis and RabbitMQ—these are both very powerful services for data structure and messaging middleware, and we have two main use cases. We use MySQL for persistence, also provisioned and managed by Cloud Foundry.

First, we provide elements of a web app and APIs as a service to the outside world. The runtime is a traditional Python process intended for short-lived, synchronous workloads. Second, there are many asynchronous processes that trigger over time. For example, the website generates a lot of events. If someone books a resource, confirms a booking, or cancels, these are all events. Workflow workloads run asynchronously in the background and do various things like send an SMS or email. There is a separation of concerns between the two types of workloads, short-lived web requests and longer-lived background tasks, and we use RabbitMQ to decouple these sub-systems on both sides.

Our messaging, which is based on Celery, uses Redis as an in-memory data store to transport messages between the two areas—the web APIs and the background workers. In addition, we use Redis as a distributed lock instance: as soon as a reservation is made, we want to ensure no one else can grab the resource. Since we have four or five instances on the web API side, we run into a distributed lock problem, and Redis solves it for us. So, RabbitMQ is the broker, Redis powers the state of notification events, and it also acts as a data store to log transaction events, for all events and messages across the two workloads.
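For illustration, the distributed-lock pattern described above can be sketched as below. This is an assumed approach, not shrebo's actual code: an atomic SET with `nx:` (set only if absent) and `px:` (expiry in milliseconds) acquires the lock, and the expiry guarantees a crashed holder cannot block others forever. The `FakeRedis` class is an invented in-memory stand-in (it ignores expiry) so the sketch runs without a server; with the real `redis` gem you would pass a `Redis.new` client instead.

```ruby
require 'securerandom'

# In-memory stand-in for a Redis client, just enough for this sketch.
# Note: expiry (px) is accepted but ignored here.
class FakeRedis
  def initialize
    @store = {}
  end

  def set(key, value, nx: false, px: nil)
    return false if nx && @store.key?(key)
    @store[key] = value
    true
  end

  def get(key)
    @store[key]
  end

  def del(key)
    @store.delete(key)
  end
end

# Run the block only if the per-resource lock can be acquired.
# Returns true if the block ran, false if the lock was already held.
def with_resource_lock(redis, resource_id, ttl_ms: 5_000)
  token = SecureRandom.uuid
  key   = "lock:resource:#{resource_id}"
  return false unless redis.set(key, token, nx: true, px: ttl_ms)
  begin
    yield
    true
  ensure
    # Release only if we still own the lock (token must match).
    redis.del(key) if redis.get(key) == token
  end
end
```

A second caller attempting to lock the same resource while the first holds it gets `false` back instead of blocking, which matches the "ensure no one else can grab the resource" requirement.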

The workflows, events, and messages store data in an analytics warehouse. At this point, we are using Hadoop, but we haven’t deployed it at a large scale quite yet. We are also looking at Storm.


While you are a CTO, you were a developer at one point and probably get your hands dirty at times. What makes Cloud Foundry so cool for developers?
Cloud Foundry has a command line interface, and AppFog has built an API. As I mentioned earlier, this gives developers the power of scripting. We spend more time coding and less time dealing with the foundational platform. So, all deployment tasks—infrastructure, new instances, deploying code, unit testing, integration testing, screen tests—all of this can be scripted. We push a button to trigger these processes. While this is possible with any modern infrastructure, Cloud Foundry really gives us the ability to stay at a high level in the stack and avoid getting into details.

We also really like the separation between applications and code. When we write the deployment manifest and need services like MySQL or Redis, we just tell the system we need an instance of it in a few lines of configuration without having to worry about the config of the underlying service. The underlying stuff is all handled by the platform.
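As an illustration of those "few lines of configuration," a Cloud Foundry application manifest declaring such service bindings might look roughly like the fragment below. The application and service instance names are invented for the example, not shrebo's actual configuration:

```yaml
# manifest.yml — illustrative only; names are examples
applications:
- name: sharing-api
  memory: 512M
  instances: 4
  services:          # service instances provisioned and bound by the platform
  - redis-cache
  - rabbitmq-broker
  - mysql-db
```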

We have gotten off the ground very quickly. In fact, Facebook has this concept of hiring developers who deploy code the same day they start. Now, we have the same type of environment: one of our new developers can deploy code the same day they start with shrebo. This is one of the most exciting facts about Cloud Foundry. We can have a small team of four people work on the app and run the infrastructure, and we can scale people this way too. Five years ago, it would have required 10 or more people to run this same infrastructure and do development.

In terms of monitoring, are you using any of Cloud Foundry’s built in capabilities?

Well, we aren’t using the Cloud Foundry health check services API directly. We decided to use New Relic as the application-level performance metrics monitor. From the outside, we use Pingdom to verify uptime. New Relic was extremely easy to plug into Cloud Foundry applications; they have a Python extension that works out of the box with Cloud Foundry. As described, we use RabbitMQ to do app-level event processing, but we also use it to send New Relic custom events to their services infrastructure.

As a CTO/CIO level person with a financial services background, you have a clear handle on ROI, C-level metrics, and risk management. What about Cloud Foundry is powerful from this perspective?
If I talk about this from my banking background, I can easily see how Cloud Foundry produces efficiency and effectiveness—this helps reduce costs and risk. Let me explain.

When we release code without Cloud Foundry, it might take 4-6 months at a traditional bank, with more complexity and legacy systems. With Cloud Foundry, you can reduce that time to days or weeks. There is an order-of-magnitude improvement in these process metrics. There are fewer people involved, it is easier for developers to fix problems, and we can scale dynamically without emailing anyone. This all reduces cost and time, and as a result increases the productivity of the whole organization.

In terms of risk, Cloud Foundry clearly reduces operational risk. This is largely related to the lower number of people that need to be involved. For example, if an app shows signs of crashing, we don’t have to involve a team to resolve. A developer can fix the code, and an automated, daily build can update the problems or a button can redeploy without five people having to coordinate on a conference call or group chat. As well, I’ve described how the scale and uptime scenarios work—this reduces downtime and ensures SLA support. This is particularly important for mission-critical, revenue generating apps. So, Cloud Foundry has a direct impact on the risk of our top and bottom line.

Would you mind sharing an architecture slide on your stack?
Sure, see below. We use HTML5/Bootstrap, JSON/REST, Redis, MongoDB, RabbitMQ, MySQL, and Python/Django for the online services, or real-time runtime. There is some overlap here with our batch or periodic workloads; that part of the architecture is another conversation altogether. However, we use R, Elasticsearch, Storm, Hadoop, and much more. The infrastructure includes Nginx, Varnish, Gunicorn, Cloud Foundry, Cloudinary, Searchly, Amazon AWS, Mailgun, and GlobalSMS.

Thank you so much, is there anything you’d like to add from a personal perspective? What do you like to do in your off time? Anything you’d like to accomplish before you leave this little rock we call Earth?
Sure. In my spare time I like to spend time with friends and family. I also very much enjoy riding my bike in the woods, or sailing that very boat we used to share, which is based on a small lake in Switzerland. Building on this, I plan to do a lot more sailing at sea; in fact, one of my dreams is to become a certified skipper for maritime sailing yachts.


Openshift

How to use Redis on OpenShift from your Ruby application

26 Aug, 2014

What is Redis

Redis is an in-memory database. Persistence is optional and comes in two forms: RDB and AOF (Append Only File). RDB provides a point-in-time snapshot of the current data in memory. AOF, in comparison, saves every write operation to a log that can later be replayed, so the in-memory image can be recreated.
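For illustration, both persistence modes are toggled in redis.conf with a few directives; the values below are examples, not recommendations:

```
# RDB: snapshot to dump.rdb if at least 1 key changed within 900 seconds
save 900 1
save 300 10

# AOF: log every write operation, fsync the log once per second
appendonly yes
appendfsync everysec
```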

Redis provides many different data structures in a very efficient way, e.g. strings, key-value maps (hashes), lists, sets, and bitmaps. For a better overview, see the documentation. Because it provides so many data structures, Redis fits many use cases, and probably the most common one is as a replacement for Memcached as a caching solution.
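A quick redis-cli session illustrates a few of these structures (an invented transcript for illustration, with key names chosen for the example):

```
redis> LPUSH jobs "job-1"            # list
(integer) 1
redis> HSET user:1 name "Ada"        # hash (key-value map)
(integer) 1
redis> SADD tags "cache" "queue"     # set
(integer) 2
redis> SETBIT online 7 1             # bitmap
(integer) 0
```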

Getting Redis

There are several ways to get Redis on OpenShift. There is a community cartridge, there is a partner provider in the marketplace, or you can use an external provider. We won’t talk about the last one today, but you can take a look at this article for more information. Even though it’s written with Node.js in mind, you can still get the basic idea of how to integrate with a third party.

Community cartridge

Clayton Coleman implemented a scalable Redis cartridge for OpenShift. With this cartridge, you have your Redis instances directly on OpenShift in separate gears. To get more information on this solution, take a look at this blog post.

Partner provider

OpenShift also provides integration with partners. Using the Marketplace, you can simply deploy services through partners and have them integrated automatically with OpenShift. That’s what we will demonstrate in this blog post.

Creating the application

First, let’s create a Ruby application on OpenShift. Then we will integrate the app with the Redis provider.

You can do this through the web interface or just simply use the command line tool:


rhc app create redis ruby-2.0

Now we have a Ruby application we can work with. Next, open the application detail in the OpenShift web interface. It should look like this:

Ruby application instance

On the bottom line, you can see a link to the Marketplace … click it. In the top right corner is a search box; enter “redis” and let it search. You will get two services by “Redis Labs”.

Redis services in marketplace

We will choose “Redis Cloud”. Now that you see the overview of the service, open the “Editions and Pricing” tab.

Redis cloud editions and pricing tab

Click the grey “25MB” square. You will be taken to the screen where you add the plan to the purchase.

Add the plan to purchase

Then scroll down and click Continue.

Confirm the purchase

You will confirm the purchase.

Finish the process

And the purchase will be finished.

Now there is a Redis instance running in the Redis Cloud. The next step is to integrate it with your OpenShift application.

Integrating with OpenShift

Go to the “Purchased products” section … or click the “Go to applications” button on the last screen.

Redis in the list

From there, follow the “Add to application” button to the next screen.

Add Redis to application

From the list of your OpenShift applications, choose the one you need Redis for and click “Add Redis Cloud.”

Add Redis to OpenShift application

The item will change to allow you to remove Redis from the application later. The application now has all of the information required for connecting to Redis in an environment variable.

Getting the connection parameters

To get a quick view of the environment variable, you can SSH into the gear and list it.

rhc ssh redis
env | grep redis

Mine looks like this:

Redis connection in ENV variables

or as text:

rediscloud_4e70d={"port":"11571","hostname":"pub-redis-11571.us-east-1-4.3.ec2.garantiadata.com","password":"vQkqFqpifVNbnnY6"}

You read the environment variable from your application like this:

redis_config = ENV['rediscloud_4e70d']

Then you need to parse the JSON string. I recommend using MultiJson, as it allows you to switch JSON backends more easily.

require 'multi_json'
config = MultiJson.load(redis_config)

Finally, you can connect to Redis:

require 'redis'
redis = Redis.new(url: "redis://:#{config['password']}@#{config['hostname']}:#{config['port']}/")

Now you are all set to use Redis.

Conclusion

Using Redis from OpenShift is easy and there are multiple ways to do that. You can use a partner from the Marketplace, use an external provider, or deploy a community cartridge providing the Redis database.
