CumuLogic, PaaS

Docker Image for CumuLogic DBaaS for private clouds

18 Mar, 2015

We just released a Docker image for running CumuLogic DBaaS for private/hybrid clouds. The image is available on Docker Hub and can be downloaded by running: $ docker pull cumulogic/cumulogic:v1.2

What’s included in this image?

  • This is a demo version of the CumuLogic controller, but it is full featured and includes most of the functionality supported in the current release.
  • Database engines supported are MySQL, Percona Server, and MongoDB 2.x.
  • Integration with the Amazon cloud and with OpenStack and CloudStack private clouds.
  • Support for provisioning database instances on a pool of provisioned virtual machines or bare-metal servers.
  • Supports CentOS 6.x and RHEL.
  • MySQL instances support:
    • Percona Cluster
    • Single-node MySQL instances with the capability to launch read-only replicas across multiple zones
    • Multi-AZ instances – provisions cluster nodes across multiple zones
    • Capability to add/remove nodes and replica nodes as desired without database interruption
    • Full monitoring of system metrics as well as MySQL metrics
    • Automated failover and recovery to handle disk, network, VM, and zone failures
    • Automated snapshots and manual backups
    • Notifications on events of critical importance
  • MongoDB instances support:
    • Single node MongoDB instances
    • Replica sets
    • Auto provisioning of arbiter nodes based on the topology requested
    • Sharding with or without replica sets
    • Capability to add/remove replica sets
    • Capability to add additional shards
    • Automated failure recovery of replica nodes and shards
    • Automated full backups and snapshots
    • Monitoring and performance metrics

More details are available at https://registry.hub.docker.com/u/cumulogic/cumulogic/

Drop us a line at support [AT] cumulogic [DOT] com for any questions or help with running this image.

The post Docker Image for CumuLogic DBaaS for private clouds appeared first on Platform as a Service Magazine.


CloudFoundry

Build Newsletter: Open Source, PaaS, Big Data for Developers – February 2015

13 Feb, 2015

In this month’s Build Newsletter, the state of the open source, big data, and PaaS markets continues to be the main thread throughout the industry news.

Open Source

First, we love this report on the Linux Foundation’s assessment of how open source projects such as OpenStack, Cloud Foundry, and Docker are driving both innovation and enterprise readiness in cloud technology. Apache Hadoop® is another excellent example of how wide adoption of a project dramatically shifts the market, as we see in this article on Deutsche Bank’s latest Hadoop study (big data OSS). OSS has become both a wonderful way for companies to collaborate on technology and a way to create high-growth businesses, such as the record-breaking business performance of Pivotal Cloud Foundry. Two other related and noteworthy items include independent analyst Steve Chambers’ highlights on Cloud Foundry’s impressive first year, retracting a previous “bearish” attitude, and Matt Asay’s analysis asking if Cloud Foundry will be the next Red Hat.

In our own experience, we have seen customers shift their buying, preferring OSS-based solutions as much as possible. Tesora’s shift to OSS within the OpenStack ecosystem is a great example of this.

Individual contributors are the lifeblood of OSS, but you don’t have to be a developer checking in code to contribute. Here are 8 ways you can contribute to open source projects without writing code.

Sometimes, OSS projects may seem to run in their own silos or ecosystem niches. Part of the power of OSS comes when contributors help increase the “innovation surface area” by bridging and connecting technologies. Several excellent examples can be seen in multiple Spring and CF projects bridging PaaS, cloud, Apache Hadoop®, and MySQL.

Finally, in an effort to better align investment with the primary challenges Pivotal is trying to solve, Pivotal is looking for new sponsors for Groovy and Grails.

Custom Development and PaaS

First up is an interesting analysis by RedMonk, suggesting the most popular programming languages in use today. The top 5 all run on Cloud Foundry—JavaScript (e.g. Node.js), Java, PHP, Python, and Ruby (tied for 5th).

On to platform decisions—there is always the ongoing debate over whether to build or buy a development platform. Here is an explanation of how you might choose what works for you. Either way, development platforms are undergoing significant change with the rise of Docker and PaaS ecosystems such as Cloud Foundry, and this is having a profound effect on traditional IT operations and processes.

More technical descriptions of PaaS can be found in this slideshow of The Cloud Foundry Story from @DevOpsSummit and this deeper dive on Why Services are Essential to Your Platform as a Service.

Finally, a “How To” on 12-Factor App-Style Backing Services and a narrative on old versus new app deployment methods (with microservices in a PaaS) are two examples of techniques that today’s developers use with PaaS.

Big Data and Data Science for Developers

First, Gigaom suggests all developers need to become familiar with big data technologies and use cases, since soon every business application will likely incorporate some big data functionality.

For example, big data is making its way into digital travel services—Expedia plans to “double the size” of its Apache Hadoop® cluster in 2015 to help solve its big data challenges in the UK, having previously used only DB2 and Microsoft SQL databases.

Not convinced yet? Here are SaaS visionary Marc Benioff and two separate executive research surveys saying big data and predictive analytics are top priorities and that CEOs desire big data solutions: 1) PwC CEO Survey Recap: Mobile, Data Mining, and Analysis most important; 2) IDG Enterprise Big Data Research. Expect funding for future projects and all the market requirements you are building towards to reflect such priorities.

Cloud Foundry is useful for big data and analytical applications, as this blog about Cloud Foundry for Data Scientists reveals, and as shown by how Pivotal built a Super Bowl social sentiment analysis application in less than a day on Cloud Foundry using microservices.

Editor’s Note: Apache, Apache Hadoop, Hadoop, and the yellow elephant logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.


CloudFoundry

All Things Pivotal Podcast Episode #15: What is Pivotal Web Services?

10 Feb, 2015

You have a great idea for an app—and need a quick and easy place to deploy it.

You are interested in Platform as a Service—but you want a low risk and free way to try it out.

You are constrained on dev/test capacity and need somewhere your developers can work effectively and quickly—NOW.

Whilst you might be familiar with Pivotal CF as a Platform as a Service that you can deploy on-premises or in the cloud provider of your choice—you may not know that Pivotal CF is also available in a hosted form as Pivotal Web Services.

In this episode we take a closer look at Pivotal Web Services—what it is used for, and how you can take advantage of it.

PLAY EPISODE #15

 

RESOURCES:

Transcript

Speaker 1:

Welcome to the All Things Pivotal podcast, the podcast at the intersection of agile, cloud, and big data. Stay tuned for regular updates, technical deep dives, architecture discussions, and interviews. Please share your feedback with us by e-mailing podcast@pivotal.io.

Simon Elisha:

Hello everyone and welcome back to the All Things Pivotal podcast. Fantastic to have you back. My name is Simon Elisha. Good to have you with me again today. A quick and punchy podcast this week, but an interesting one nonetheless hopefully, answering a question that comes up quite commonly. Well, it’s really 2 questions. The first question is, how can I try out Pivotal Cloud Foundry really, really quickly without any set up time? Which often relates to my answer being, ‘Well have you heard of Pivotal Web Services?’ To which people say, ‘What is Pivotal Web Services?’ Also known as PWS. Also known as P-Dubs.

Pivotal Web Services is a service available on the Web, funnily enough, at run.pivotal.io. It is a hosted version of Pivotal Cloud Foundry running on Amazon Web Services in the US. It provides a platform that you, as a developer, can push applications to, organize your workspaces in, and really use as a development platform or even as a production location for your applications. It is a fully-featured running version of Pivotal CF in the cloud. Not surprisingly, that’s what Pivotal CF can do, but this provides a hosted version for you.

Let’s unpack this a little bit and have a look at what it is and why you might want to use it. The first thing that’s good to know is you can connect to run.pivotal.io straight away. You don’t need a credit card to start and you get a 60-day free trial. I’ll talk about what you get in that 60-day free trial shortly, but the good thing to know is you can go and try it straight away. Often when I’m talking to customers and they’re getting their toe in the water with platform as a service and they’re trying to understand what it is, they say, ‘Oh, where can I just try and push an app or test something out?’ I say, ‘Hey, go to Pivotal Web Services, it’s free, you can try it out, you can grab an application you’ve got on the shelf and just see what it’s like.’ They go, ‘Well that’s pretty cool, I can do it straight away.’ No friction in that happening.

In terms of what you can use on the platform, we currently support apps written in Java, Grails, Play, Spring, Node.js, Ruby on Rails, Sinatra, Go, Python, or PHP. Any of those will automatically be discovered and [hey presto 02:37] CF push and away we go. If you’ve been listening to previous episodes you’ll know the magic of the CF push process. If, however, you need another language, you can use a Community Buildpack or you can even write a custom one yourself that will run on the platform as well. Obviously, if you’re running an application you may want to consume some services. You can choose from a variety of third-party databases, e-mail services, and monitoring services that exist in the Marketplace on Pivotal Cloud Foundry. I’ll run you through what some of those services are because there really is a nice selection available for you.
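The automatic discovery Simon mentions is the buildpack detection step: each buildpack inspects the pushed app for a characteristic file. A toy sketch of that idea, assuming made-up marker files rather than the actual detect scripts:

```python
# Toy sketch of buildpack detection. Real buildpacks run a "detect" script
# against the app directory; these marker files are illustrative stand-ins.
DETECTORS = [
    ("java",   "pom.xml"),
    ("node",   "package.json"),
    ("ruby",   "Gemfile"),
    ("python", "requirements.txt"),
]

def detect_buildpack(app_files):
    """Return the first buildpack whose marker file is present in the app."""
    for buildpack, marker in DETECTORS:
        if marker in app_files:
            return buildpack
    return None  # would fall back to a community or custom buildpack

print(detect_buildpack({"package.json", "server.js"}))  # node
```

The real lifecycle runs each buildpack's detect script in order and picks the first match, which is why a custom buildpack can slot into the same flow.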

What you then do is bind those services into your application, and Pivotal Cloud Foundry, and P-Dubs in particular, takes care of all the connection criteria, the binding process, the credentials, etc., which makes it nice and easy. You may be saying, ‘Well hmm, what does it cost me to get access to this kind of platform?’ Well, it’s a really simple model. It’s application centric and you pay 3 cents US per gig per hour. That per-hour cost is for the amount of memory used by the application to run. Now, with that 3 cents you get your routing included, so your traffic routing and your load balancing, and you get up to 1 gig of ephemeral disk space on your app instances. You get free storage for your application files when they get pushed to the platform. You don’t pay for that storage cost at all. You get bandwidth both in and out, up to 2 terabytes of bandwidth. You get unified log streaming, which we’ll talk about, and health management, which we’ll also talk about.
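The binding step hands your app its connection details at runtime: Cloud Foundry exposes bound-service credentials to the app in the VCAP_SERVICES environment variable as JSON. A minimal sketch of reading them—the service label and credential fields below are hypothetical examples, not a real binding:

```python
import json
import os

# Simulate the JSON the platform would inject for a hypothetical bound
# MySQL service; in a real app, Cloud Foundry sets VCAP_SERVICES for you.
os.environ["VCAP_SERVICES"] = json.dumps({
    "cleardb": [{
        "name": "my-mysql",
        "credentials": {"hostname": "db.example.com", "username": "app",
                        "password": "s3cret", "name": "appdb"},
    }]
})

def service_credentials(label):
    """Return the credentials block for every bound instance of a service."""
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    return [instance["credentials"] for instance in services.get(label, [])]

print(service_credentials("cleardb")[0]["hostname"])  # db.example.com
```

Because the credentials arrive through the environment, the same code works unchanged whether the app runs on PWS or an on-premises Pivotal CF.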

As you can imagine, this can be a very cost-effective platform for dev, test, and production workloads because you’re only paying for what you use when you use it, and you’re only paying at the application layer on a per-memory basis. Now, there’s a really handy pricing tab on the Pivotal Web Services page that lets you put in how many app instances you’d need for your application, and it will punch out for you that cost on a per-month basis for the hosting, which is really, really nice.
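As a back-of-the-envelope check on the 3-cents-per-gig-per-hour model (assuming roughly 730 hours in a month):

```python
def monthly_cost(instances, gb_per_instance, rate=0.03, hours_per_month=730):
    """Estimate the monthly bill: memory-hours consumed times the hourly rate."""
    return instances * gb_per_instance * rate * hours_per_month

# Three 512 MB instances running around the clock:
print(round(monthly_cost(3, 0.5), 2))  # 32.85
```

So a small three-instance app costs on the order of tens of dollars a month, which is the pricing-tab arithmetic done by hand.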

What are some of the things that we allow you to do with this platform? What are some of the benefits? As I mentioned, you get the 60-day free trial, and in the 60-day free trial you get 2 gig of application memory, so you can run applications that consume up to 2 gig of aggregate memory. You can have up to 10 application services from the free tier of the Marketplace. This means you get to play with quite a lot of capability at very low cost, very, very easily.

Aside from pushing your app, which is, yeah, nice and easy and something you want to do, what else can we do with this? Well, we can [elect 05:17] to have performance monitoring. In the developer console, which you can log into, you can see all your spaces, your applications and their status, how many services are bound to them, etc. You can drill into them in more detail to see what they’re actually consuming. If you want even more detailed monitoring, so inside-the-application type monitoring, you can use New Relic for that, and that’s a service that’s offered in the Marketplace. It has a zero-touch configuration. For Java applications, you can basically [crank and bind 05:47] your New Relic service to your app very, very simply with basically no configuration. It’s amazing. For other languages like Ruby or JavaScript, you have to have the New Relic [agent 05:56] running, but it’s still a pretty trivial process to get it up and going.

Now, once your application is running, you probably want to make sure it keeps running. A normal desire to have. We have this thing called the Health Manager. This is an automated system that monitors your application for you, and if your application instances exit due to an error, or something happens where the number of instances is less than the number you actually created when you did your CF push or CF scale, the platform will automatically recover those particular instances for you. Obviously, the log will be updated to indicate that that took place. If you set up an application and you have 3 instances running, it will run them for you. If one of them fails, it will spin up another one for you and you’re good to go.
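The Health Manager behaviour described above is essentially a reconciliation loop: compare the desired instance count against what is actually running and start replacements for the shortfall. A simplified sketch of that idea (not the actual Health Manager code):

```python
def reconcile(desired_count, running):
    """Top the running list back up to the desired count by starting
    replacement instances for any that have exited (simplified sketch)."""
    shortfall = max(desired_count - len(running), 0)
    return running + [f"replacement-{i}" for i in range(shortfall)]

instances = ["app-0", "app-1", "app-2"]   # cf push with 3 instances
instances.remove("app-1")                 # one instance crashes
instances = reconcile(3, instances)
print(len(instances))  # 3
```

Running this check continuously is what lets the platform heal failed instances without any operator action.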

Another capability is, of course, the Unified Log Streaming. One of the features of Pivotal CF is the ability to bring logs together from multiple application instances into the one place. In PWS, we do the same thing. We have this streaming log API that will send all the information, from all the components, for your application to the one location. You can tail this interactively yourself, or you can use a syslog drain to a third-party tool you may like. Tools like Splunk or Logstash, etc. The entries are all scoped by a unique application ID and an instance index, so you can correlate across multiple events and see how they all fit together, which is nice.
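Because every aggregated log entry is scoped by application ID and instance index, correlating events is a simple grouping exercise. A sketch with made-up log tuples standing in for the streamed entries:

```python
from collections import defaultdict

# Hypothetical aggregated log entries: (app_guid, instance_index, message),
# mimicking the scoping the unified log stream applies to each line.
logs = [
    ("web-app-guid", 0, "GET /health 200"),
    ("web-app-guid", 1, "GET / 200"),
    ("web-app-guid", 0, "GET /orders 500"),
]

by_instance = defaultdict(list)
for guid, index, message in logs:
    by_instance[(guid, index)].append(message)

print(len(by_instance[("web-app-guid", 0)]))  # 2
```

A syslog drain consumer in Splunk or Logstash can do the same grouping at scale, since the GUID and index travel with every line.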

The system also has a really nice Web console, which is built for really agile developers to use. You jump in, you can see what applications are running, where, who started them, what’s going on. You can even connect your spaces with your CI pipeline to make sure that builds are going into the correct lifecycle stage and being deployed appropriately as well. You can also see quotas and billing across your spaces, because you have access to organizations and spaces as well. We’ll talk about organizations and spaces in another episode.

What about from a services perspective? What are some of the services that we have available in the Marketplace? Well, it’s growing all the time. It’s a movable feast, as I like to say. We have a number. I’ll just call out a few highlight ones. Things like Searchify for search, BlazeMeter for load testing, Redis Cloud, which is an enterprise-class cache. We talked about caches a little while ago. ClearDB, which is a MySQL database service. We have Searchly for ElasticSearch. We have Memcached [D 08:19] Cloud. We have SendGrid for sending e-mail, MongoLab for MongoDB as a service, New Relic obviously for access to performance metrics, RabbitMQ through CloudAMQP, ElephantSQL for PostgreSQL as a service, etc., etc. A good selection of services is available there for you to use.

It’s interesting seeing what people use this for. Often, customers use this for a dev and test experience, or to get their developers up to speed with using platform as a service. A company called [Synapse, which I’ll say 08:49], a small, or young I should say, Boston-based company that builds software-as-a-service web and mobile apps for consumer startups, decided to use Pivotal Web Services for their platform because they wanted to have the same developer experience through dev, test, and production, and it completely suited their needs. It gave them the flexibility in terms of how they built the application, it gave them the sizing requirements they needed, etc. The other nice thing that they got out of it was the ability to deploy their particular application both in the public cloud and in private clouds that customers wanted to run. What they realized is that if they had customers who said, ‘Hey, we really like your particular application, we like your service, but we want to run it in-house for whatever reason,’ they had a very simple and easy way to say, ‘Hey, you just run Pivotal CF internally, we bring our code across, and it will work fine.’ A really interesting example there.

If you’ve ever wanted to have a play with Pivotal CF, wondered how it looks, and what the experience is like from a developer perspective, then Pivotal Web Services, or PWS, is the place to go. That’s run.pivotal.io. There’s a 60-day free trial. You don’t have to enter your credit card when you sign up for the free trial. You can have a bit of an experiment and see how you go. Hopefully you’ll be able to make something pretty cool and until then, talk to you later, and keep on building.

Speaker 1:

Thanks for listening to the All Things Pivotal podcast. If you enjoyed it, please share it with others. We love hearing your feedback, so please send any comments or suggestions to podcast@pivotal.io.


CloudFoundry

All Things Pivotal Podcast Episode #15: What is Pivotal Web Services?

10 Feb , 2015  

featured-pivotal-podcastYou have a great idea for an app—and need a quick and easy place to deploy it.

You are interested in Platform as a Service—but you want a low risk and free way to try it out.

You are constrained on dev/test capacity and need somewhere your developers can work effectively and quickly—NOW.

Whilst you might be familiar with Pivotal CF as a Platform as a Service that you can deploy on-premises or in the cloud provider of your choice—you may not know that Pivotal CF is also available in a hosted for as Pivotal Web Services.

In this episode we take a closer look at Pivotal Web Services—what is it used for, and how you can take advantage of it.

PLAY EPISODE #15

 

RESOURCES:

Transcript

Speaker 1:

Welcome to the All Things Pivotal podcast, the podcast at the intersection of agile, cloud, and big data. Stay tuned for regular updates, technical deep dives, architecture discussions, and interviews. Please share your feedback with us by e-mailing podcast@pivotal.io.

Simon Elisha:

Hello everyone and welcome back to the All Things Pivotal podcast. Fantastic to have you back. My name is Simon Elisha. Good to have you with me again today. A quick and punchy podcast this week, but an interesting one nonetheless hopefully and answering a question that comes up quite commonly. Well, it’s really 2 questions. The first question is, how can I try out Pivotal Cloud Foundry really, really quickly without any set up time? Which often relates to my answer being, ‘Well have you heard of Pivotal Web Services?’ To which people say, ‘What is Pivotal Web Services?’ Also known as, PWS. Also known as P-Dubs.

Pivotal Web Services is a service available on the Web, funnily enough, at run.pivotal.io. It is a hosted version of Pivotal Cloud Foundry running on Amazon Web Services in the US. It provides a platform upon which you can use as a developer, push applications to it, organize your workspaces, and really use as a development platform or even as a production location for your applications. It is a fully-featured running version of Pivotal CF in the Cloud. Not surprisingly, that’s what Pivotal CF can do, but this provides a hosted version for you.

Let’s unpack this a little bit and have a look at what it is and why you might want to use it. The first thing that’s good to know is you can connect to run.pivotal.io straight away. You don’t need a credit card to start and you get a 60-day free trial. I’ll talk about what you get in that 60-day free trial shortly, but the good thing to know is you can go and try it straight away. Often when I’m talking to customers and they’re getting their toe in the water with platforms and service and they’re trying to understand what it is and they say, ‘Oh where can I just try and push an app or test something out?’ I say, ‘Hey go to Pivotal Web Services, it’s free, you can try it out, you can grab an application you’ve got on the shelf and just see what it’s like.’ They go, ‘Well that’s pretty cool, I can do it straight away.’ No friction in that happening.

In terms of what you can use on the platform, so we currently support apps written in Java, Grails, Play, Spring, Node.js, Ruby on Rails, Sinatra, Go, Python, or PHP. Any of those ones will automatically be discovered and [hey presto 02:37] CF push and away we go. If you’ve been listening to previous episodes you’ll know the magic of the CF push process. If however you need another language, you can use a Community Buildpack or you can even write a custom one yourself that will run on the platform as well. Obviously, if you’re running an application you may want to consume some services. You can choose from a variety of third party data bases, e-mail services, monitoring services, that exist in the Marketplace, that exist on Pivotal Cloud Foundry. I’ll run you through what some of those services are because there really is a nice selection available for you.

What you then do is you buy into those services within your application and Pivotal Cloud Foundry, and P-Dubs in particular, takes care of all the connection criteria or the buying in process or the credentials etc, which make it nice and easy. You may be saying, ‘Well hmm, what does this cost me to get access to this kind of platform?’ Well, it’s a really simple, simple model. It’s application centric and you pay 3 cents US per gig per hour. That’s the per hour cost is for the amount of memory used by the application to run. Now, with that 3 cents you get included your routing, so your traffic routing, your load balancing, you can up to 1 gig of ephemeral disk space on your app instances. You get free storage for your application files when they get pushed to the platform. You don’t pay for that storage cost at all. You get bandwidth both in and out, up to 2 terabytes of bandwidth. You get unified log streaming, which we’ll talk about and health management, which we’ll also talk about.

As you can imagine, this could be very cost-effective platform for dev test and production workloads because you’re only paying for what you use when you use it and you’re only paying at the application layer on a per memory basis. Now, there’s a really handy pricing tab on the Pivotal Web Services page that lets you put in how many app instances you’d need for your application and will punch out for you that cost on a per month basis for the hosting, which is really, really nice.

What are some of the things that we allow you to do with this platform? What are some of the benefits? As I mentioned, you get the 60-day free trial and the 60-day free trial, you get 2 gig of application memory, so it can run applications that consume up to 2 gig of aggregate memory. It can have up to 10 application services from the free tier of the Marketplace. This means you get to play with quite a lot of capability at very low cost, very, very easily.

Aside from pushing your app, which is yeah, nice and easy and something you want to do, what else do we do with this? Well, we can [elect 05:17] to have performance monitoring. In the developer console, which you can log into, you can see all your spaces, your applications and their status, how many services are bound to them etc. You can drill into them in more detail to see what they’re actually consuming. If you want even more detailed monitoring, so inside the application type monitoring, you can use New Relic for that and that’s a service that’s offered in the Marketplace. It has a zero touch configuration. For Java applications, you can basically [crank and bind 05:47] you New Relic service to your app very, very simply with basically no configuration. It’s amazing. For other languages like Ruby or Java Script, you have to the New Relic [agent 05:56] running, but it’s still a pretty trivial process to get it up and going.

Now, once your application is running, you probably want to make sure it keeps running. A normal desire to have. We have this thing called, The Health Manager. This is an automated system that monitors your application for you and if your application instances exit you to an error or something happens where the number of instances is less than the ones that you actually created when you did your CF push or CF Scale, the platform will automatically recover those particular instances for you. Obviously, the log will be updated to indicate that that took place. If you set up an application and you have 3 instances running, it will run them for you. If one of them fails, it will spin up another one for you and you’re good to go.

Another capability is, of course, the Unified Log Streaming. One of the features of Pivotal CF is the ability to bring logs together from multiple application instances into the one place. In PWS, we do the same thing. We have this streaming log API that will send all the information, all the components, for your application to the one location. You can tailor this interactively yourself or you can use a syslog drain too, once you have a third party tool you may like. Tools like, Splunk or Logstash etc. They’re all scoped by a unique application ID and an instance index, so they can correlate across multiple events and see how they all fit together, which is nice.

The system also has a really nice Web console, which is built for really agile developers to use. You jump in, you can see what applications are running, where, who started them, what’s going on. You can even connect your spacers with your CI pipeline to make sure that builds are going into the correct life cycle stage of being deployed appropriately as well. You can also see quotas and building across your spacers because you have access to organizations and spacers as well. We’ll talk about organizations and spacers in another episode.

What about from a services perspective? What are some of the services that we have available in the Marketplace? Well, it’s growing all the time. It’s a movable face, as I like to say. We have a number. I’ll just call out a few highlight ones. Things like, Searchify for search, BlazeMeter for load testing, Redis Cloud, which is an enterprise-class cache. We talked about caches a little while ago, ClearDB, which is a MySQL database service. We have Searchly, ElasticSearch. We have the Memcached [D 08:19] Cloud. We have SendGrid for sending e-mail, MongoLab for MongoDB as a service, New Relic obviously for access to performance criteria. RabbitMQ, so through Cloud AMQP, ElephantSQL, PostgreSQL as a service etc, etc. A good selection of services there are available to you to use.

It’s interesting seeing what people use this for. Often, customers use this for a dev and test experience, or to get their developers up to speed with using a platform as a service. A company called Synapse, which is a small, or young I should say, Boston-based company that builds software-as-a-service web and mobile apps for consumer startups, decided to use Pivotal Web Services for their platform because they wanted the same development experience through dev, test and production, and it completely suited their needs. It gave them the flexibility in terms of how they built the application, it gave them the sizing requirements they needed, etc. The other nice thing that they got out of it was the ability to deploy their particular application both in the public cloud or in private clouds that their customers wanted to run. What they realized is that if they had customers who said, ‘Hey, we really like your particular application, we like your service, but we want to run it in-house for whatever reason,’ they had a very simple and easy answer: ‘You just run Pivotal CF internally, we bring our code across, and it will work fine.’ A really interesting example there.

If you’ve ever wanted to have a play with Pivotal CF, and wondered how it looks and what the experience is from a developer perspective, then Pivotal Web Services or PWS is the place to go. That’s run.pivotal.io. There’s a 60-day free trial, and you don’t have to enter your credit card when you sign up. You can have a bit of an experiment and see how you go. Hopefully you’ll be able to make something pretty cool. Until then, talk to you later, and keep on building.

Speaker 1:

Thanks for listening to the All Things Pivotal podcast. If you enjoyed it, please share it with others. We love hearing your feedback, so please send any comments or suggestions to podcast@pivotal.io.


CloudFoundry

Why Services are Essential to Your Platform as a Service

27 Jan, 2015  

For most organizations, there is a constant battle between the need to rapidly develop and deploy software and the need to effectively manage the environment and deployment process. As a developer, you struggle to move new applications to production, and provisioning of supporting services can take weeks, if not months. IT operations, on the other hand, is balancing the backlog of new service requests with the need to keep all the existing (and growing) services up and patched. Each side is challenged with meeting the needs of an ever-changing business.

What are Services?

A Service is defined as “an act of helpful activity; help, aid.” A Service should make your life easier. Pivotal believes that Platform as a Service (PaaS) should make administrators’ and developers’ lives easier, not harder. Services available through the Pivotal Cloud Foundry platform allow resources to be easily provisioned on-demand. These services are typically middleware, frameworks, and other “components” used by developers when creating their applications.

Services extend a PaaS to become a flexible platform for all types of applications. Services can be as unique as an organization or an application requires. They can bind applications to databases or allow the integration of continuous delivery tools into a platform. Services, especially user-provided services, can also wrap other applications, like a company’s ERP back-end or a package tracking API. The accessibility of Services through a single platform ensures developers and IT operators can truly be agile.

Extensibility Through a Services-Enabled Platform

The availability of Services within the platform is one of the most powerful and extensible features of Pivotal Cloud Foundry. A broad ecosystem of software can run and be managed from within the Pivotal Cloud Foundry platform, and this ensures that enterprises get the functionality they need.

Robust functionality from a single source reduces the time spent on configuration and monitoring. It also has the added benefit of improving scalability and time-to-production. Services allow administrators to provide pre-defined database and middleware services, and this gives developers the ability to rapidly deploy a software product from a menu of options without the typical slow and manual provisioning process. This is also done in a consistent and supportable way.

Managed Services Ensure Simple Provisioning

One of the features that sets Pivotal Cloud Foundry apart from other platforms is the extent of the integration of Managed Services. These Services are managed and operated ‘as a Service,’ and this means they are automatically configured upon request. The provisioning process also incorporates full lifecycle management support, like software updates and patching.

Automation removes the overhead from developers, who are often saddled with service configuration responsibility. It makes administrators’ lives easier and addresses security risks by standardizing how services are configured and used—no more one-off weirdness in configuration. The result is true self-provisioning.

A few of the Pivotal Services, like RabbitMQ, are provided in a highly available capacity. This means that when the Service is provisioned it is automatically clustered to provide high availability. This relieves much of the administrative overhead of deploying and managing database and middleware Services, as well as the significant effort of correctly configuring a cluster.

Broad Accessibility With User-Provided Services

In addition to the integrated and Managed Services, Pivotal Cloud Foundry supports a broad range of User-Provided Services. User-Provided Services are services that are currently not available through the Services Marketplace, meaning the Services are managed externally from the platform, but are still accessible by the applications.

Because User-Provided Services are created outside of the platform but remain completely accessible to the application, this Service extension enables database Services, like Oracle and DB2 Mainframe, to be easily bound to an application, guaranteeing access to all the Services that applications need.

Flexible Integration Model

Access to all services, both managed and user-provided, is handled via the Service Broker API within the Pivotal Cloud Foundry platform. This module provides a flexible, RESTful API and allows service authors (those that create the services) to provide self-provisioning services to developers.

The Service Broker is not opinionated. It can be crafted to suit the unique needs of the environment and organization. The Service Broker functionality ensures the extensibility of the platform and also allows administrators to create a framework that developers can operate within, supporting agile deployments. This framework provides consistency and reproducibility within the platform. It also has the added benefit of limiting code changes required by applications as they move through the development lifecycle.
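To give a feel for the shape of this API: a broker advertises what it offers through a catalog endpoint, and the platform provisions instances with plain REST calls. The sketch below uses the Cloud Foundry v2 broker endpoints; the URL, credentials, and GUIDs are placeholders, not values from any real deployment.

```shell
# A hypothetical broker; brokers authenticate with basic auth plus a
# version header that states which broker API revision the platform speaks.
BROKER_URL="https://broker.example.com"

# Fetch the catalog of services and plans the broker advertises.
curl -s -u admin:secret -H "X-Broker-Api-Version: 2.4" \
  "$BROKER_URL/v2/catalog"

# The platform provisions an instance by PUTting to an instance GUID it
# chooses; the body identifies the requested service and plan.
curl -s -u admin:secret -X PUT \
  -H "X-Broker-Api-Version: 2.4" -H "Content-Type: application/json" \
  -d '{"service_id":"SERVICE-GUID","plan_id":"PLAN-GUID","organization_guid":"ORG-GUID","space_guid":"SPACE-GUID"}' \
  "$BROKER_URL/v2/service_instances/INSTANCE-GUID"
```

Because the broker is just a REST service behind this contract, a service author can back those endpoints with anything, which is exactly why the broker is described as unopinionated.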

As an example of the customization capabilities, a customer created a Service Broker that not only adjusts the network topology when an application is deployed to an environment, but also adjusts the security attributes. An application could have fairly open access to an organization’s raw market material, but access to a core billing system would be limited and privileged.

Security Integrated into Each Service

The Service Broker gives administrators the ability to define access control of services. Service-level access control ensures developers and applications only have access to the environment and service necessary. When a Service is fully managed, credentials are encapsulated in the Service Broker. The result is that passwords no longer need to be passed across different teams and resources, but instead are managed by a single administrator.

Finally, the Service Broker provides full auditing of all services. Auditing simply keeps track of which Services have been created or changed, all through the API. This type of audit trail is great if you are an administrator trying to figure out who last made changes to a critical database service.

Self-Service for the Masses

Managed Services are available for download from Pivotal Network, and are added by an administrator to the platform. All Services available within the platform are accessible to developers via the Marketplace. The Marketplace allows self-provisioning of software as it is needed by developers and applications.

Services from Pivotal like RabbitMQ, Redis, and Pivotal Tracker, as well as popular third-party software, like Jenkins Enterprise by CloudBees and Datastax Enterprise Cassandra, are available immediately. The Marketplace provides a complete self-service catalog, speeding up the development cycle.


Services View in Pivotal Network

The breadth and availability of Services ensures that operators provide development teams access to the resources that they need, when they need them. A developer, who is writing a new application that requires a MySQL database, can easily select and provision MySQL from the Marketplace. The platform then creates the unique credentials for the database and applies those to the application.
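Under the hood, those generated credentials arrive in the application’s VCAP_SERVICES environment variable as JSON. The payload below is representative rather than literal (field names vary by broker), and the extraction shown is just one crude way to pull a value out in a shell script:

```shell
# Representative VCAP_SERVICES for a bound MySQL service (illustrative;
# real brokers include more fields such as port and a full JDBC URI).
VCAP_SERVICES='{"cleardb":[{"name":"mysql-db","credentials":{"hostname":"db.example.com","username":"appuser","password":"s3cret"}}]}'

# Crude field extraction without jq: grab the hostname value.
echo "$VCAP_SERVICES" | grep -o '"hostname":"[^"]*"' | cut -d'"' -f4
# prints: db.example.com
```

In practice an application would parse this JSON with its own language’s JSON library; the point is simply that the platform, not a human, minted and delivered the credentials.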

Rapid Time to Market with Mobile Services

The expansive Services catalog extends to the Pivotal Mobile Services, announced in August 2014. These mobile backend services allow organizations to rapidly develop and deploy mobile applications. The accessibility of mobile backend services through the Pivotal Cloud Foundry platform ensures that developers are able to easily build new mobile applications leveraging capabilities such as push notifications and data sync.

Essentials of a Services-Enabled Platform

Developers want to quickly deploy a database or middleware service, without a slow and manual provisioning process. IT operators want to be able to quickly meet the growing requests for new services, while also securely managing a complex environment. Provisioning Services through a PaaS is the natural solution to balancing the needs of the developers and IT operators, all while meeting the needs of the business.

A PaaS should provide simple access to full lifecycle management for services—from click-through provisioning to patch management and high availability. At Pivotal we have seen tremendous success with the release of services based on CloudBees and DataStax. The Pivotal Services ecosystem continues to grow, as do the capabilities of the Service Broker. This growth ensures the Pivotal Cloud Foundry platform will continue to meet the needs of organizations.



Uncategorized

MemSQL open sources tool that helps move data into your database

30 Dec, 2014  

Database startup MemSQL said today that it open sourced a new data transfer tool called MemSQL Loader that helps users haul over vast quantities of data from sources like Amazon S3 and the Hadoop Distributed File System (HDFS) into either a MemSQL or MySQL database. While […]

MemSQL open sources tool that helps move data into your database originally published by Gigaom, © copyright 2014.

Continue reading…


PaaS

Introducing Premium Storage: High-Performance Storage for Azure Virtual Machine Workloads

11 Dec, 2014  

We are excited to announce the preview of Microsoft Azure Premium Storage Disks. With the introduction of Premium Storage, Microsoft Azure now offers two types of durable storage: Premium Storage and Standard Storage. Premium Storage stores data on the latest Solid State Drive (SSD) technology, whereas Standard Storage stores data on Hard Disk Drives (HDDs).

Premium Storage is specifically designed for Azure Virtual Machine workloads requiring consistent high performance and low latency. This makes them highly suitable for I/O-sensitive SQL Server workloads. Premium Storage is currently available only for storing data on disks used by Azure Virtual Machines.

You can provision a Premium Storage disk with the right performance characteristics to meet your requirements. You can then attach several persistent disks to a VM, and deliver to your applications up to 32 TB of storage per VM with more than 50,000 IOPS per VM at less than one millisecond latency for read operations.

With Premium Storage, Azure offers the ability to truly lift-and-shift your demanding enterprise applications – like SQL Server, Dynamics AX, Dynamics CRM, Exchange Server, MySQL, and SAP Business Suite – to the cloud.

Currently, Premium Storage is available for limited preview. To sign up for Azure Premium Storage preview, visit Azure Preview page.

 

Premium Storage Benefits

We designed the service specifically to enhance the performance of I/O-intensive enterprise workloads, while providing the same high durability as Locally Redundant Storage.

Disk Sizes and Performance

Premium Storage disks provide up to 5,000 IOPS and 200 MB/sec throughput, depending on the disk size. For calculating IOPS, we use 256 KB as the IO unit size: IOs smaller than 256 KB are counted as one unit, and bigger IOs are counted as multiple IOs of 256 KB.
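To make the billing unit concrete, here is the round-up arithmetic implied by that rule (my own illustration, not Microsoft’s published formula):

```shell
# IOs are counted in 256 KB units, rounded up: a 4 KB write and a 256 KB
# write each cost one unit, while a 1 MB write costs four.
UNIT_KB=256
for IO_KB in 4 256 1024; do
  echo "$IO_KB KB -> $(( (IO_KB + UNIT_KB - 1) / UNIT_KB )) unit(s)"
done
# prints:
# 4 KB -> 1 unit(s)
# 256 KB -> 1 unit(s)
# 1024 KB -> 4 unit(s)
```

The practical consequence is that issuing many small IOs burns through the per-disk IOPS budget faster than the same bytes written as large sequential IOs.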

You will need to select the disk sizes based on your application performance and storage capacity needs. We offer three Premium Storage disk types for preview.

Disk Type            P10         P20         P30
Disk Size            128 GB      512 GB      1024 GB
IOPS per Disk        500         2300        5000
Throughput per Disk  100 MB/sec  150 MB/sec  200 MB/sec

The disk type is determined based on the size of the disk you store into your premium storage account. See Premium Storage Overview for more details.

You can maximize the performance of your “DS” series VMs by attaching multiple Premium Storage disks, up to the network bandwidth limit available to the VM for disk traffic. For instance, with a 16-core “DS” series VM, you can attach up to 32 TB of data disks and achieve over 50,000 IOPS. To learn the disk bandwidth available for each VM size, see Virtual Machine and Cloud Service Sizes for Azure.
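As a sanity check on those numbers, assuming the disks are 1024 GB P30s: thirty-two of them give the quoted 32 TB, and their combined disk-level ceiling is well above the 50,000 IOPS figure, which is why the VM’s bandwidth limit, not the disks, is the bottleneck.

```shell
# Aggregate capacity and disk-level IOPS ceiling for 32 x P30 disks.
DISKS=32
SIZE_GB=1024        # P30 disk size
IOPS_PER_DISK=5000  # P30 per-disk limit
echo "capacity: $(( DISKS * SIZE_GB )) GB"            # 32768 GB = 32 TB
echo "disk ceiling: $(( DISKS * IOPS_PER_DISK )) IOPS" # 160000, capped by the VM
```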

Durability

Durability of data is of utmost importance for persistent storage. Azure customers have critical applications that depend on the persistence of their data and high tolerance against failures. That is why, for Premium Storage, we implemented the same level of high durability using our Locally Redundant Storage technology. Premium Storage keeps three replicas of data within the same region.

We also recommend that you use the storage service commands to create snapshots and to copy those snapshots to a Standard GRS storage account for keeping a geo-redundant snapshot of your data.

Specialized Virtual Machines

We are also launching special Virtual Machines to further enhance the performance of Premium Storage disks. These VMs leverage new caching technology to provide extremely low latency for read operations. In order to use Premium Storage, you must use these special series VMs. Currently, only “DS” series VMs support Premium Storage disks.

These VMs also support Standard Storage disks. Thus you could have a “DS” series VM with a mix of Premium and Standard Storage based disks to optimize your capacity, performance and cost. You can read more about “DS” series VMs here.

Pricing

Pricing for the new Premium Storage service is here. During preview, Premium Storage will be charged at 50% of the GA price.

 

Getting Started

Step 1: Sign up for service

To sign up, go to the Azure Preview page, and sign up for the Microsoft Azure Premium Storage service using one or more of your subscriptions. As subscriptions are approved for the Premium Storage preview, you will get an email notifying you of the approval.

We are seeing overwhelming interest for trying out Premium Storage, and we will be opening up the service slowly to users in batches, so please be patient after signing up.

Step 2: Create a new storage account

Once you get the approval notification, you can then go to the Microsoft Azure Preview Portal and create a new Premium Storage account using the approved subscription. While creating the storage account be sure to select “Premium Locally Redundant” as the account type.


Currently, Premium Storage is available for preview in the following regions:

    • West US
    • East US 2
    • West Europe

Step 3: Create a “DS” series VM

You can create the VM via Microsoft Azure Preview Portal, or using Azure PowerShell SDK version 0.8.10 or later. Make sure that your Premium Storage account is used for the VM.

Following is a PowerShell example to create a VM by using the DS-series under your Premium storage account:

$storageAccount = "yourpremiumaccount"
$adminName = "youradmin"
$adminPassword = "yourpassword"
$vmName = "yourVM"
$location = "West US"
$imageName = "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201409.01-en.us-127GB.vhd"
$vmSize = "Standard_DS2"
$OSDiskPath = "https://" + $storageAccount + ".blob.core.windows.net/vhds/" + $vmName + "_OS_PIO.vhd"
$vm = New-AzureVMConfig -Name $vmName -ImageName $imageName -InstanceSize $vmSize -MediaLocation $OSDiskPath
Add-AzureProvisioningConfig -Windows -VM $vm -AdminUsername $adminName -Password $adminPassword
New-AzureVM -ServiceName $vmName -VMs $vm -Location $location

If you want more disk space for your VM, attach a new data disk to an existing DS-series VM after it is created:

$storageAccount = "yourpremiumaccount"
$vmName = "yourVM"
$vm = Get-AzureVM -ServiceName $vmName -Name $vmName
$LunNo = 1
$path = "https://" + $storageAccount + ".blob.core.windows.net/vhds/" + "myDataDisk_" + $LunNo + "_PIO.vhd"
$label = "Disk " + $LunNo
Add-AzureDataDisk -CreateNew -MediaLocation $path -DiskSizeInGB 128 -DiskLabel $label -LUN $LunNo -HostCaching ReadOnly -VM $vm | Update-AzureVM

If you want to create a VM using your own VM image or disks, you should first upload the image or disks to your Premium Storage account, and then create the VM using that.

Summary and Links

To summarize, we are very excited to announce the new SSD-based Premium Storage offering, which enhances VM performance and greatly improves the experience for I/O-intensive workloads like databases. As we always do, we would love to hear feedback via comments on this blog or the Azure Storage MSDN forum, or by email to mastoragequestions@microsoft.com.


The post Introducing Premium Storage: High-Performance Storage for Azure Virtual Machine Workloads appeared first on Platform as a Service Magazine.


CloudFoundry

Getting Started with WordPress on Cloud Foundry

11 Dec, 2014  

One of the most popular third-party web applications out there is WordPress. Individuals and companies alike choose the software for its ease of use and its capabilities as a blogging platform.

Since WordPress is such a popular piece of software, I wanted to ensure it could be easily deployed on Cloud Foundry. This article walks through the steps required to configure and run WordPress on Cloud Foundry. There is also a short video embedded towards the end. At a high level, here are the steps necessary to do this.

  1. Obtain an account with the Cloud Foundry provider of your choice
  2. Install the cf client on your PC
  3. Set up persistent storage for your WordPress assets
  4. Create a MySQL Service
  5. Configure WordPress
  6. Deploy to Cloud Foundry
  7. Optionally scale the application to meet your performance and redundancy requirements

When complete, you’ll have a setup with one or more instances of WordPress running on top of Cloud Foundry.  Each instance of WordPress will use a shared MySQL service and the shared file storage that you set up.  With multiple instances of WordPress running, Cloud Foundry will automatically balance the application instances across different DEA nodes for additional redundancy.  It will also distribute incoming requests across the different instances to help the application scale to handle greater load.

A high level diagram of this can be seen below.

[Diagram: multiple WordPress instances on Cloud Foundry sharing one MySQL service and one SSH file store]

Prerequisites: Obtaining an Account and the CF Client

There are three things that you’ll need to install WordPress on Cloud Foundry.  The first is an account with a Cloud Foundry provider.  This could be your own Cloud Foundry installation, a hosted offering like Pivotal Web Services, or even just Bosh Lite running on your PC.

The second thing is to have the cf client installed on your computer. If you don’t have this, you can download it here, and you can find documentation here to walk you through the installation process.

The third and final thing you will need is a safe place to store your files. This storage needs to be accessible remotely and it needs to be large enough to hold all your plugins, themes and media files.

For the purposes of this tutorial, we are going to store our files on an SSH server. I’ve opted for this solution for a few key reasons:

  1. Setting up an SSH server is quick and easy. It has low overhead and resource requirements and is supported on many different operating systems. In fact, you may even have one already available.
  2. Connections to an SSH server are encrypted, and have key validation to confirm that the client is actually talking to the right server. This is helpful to keep our data safe and prevent man-in-the-middle attacks.
  3. Cloud Foundry (since v183) supports FUSE and SSHFS out-of-the-box, so we can easily mount the server as a local filesystem.

These are just my reasons for choosing an SSH server. There are other valid options for file storage, like Amazon S3 or a WebDav server. You will just need to adjust the instructions in this article to work with the particular type of server that you choose.

Once you have created and set up your server, SSH or otherwise, proceed to the next section, where we’ll walk through the specific configuration necessary for our WordPress installation.

Server Configuration: Setting Up Persistent Storage

Once your server is running, there is only one additional task to ready it for this example: create a dedicated user to receive and store the WordPress files. A dedicated user limits access on that machine in the event the account is compromised.

After you have created the user, you can opt to create a subfolder where you will store the WordPress files.  However, this is not a requirement. You can simply put the files in the dedicated user’s home directory.

Application Setup: Configuring WordPress

Now that the server is set up, we can proceed to the setup of the application.  First, you will need to clone the WordPress project that I have already created. This will get you an out-of-the-box WordPress installation that has been adjusted to run on Cloud Foundry. This can be done by running the following two commands in your terminal.

git clone https://github.com/dmikusa-pivotal/cf-ex-wordpress.git
cd cf-ex-wordpress

After doing this, you might be thinking—hey, where are all the files?  After all, WordPress is a large project, and there’s only one PHP file, wp-config.php, in this project. This is actually OK. The rest of the WordPress files will be downloaded by the buildpack on the server when your application stages. This saves you the time, bandwidth and hassle of uploading those files every time you push your application to Cloud Foundry. It makes your deploys faster.

Next, we are going to configure WordPress. Like every standard WordPress install, you’ll need to edit wp-config.php and change the secret keys. For security reasons these should be unique for every installation.  If you need help generating a good, unique key you can use the WordPress.org secret-key service to generate a random set of keys for you.

That’s it for WordPress configuration. The only other change that would have been necessary was already made for you—grabbing the MySQL database connection information from the VCAP_SERVICES environment variable.  If you want to see what this change looks like, you can look at the wp-config.php file, or check out this link.

The next step to configure the application is to create our SSH keys. These will be used by SSHFS to connect to our SSH server and mount the remote directory so that it is available to WordPress. We will use the standard tools available on any Linux, Unix or Cygwin system to create the key. Here are the steps that should be run from the project directory.

   mkdir .ssh
   chmod 700 .ssh
   ssh-keygen -b 4096 -t rsa -N '' -f .ssh/sshfs_rsa
   ssh-keyscan -t rsa <ssh-server-ip> > .ssh/known_hosts

If you’re using Windows and do not have Cygwin installed, you can use Putty to generate the keys. The Putty documentation walks through this process here.

Now that we have the key generated, we just need to setup the SSH server to allow us to login with the key and not require a password.  This can be done by connecting to your SSH server and adding the contents of the .ssh/sshfs_rsa.pub file we just generated to the ~/.ssh/authorized_keys file for the user that you created on the SSH server.
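Concretely, that copy can be done from the project directory with the commands below. Here "wpfiles" and <ssh-server> are placeholders for the dedicated user and host you created earlier, and ssh-copy-id achieves the same result where it is available:

```shell
# Make sure the dedicated user's ~/.ssh directory exists with safe permissions.
ssh wpfiles@<ssh-server> 'mkdir -p ~/.ssh && chmod 700 ~/.ssh'

# Append our new public key to that user's authorized_keys on the server.
cat .ssh/sshfs_rsa.pub | ssh wpfiles@<ssh-server> \
  'cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
```

The strict permissions matter: sshd will silently refuse key authentication if ~/.ssh or authorized_keys is group- or world-writable.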

You can then confirm the setup is working correctly with the following command:

ssh -i .ssh/sshfs_rsa -o StrictHostKeyChecking=yes \
    -o UserKnownHostsFile=.ssh/known_hosts <user>@<ssh-server>

This command should execute without asking you for a password and without prompting you about the authenticity of your SSH server. This is important as SSHFS must be able to mount the remote file system without requiring any user interaction. If you are prompted for either, check the steps above, check that you’ve added the correct public key to the authorized_keys file on your SSH server and that your SSH server is configured to allow key based authentication. Once this is working, proceed onto the next section.

Deploying WordPress to Cloud Foundry

With the steps above complete, we’re now ready to deploy WordPress.  This involves using the cf command line utility to create our database service and push the application itself up to the server.

Before we do that, we need to make sure the client is set up correctly.  If this is the first time you’ve used the cf utility, you’ll need to complete these steps to point the tool at Cloud Foundry. First, log in and select the org and space where you’ll deploy WordPress.  These are the commands you need to run to accomplish this.

  cf api api.run.pivotal.io
  cf login
  cf target -o <your-org> -s <space-to-use>

This example assumes that you’re going to target Pivotal Web Services.  If you’re targeting a different Cloud Foundry installation, simply substitute the API link to your installation in the cf api command.

Once this is complete, or if you were already logged in, you can proceed to create your database service.  We’re going to use a MySQL database, and because our API endpoint is Pivotal Web Services, we’re going to use ClearDB as the provider.  To create the service, we’ll use this command.

  cf create-service cleardb spark mysql-db

This creates a very small, free-tier service. If you’re deploying a production or high-traffic site, you’ll likely want to start with one of the larger, paid service options from ClearDB or another provider.

With the service created, we are almost ready to deploy. The only remaining task prior to deploying is to inspect and customize the manifest.yml file. To do this, simply open the file in your favorite text editor and adjust as needed.

The section that you will need to complete is the SSHFS configuration at the bottom, which is done through environment variables.  Fill it in with the details of your SSH server.  See the default values for guidance on what is needed.

Also, you can optionally adjust the Cloud Foundry configuration in this file, which comes with sensible defaults. Some of the options you might want to change include the application name (default mywordpress), hostname (default random), memory allocation (default 128M) or the name of the MySQL service (default mysql-db).
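For orientation, the file you are editing looks roughly like the sketch below. The values are illustrative, and the SSHFS environment variable names are assumptions on my part, so keep whatever names and defaults the project’s own manifest already uses:

```yaml
applications:
- name: mywordpress        # application name (default)
  memory: 128M             # memory allocation (default)
  host: my-blog            # or leave out for a random hostname
  services:
  - mysql-db               # must match the name given to cf create-service
  env:                     # SSHFS connection details (names are placeholders)
    SSHFS_HOST: files.example.com
    SSHFS_USER: wpfiles
    SSHFS_PATH: wordpress
```

The one hard requirement is that the service name under `services:` matches the instance you created, so the platform binds it and populates VCAP_SERVICES for the app.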

After you are satisfied with the options configured in the manifest file, simply run this command to deploy your application to Cloud Foundry.

cf push

That’s it.  After it runs, you’ll be able to access WordPress at the URL you configured.  If you’re unsure which URL is configured, check the last few lines of the output from cf push; it should contain the specific URL.

Here is a short video to explain:

While this ends our journey to deploy WordPress, it is really just a beginning for you. Now you’re ready to install some themes and plugins and begin customizing WordPress.

Summary

In this article, we’ve presented the step-by-step instructions for installing your very own WordPress instance on Cloud Foundry.  It offers a quick setup, easy scaling of the memory or number of instances you’re running with the cf scale command and an easy way to stay up-to-date with WordPress, PHP and all of the server software required to run WordPress.



PaaS

Reach High Availability with a Multiple Cloud Deployment

3 Dec, 2014  

This article is written by guest author Eugene Olshenbaum. Eugene is the Head of Media Platform at Wix, a cloud-based web development platform that makes it easy for everyone to create beautiful websites.

While some people are still debating whether to use a cloud service, we at Wix are debating how many to use. The more services we use, the more assurance we have that we can handle any failures. To help ensure business continuity by freeing developers from the constraints of a single provider, multi-cloud environments are becoming the next evolution in cloud platform architecture.

Dimensional Research recently interviewed 659 IT decision makers with cloud responsibilities in Australia, Brazil, Canada, Germany, the UK, US, and Singapore, and 77% of respondents said they either already have or plan to implement a multi-cloud infrastructure in the coming year. Only 8% are not planning to do so.

As a result of this growing trend, we thought it was time to revisit a recent blog post describing Wix’s disaster recovery strategy, as well as discuss our multi-cloud implementation at Wix.

At Wix.com, we provide a cloud-based web development platform that allows users to create HTML5 websites and mobile sites through the use of our online drag-and-drop tools. Wix Media Platform is one of the most important pieces of our infrastructure, supporting the 55 million websites running on Wix.com.

While providing tools for building functional websites like an eCommerce shop, hotel, or restaurant, we quickly realized that our customers care about only one thing: they want their site to always be online. And because we know that things fail no matter what, using multiple cloud providers is our solution to:

  1. Achieve at least Five 9s uptime
  2. Stay on top of the competition
  3. Eliminate the risks associated with the business continuity of the infrastructure provider, as well as risks related to electricity suppliers, networking providers, and other “data center” issues (since each cloud provider will usually operate separately).

Wix Media Platform High-Level Architecture
The new multi-cloud configuration of Wix Media Platform’s system layout provides active/active, strongly consistent setup on:

  • Google Cloud Platform (primary)
  • Amazon Web Services
  • Wix-managed data centers

These locations are logical in terms of operation. If one of them fails, traffic is re-routed to a healthy location. Instead of focusing on how to extend availability within the boundaries of one cloud provider, we’ve been concentrating on how to failover at the highest possible level, which is the user’s web browser.

Wix’s platform relies on several subsystems, each of which provides its own service-level agreement (SLA). One of the key design guidelines is to keep each subsystem fully backed up by its independent equivalent on another location.

The Challenge
We want to provide close to 100% uptime for data serving while protecting users’ data against loss. We originally ran our service in one managed hosting environment. To improve data disaster recovery, we added a second one, running both services in active/active mode. Later, we added a third data center to run our services in 3x active/active mode.

As we explained in our previous blog post, we learned that maintaining three cross-data-center replicas was much more complex than managing two, especially with the data centers owned by different ISPs for ISP redundancy. One of the challenges in 3x active/active mode was database replication. To replicate across three data centers we had to configure our MySQL in a ring topology. The ring would break when one data center went down for a long time or failed completely.
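To see why the ring was fragile, consider a minimal model (our illustration, not Wix's actual configuration): in a three-data-center ring, each node replicates only to the next, so a single prolonged outage severs the replication path between its neighbors.

```python
# Minimal model of a 3-DC MySQL replication ring: A -> B -> C -> A.
# A node's writes reach another node only by traversing the ring.
RING = {"A": "B", "B": "C", "C": "A"}

def reachable(src, dst, down=frozenset()):
    """Can changes from src reach dst along the ring, avoiding down nodes?"""
    node = src
    while True:
        node = RING[node]
        if node in down:
            return False   # the chain is broken at a failed node
        if node == dst:
            return True
        if node == src:
            return False   # completed the loop without reaching dst

# Healthy ring: every pair stays in sync.
assert reachable("A", "C")
# Data center B fails for a long time: A's writes can no longer reach C.
assert not reachable("A", "C", down={"B"})
```

With only two MySQL data centers replicating to each other, losing one node degrades capacity but never partitions the surviving replicas, which is part of why we fell back to 2x active/active.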

To address this, instead of implementing 3x active/active mode with our current infrastructure, we decided to run in 2x active/active mode, with the third replica running on an entirely different technology platform. The third replica also added protection against data poisoning (when a faulty piece of code unintentionally corrupts data and remains undetected for some time).

We decided to develop a fully functional, logical data center natively on Google Cloud Platform. After six months, in April 2013, we started to serve Wix media from Google Cloud Platform in monitored geographies, and by the end of 2013, 100% of production traffic was served from Google Cloud Platform. On Google Cloud Platform we also developed NORM (Not Only Replication Manager), a generic replication bus that keeps data in sync across all logical locations: Google Cloud Platform, Amazon Web Services, and Wix data centers.
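The post gives no implementation details for NORM, but the core idea of a generic replication bus can be sketched as follows (a hypothetical simplification under our own assumptions): every write is published once and fanned out to a per-location queue, so each location converges on the same data even if it was temporarily behind.

```python
from collections import deque

class ReplicationBus:
    """Toy replication bus: fan out each write to every registered location.

    Purely illustrative; NORM's real design is not described in the post.
    """

    def __init__(self, locations):
        self.queues = {loc: deque() for loc in locations}  # pending events
        self.stores = {loc: {} for loc in locations}       # applied state

    def publish(self, key, value):
        """Record one write and enqueue it for every location."""
        for queue in self.queues.values():
            queue.append((key, value))

    def drain(self, location):
        """Apply all pending replication events at one location, in order."""
        store, queue = self.stores[location], self.queues[location]
        while queue:
            key, value = queue.popleft()
            store[key] = value

bus = ReplicationBus(["gcp", "aws", "wix-dc"])
bus.publish("media/123", "v1")
bus.drain("gcp")            # the primary applies immediately
bus.publish("media/123", "v2")
for loc in bus.queues:      # lagging locations catch up later
    bus.drain(loc)
assert all(s["media/123"] == "v2" for s in bus.stores.values())
```

Because events are queued per location and applied in order, a location that was down simply drains its backlog when it returns, rather than breaking replication for its peers.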

Conclusion
As the leading cloud-based web development platform in the world, we have been paying very close attention to the string of recent cloud outages. Each minute of downtime is money our clients lose, so implementing a multi-cloud infrastructure to mitigate the risks associated with failures was a natural decision for us.

We believe the advantages of utilizing multiple cloud platforms heavily outweigh the challenges. Over time, we learned that the benefits go beyond extended capabilities, lower costs, and improved performance.
Operations are far less stressful, and sleepless nights and crisis chat rooms are now a thing of the past. In most cases we simply switch traffic to a functional system and investigate the failure afterwards. With this new implementation, our team can rest easier and still provide an exceptional customer experience.

– Posted by Eugene Olshenbaum, Director of Media Platform at Wix

The post Reach High Availability with a Multiple Cloud Deployment appeared first on Platform as a Service Magazine.
