Amazon ElastiCache Launches Enhanced Redis Backup and Restore with Cluster Resizing

16 Mar, 2017

We are excited to announce that Amazon ElastiCache now supports enhanced Redis Backup and Restore with Cluster Resizing. In October 2016, we launched support for Redis Cluster with Redis 3.2.4, which lets you scale your Redis workloads across up to 15 shards holding up to 3.5TiB of data and also allows creating cluster-level backups, which contain snapshots of each of the cluster’s shards. With this launch, we are adding the capability to restore a backup into a Redis Cluster with a different number of shards and a different slot distribution, allowing you to resize your Redis workload. ElastiCache will parse the Redis key space across the backup’s individual snapshots and redistribute the keys in the new cluster according to the requested number of shards and hash slots. Your new cluster can be either larger or smaller than the original, as long as the data fits in the selected configuration.
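This redistribution works because every Redis Cluster key maps deterministically to one of 16,384 hash slots (the CRC16 of the key, modulo 16384), so slot ranges can be reassigned to any number of shards. A minimal sketch of the slot calculation, using the CRC16-XMODEM variant specified in the Redis Cluster spec (hash-tag handling for keys containing `{...}` is omitted for brevity):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    """Map a key to one of the 16,384 Redis Cluster hash slots."""
    return crc16_xmodem(key) % 16384

# Reference value from the Redis Cluster specification:
# CRC16("123456789") == 0x31C3, i.e. slot 12739
print(hash_slot(b"123456789"))  # 12739
```

Because the slot for a given key never changes, ElastiCache can replay the keys from each shard's snapshot into whichever shard owns that slot in the new topology.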



AWS Key Management Service now available in the AWS GovCloud (US) region

7 May, 2015

The AWS Key Management Service (KMS) is now available in the AWS GovCloud (US) region. KMS is a service that makes it easy for you to create and control the encryption keys used to encrypt your data and uses Hardware Security Modules (HSMs) to protect the security of your keys. This capability is a critical requirement for running regulated workloads in the cloud.

With the availability of KMS, you are now able to encrypt data in your own applications and within the following AWS services using keys under your control:

  • Amazon EBS volumes
  • Amazon S3 objects using Server Side Encryption (SSE-KMS) and client-side encryption using the S3 encryption client for the AWS SDKs
  • Output from your Amazon EMR cluster to Amazon S3 using the EMRFS client
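For example, encrypting an S3 object with a key under your control comes down to two extra request parameters on the upload. A sketch of the parameters as boto3's `put_object` would accept them (the bucket name, object key, and KMS key ARN below are placeholders, not real resources):

```python
# Parameters for an S3 upload encrypted with a customer-managed KMS key.
# Bucket, object key, and key ARN are illustrative placeholders.
put_object_params = {
    "Bucket": "my-regulated-data",
    "Key": "reports/2015-q2.csv",
    "Body": b"col1,col2\n1,2\n",
    "ServerSideEncryption": "aws:kms",  # request SSE-KMS
    "SSEKMSKeyId": "arn:aws-us-gov:kms:us-gov-west-1:123456789012:key/EXAMPLE",
}

# With boto3 this would be roughly:
#   import boto3
#   s3 = boto3.client("s3", region_name="us-gov-west-1")
#   s3.put_object(**put_object_params)
```

If `SSEKMSKeyId` is omitted, S3 falls back to the account's default KMS key for S3.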

In addition, AWS KMS is integrated with AWS CloudTrail to provide you with centralized logging of all key usage to help meet your regulatory and compliance needs.

AWS GovCloud (US) is an AWS region designed to allow U.S. government agencies at the federal, state and local level, along with contractors, educational institutions, enterprises and other U.S. customers, to run regulated workloads in the cloud by addressing their specific regulatory and compliance requirements. Beyond the assurance programs applicable to all AWS regions, the AWS GovCloud (US) region allows you to adhere to the U.S. International Traffic in Arms Regulations (ITAR), the Federal Risk and Authorization Management Program (FedRAMP) requirements and the Department of Defense (DoD) Cloud Security Model (CSM) Levels 3-5.

To get started in the AWS GovCloud (US) region, contact us today!



Amazon EC2 Container Service Available in Asia Pacific (Sydney)

27 Apr, 2015

Amazon EC2 Container Service (ECS) is now available in the Asia Pacific (Sydney) region.

Amazon EC2 Container Service is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure. With simple API calls, you can launch and stop container-enabled applications, query the complete state of your cluster, and access many familiar features like security groups, Elastic Load Balancing, EBS volumes, and IAM roles. You can use Amazon ECS to schedule the placement of containers across your cluster based on your resource needs and availability requirements. You can also integrate your own scheduler or third-party schedulers to meet business or application specific requirements.
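Those API calls center on a task definition that describes your containers. A sketch of what a minimal one might look like, expressed as the parameters boto3 would send (the family name, image, and sizes are illustrative, not from the announcement):

```python
# Minimal ECS task definition for a single Docker container.
# Family name, image, and resource sizes are illustrative.
task_definition = {
    "family": "web-app",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "cpu": 256,      # CPU units (1024 = one vCPU)
            "memory": 128,   # hard memory limit in MiB
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
            "essential": True,
        }
    ],
}

# With boto3 this would be roughly:
#   import boto3
#   ecs = boto3.client("ecs", region_name="ap-southeast-2")
#   ecs.register_task_definition(**task_definition)
#   ecs.run_task(cluster="default", taskDefinition="web-app", count=1)
```

Registering the definition once and then calling `run_task` is what "launch and stop container-enabled applications with simple API calls" amounts to in practice.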

Amazon EC2 Container Service is currently available in the US East (N. Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Tokyo), and Asia Pacific (Sydney) regions. Please visit our product page for more information.



Docker Image for CumuLogic DBaaS for private clouds

18 Mar, 2015

We just released a Docker image for running DBaaS for private/hybrid clouds. The image is available on Docker Hub and can be downloaded by running:

  docker pull cumulogic/cumulogic:v1.2

What’s included in this image?

  • This is a demo version of the CumuLogic controller.
  • It is a full-featured build of the product and includes most of the functionality supported in the current release.
  • Supported database engines are MySQL, Percona Server and MongoDB 2.x.
  • Integrates with the Amazon cloud and with OpenStack and CloudStack private clouds.
  • Supports provisioning database instances on a pool of provisioned virtual machines or bare-metal servers.
  • Supports CentOS 6.x and RHEL.
  • MySQL instances support:
    • Percona Cluster
    • MySQL Single node instances with capability to launch read-only replica sets on multiple zones
    • Multi-AZ instances – provisions cluster nodes across multiple zones
    • Capability to add/remove nodes and replica nodes as desired without database interruption
    • Full monitoring of system metrics as well as MySQL metrics
    • Automated failover recovery to handle failures on disk, network, VMs and zones
    • Automated snapshots and manual backups
    • Notifications on events of critical importance
  • MongoDB instances support:
    • Single node MongoDB instances
    • Replicasets
    • Auto provisioning of arbiter nodes based on the topology requested
    • Sharding with or without replica sets
    • Capability to add/remove replicasets
    • Capability to add additional shards
    • Automated failure recovery of replica nodes and shards
    • Automated full backups and snapshots
    • Monitoring and performance metrics

More details are available at

Drop us a line at support [AT] cumulogic [DOT] com for any questions or help with running this image.

The post Docker Image for CumuLogic DBaaS for private clouds appeared first on Platform as a Service Magazine.



Kubernetes comes to OpenStack this time thanks to Mirantis

24 Feb , 2015  

For businesses wanting to run the Kubernetes cluster management framework for containers on OpenStack clouds, Google and Mirantis have teamed up to make that happen more easily. The OpenStack Murano application catalog technology promises to ease deployment of Kubernetes clusters on OpenStack and then deploy Docker containers on those clusters. Murano […]

Kubernetes comes to OpenStack this time thanks to Mirantis originally published by Gigaom, © copyright 2015.




Meet Myriad, a new project for running Hadoop on Mesos

11 Feb , 2015  

Hadoop vendor MapR and data center automation startup Mesosphere have created an open source technology called Myriad, which is supposed to make it easier to run Hadoop workloads on top of the popular Mesos cluster-management software. More specifically, Myriad allows the YARN resource scheduler — the linchpin […]

Meet Myriad, a new project for running Hadoop on Mesos originally published by Gigaom, © copyright 2015.




Why Services are Essential to Your Platform as a Service

27 Jan, 2015

For most organizations, there is a constant battle between the need to rapidly develop and deploy software while effectively managing the environment and deployment process. As a developer, you struggle with the ability to move new applications to production, and regular provisioning of support services can take weeks, if not months. IT operations, on the other hand, is balancing the backlog of new services requests with the need to keep all the existing (growing) services up and patched. Each side is challenged with meeting the needs of an ever-changing business.

What are Services?

A service is defined as “an act of helpful activity; help, aid.” A Service should make your life easier. Pivotal believes that Platform as a Service (PaaS) should make administrators’ and developers’ lives easier, not harder. Services available through the Pivotal Cloud Foundry platform allow resources to be easily provisioned on demand. These services are typically middleware, frameworks, and other “components” used by developers when creating their applications.

Services extend a PaaS to become a flexible platform for all types of applications. Services can be as unique as an organization or an application requires. They can bind applications to databases or allow the integration of continuous delivery tools into a platform. Services, especially user-provided services, can also wrap other applications, like a company’s ERP back-end or a package tracking API. The accessibility of Services through a single platform ensures developers and IT operators can truly be agile.

Extensibility Through a Services-Enabled Platform

The availability of Services within the platform is one of the most powerful and extensible features of Pivotal Cloud Foundry. A broad ecosystem of software can run and be managed from within the Pivotal Cloud Foundry platform, and this ensures that enterprises get the functionality they need.

Robust functionality from a single source reduces the time spent on configuration and monitoring. It also has the added benefit of improving scalability and time-to-production. Services allow administrators to provide pre-defined database and middleware services, and this gives developers the ability to rapidly deploy a software product from a menu of options without the typical slow and manual provisioning process. This is also done in a consistent and supportable way.

Managed Services Ensure Simple Provisioning

One of the features that sets Pivotal Cloud Foundry apart from other platforms is the extent of the integration of Managed Services. These Services are managed and operated ‘as a Service,’ and this means they are automatically configured upon request. The provisioning process also incorporates full lifecycle management support, like software updates and patching.

Automation removes the overhead from developers, who are often saddled with service configuration responsibility. It makes administrators’ lives easier and addresses security risks by standardizing how services are configured and used—no more one-off weirdness in configuration. The result is true self-provisioning.

A few of the Pivotal Services, like RabbitMQ, are provided in a highly available capacity. This means that when the Service is provisioned it is automatically clustered to provide high availability. This relieves much of the administrative overhead of deploying and managing database and middleware Services, as well as the significant effort of correctly configuring a cluster.

Broad Accessibility With User-Provided Services

In addition to the integrated and Managed Services, Pivotal Cloud Foundry supports a broad range of User-Provided Services. User-Provided Services are services that are currently not available through the Services Marketplace, meaning the Services are managed externally from the platform, but are still accessible by the applications.

The User-Provided Services are completely accessible by the application, but are created outside of the platform. This Service extension enables database Services, like Oracle and DB2 Mainframe, to be easily bound to an application, guaranteeing access to all the Services needed by applications.

Flexible Integration Model

Access to all services, both managed and user-provided, is handled via the Service Broker API within the Pivotal Cloud Foundry platform. This module provides a flexible, RESTful API and allows service authors (those that create the services) to provide self-provisioning services to developers.

The Service Broker is not opinionated. It can be crafted to suit the unique needs of the environment and organization. The Service Broker functionality ensures the extensibility of the platform and also allows administrators to create a framework that developers can operate within, supporting agile deployments. This framework provides consistency and reproducibility within the platform. It also has the added benefit of limiting code changes required by applications as they move through the development lifecycle.
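Concretely, a service broker is just an HTTP service implementing a small REST contract; the first endpoint Cloud Foundry calls is `GET /v2/catalog`, which advertises the services and plans the broker offers. A sketch of such a catalog payload (the service name, plan, and GUIDs are made up for illustration):

```python
import json

# A hypothetical catalog a custom service broker might return
# from GET /v2/catalog. Names and GUIDs are illustrative only.
catalog = {
    "services": [
        {
            "id": "b7f5a3c2-1111-4222-8333-444455556666",
            "name": "corp-oracle",
            "description": "Shared Oracle database schemas",
            "bindable": True,
            "plans": [
                {
                    "id": "c8a6b4d3-7777-4888-9999-000011112222",
                    "name": "small",
                    "description": "10 GB schema on a shared server",
                }
            ],
        }
    ]
}

# A broker serializes this as the JSON body of the /v2/catalog response.
catalog_json = json.dumps(catalog)
```

Provisioning and binding then map to `PUT` requests against `/v2/service_instances/:id` and its `/service_bindings/:id` sub-resource, which is where the network and security customization described below would hook in.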

As an example of the customization capabilities, a customer created a Service Broker that not only adjusts the network topology when an application is deployed to an environment, it also adjusts the security attributes. An application could have fairly open access to an organization’s raw market material, but access to a core billing system would be limited and privileged.

Security Integrated into Each Service

The Service Broker gives administrators the ability to define access control of services. Service-level access control ensures developers and applications only have access to the environment and service necessary. When a Service is fully managed, credentials are encapsulated in the Service Broker. The result is that passwords no longer need to be passed across different teams and resources, but instead are managed by a single administrator.

Finally, the Service Broker provides full auditing of all services. Auditing simply keeps track of what Services have been created or changed, all through the API. This type of audit trail is great if you are an administrator and trying to figure out who last made changes to a critical database service.

Self-Service for the Masses

Managed Services are available for download from Pivotal Network, and are added by an administrator to the platform. All Services available within the platform are accessible to developers via the Marketplace. The Marketplace allows self-provisioning of software as it is needed by developers and applications.

Services from Pivotal like RabbitMQ, Redis, and Pivotal Tracker, as well as popular third-party software, like Jenkins Enterprise by CloudBees and DataStax Enterprise Cassandra, are available immediately. The Marketplace provides a complete self-service catalog, speeding up the development cycle.


Services View in Pivotal Network

The breadth and availability of Services ensures that operators provide development teams access to the resources that they need, when they need them. A developer, who is writing a new application that requires a MySQL database, can easily select and provision MySQL from the Marketplace. The platform then creates the unique credentials for the database and applies those to the application.
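On Cloud Foundry, those generated credentials reach the application through the `VCAP_SERVICES` environment variable as JSON. A sketch of how an application might extract the MySQL URI (the service label and credential values below are illustrative; actual labels vary by installation):

```python
import json
import os

# Example VCAP_SERVICES payload as Cloud Foundry would inject it.
# Service label and credentials here are illustrative.
os.environ["VCAP_SERVICES"] = json.dumps({
    "p-mysql": [
        {
            "name": "my-app-db",
            "credentials": {
                "uri": "mysql://user:secret@10.0.0.5:3306/appdb",
                "hostname": "10.0.0.5",
                "port": 3306,
            },
        }
    ]
})

def mysql_uri(service_label: str = "p-mysql") -> str:
    """Return the URI of the first bound instance of the given service."""
    services = json.loads(os.environ["VCAP_SERVICES"])
    return services[service_label][0]["credentials"]["uri"]

print(mysql_uri())  # mysql://user:secret@10.0.0.5:3306/appdb
```

Because the platform injects the credentials at bind time, the application code never hard-codes a password, which is what makes the self-service flow safe to hand to developers.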

Rapid Time to Market with Mobile Services

The expansive Services catalog extends to the Pivotal Mobile Services, announced in August 2014. These mobile backend services allow organizations to rapidly develop and deploy mobile applications. The accessibility of mobile backend services through the Pivotal Cloud Foundry platform ensures that developers are able to easily build new mobile applications leveraging capabilities such as push notifications and data sync.

Essentials of a Services-Enabled Platform

Developers want to quickly deploy a database or middleware service, without a slow and manual provisioning process. IT operators want to be able to quickly meet the growing requests for new services, while also securely managing a complex environment. Provisioning Services through a PaaS is the natural solution to balancing the needs of the developers and IT operators, all while meeting the needs of the business.

A PaaS should provide simple access to full lifecycle management for services—from click-through provisioning to patch management and high availability. At Pivotal we have seen tremendous success with the release of services based on CloudBees and DataStax. The Pivotal Services ecosystem continues to grow, as does the growing capabilities of the Service Broker. This growth ensures the Pivotal Cloud Foundry platform will continue to meet the needs of organizations.



Jenkins Operations and Continuous Delivery @Scale with CloudBees Jenkins Enterprise

10 Dec, 2014

Continuous Delivery @Scale 

Lately, there has been a tremendous buzz in the Jenkins open source community about the release of the Workflow feature. The Workflow feature enables organizations to build complex, enterprise-ready continuous delivery pipelines. I am particularly excited as native support for pipelines in Jenkins was one of the most common requests I have encountered from enterprise (and small) users. You can read about the OSS Workflow GA announcement here.

I am pleased to announce that CloudBees is delivering additional features built on top of OSS Jenkins that help enterprises use the Workflow features to implement continuous delivery pipelines.

The Workflow Stage View feature helps teams visualise the flow and performance of their pipelines. So, for example, a manager can look at a pipeline and easily drill into the performance of a particular stage or a developer can look at the pipeline and see how far in the pipelines their commits have traversed.

The Checkpoint feature enables recovery from both infrastructure and Jenkins failures. In the event of a failure, the pipeline can be restarted from any of the previous successful checkpoints, instead of from the beginning. This is extremely valuable for long-running builds that may take hours or days to run. 

Jenkins Workflow is a technological leap to help organizations build out continuous delivery pipelines and I urge you to check it out. 

Jenkins Operations @Scale 
Late last year, we announced Jenkins Operations Center by CloudBees – a game changer in the world of Jenkins. It acts as the operations hub for multiple Jenkins masters in an organization, letting them easily share resources such as build slaves and security configuration.

I am happy to announce the release of a new version of Jenkins Operations Center by CloudBees – version 1.6. This release has two significant features: 

Cluster Operations simplifies the management of Jenkins by allowing one operational command to act simultaneously on a group of Jenkins masters, rather than administering individual masters. Cluster Operations includes actions such as starting/restarting client masters, installing and upgrading plugins, and upgrading the Jenkins core.

CloudBees Jenkins Analytics provides operational insight into Jenkins performance. Performance aspects include Jenkins-specific build and executor performance across a cluster of masters and standard JVM-based performance metrics. CloudBees Jenkins Analytics also makes the monitoring of multiple masters easier by adding a number of new graphs that show performance for build queues and that persist views across master restarts. The feature also includes new visualization graphs built with Kibana and Elasticsearch, delivering the ability to quickly drill down into the performance of individual clients.

Performance Analytics

Build Analytics

Join our webinar on 10 December 2014 about Workflow, and one on Jenkins Operations @Scale on 17 December, to learn more. 

More information: 
Product pages
Jenkins Operations Center Documentation 
Jenkins Enterprise by CloudBees Documentation
Talk to sales for Jenkins Enterprise by CloudBees, Jenkins Operations Center.

– Harpreet Singh
vice president, product management




Aerospike Hits One Million Writes Per Second with just 50 Nodes on Google Compute Engine

4 Dec, 2014

Today’s guest blogger is Sunil Sayyaparaju, Director of Product & Technology at Aerospike, the open source, flash-optimized, in-memory NoSQL database.

What exactly is the speed of Google? We at Aerospike take pride in meeting our challenges of high throughput, consistently low latency, and real time processing that we know will be characteristic of tomorrow’s cloud applications. That’s why after we saw Ivan Santa Maria Filho, Performance Engineering Lead at Google, demonstrate 1 Million Writes Per Second with Cassandra on Google Compute Engine, our team at Aerospike decided to benchmark our product’s performance on Google Compute Engine and push the boundaries of Google’s speed.

And guess what we found out: Aerospike scaled on Google Compute Engine with consistently low latency, required smaller clusters, and was simpler to operate. The combined Aerospike-Google Cloud Platform solution could fuel an entirely new category of applications that must process data in real time and at scale from the very start, enabling a new class of startups with business models that were previously not economically viable.

Our benchmark used the same setup as the Cassandra benchmark: 100 million records at 200 bytes each, Debian 7 backports, servers on n1-standard-8 instances with 500GB non-SSD persistent disks at $0.504/hr, clients on n1-highcpu-8 instances at $0.32/hr, and followed these steps. In addition to pure write performance, we also documented pure read and mixed read/write performance. Our findings:

  • High Throughput for both Reads and Writes
    • 1 Million Writes per Second with just 50 Aerospike servers
    • 1 Million Reads per Second with just 10 Aerospike servers
  • Consistent low latency, no jitter for both Reads and Writes
    • 7ms median latency for Writes with 83% of writes < 16ms and 96% < 32ms
    • 1ms median latency for Reads with 80% of reads < 4ms and 96.5% < 16ms
    • Note that latencies are measured on the server (latencies on the client will be higher)
  • Unmatched Price / Performance for both Reads and Writes
    • 1 Million Writes Per Second for just $41.20/hour
    • 1 Million Reads Per Second for just $11.44/hour

Aerospike is used as a front edge operational database for a variety of purposes: a session or user context store for real-time bidding, personalization, fraud detection, and real-time analytics. These applications must read and write billions of keys and terabytes, from click-streams to sensor data. Data in Aerospike is replicated synchronously in-memory to ensure immediate consistency and written to disk asynchronously.

Here are details on our experiments with Aerospike on Google Compute Engine. Using 10 server nodes and 20 client nodes, we first examined throughput for a variety of different read and write ratios and documented latency results for those same workloads. Then, we documented how throughput scaled with cluster size, as we pushed a 100% read load and a 100% write load onto Aerospike clusters, going from 2 nodes to 10.

High Throughput at different Read / Write ratios (10 server nodes, 20 client nodes)

For all read/write ratios, 80% of the TPS in this graph is achieved with 50% of the clients (10); adding more clients improves throughput only marginally.

Disk IOPS on Google Compute Engine depend on volume size. We used 500GB non-SSD persistent disks to ensure high IOPS, so the disk is not the bottleneck; we did not need SSD persistent disks to achieve this performance. For larger clusters, 500GB is a large over-allocation and can be reduced to lower costs.

Consistent Low Latency for different Read / Write ratios (10 server nodes, 20 client nodes)

For a 100% read load, only ~20% of reads took more than 4ms and ~3.5% reads took more than 16ms. This is because reads in Aerospike are only 1 hop (network round trip) away from the client, while writes take 2 network-roundtrips for synchronous replication. Even with a 100% write load, only 16% of writes took more than 32ms. We ran Aerospike on 7 of 8 cores on each server node. Leaving 1 core idle helped latency; if all cores were busy, network latencies increased. Latencies are as measured on the server.

Linear Scalability for both Reads and Writes

This graph shows linear scalability for 100% reads and 100% writes, but you can expect linear scaling for mixed workloads too. For reads, a 2:1 client:server ratio was used, i.e., for a 6-node cluster we used 12 client machines to saturate the cluster. For writes, a 1:1 client:server ratio was enough because of the lower throughput of writes.

A new generation of applications with mixed read/write data access patterns sense and respond to what users do on websites and on mobile apps across the Internet. These applications perform data writes with every click and swipe, make decisions, record, and respond in real-time.

Aerospike running on Google Compute Engine showcases an example application that requires very high throughput and consistently low latency for both reads and writes. Aerospike processes 1 Million writes per second with just 50 servers, a new level of price and performance for us. You too can follow these steps to see the results for yourself and maybe even challenge us.

– Posted by Sunil Sayyaparaju, Director of Product & Technology at Aerospike




In case you missed it in November: Google Cloud Platform Live unveils new products, and Google Compute Engine welcomes features

26 Nov, 2014

November has gone by like this:

Okay, maybe not that fast, but let’s rewind a bit and see what we’ve gotten up to this month.

Google Cloud Platform Live introduces Container Engine, Cloud Networking, Cloud Debugger, and more
What do you get when tens of thousands of developers from around the world join in for one event? A whirlwind. On November 4, Google Cloud Platform Live featured keynotes, highlights from our customers, sessions covering topics from mobile apps to cloud computing, and some fiery moments (literally). We also announced new features that are now available, including:

  • Google Container Engine

The alpha release of Google Container Engine lets you move from managing application components running on individual virtual machines to launching portable Docker containers that are scheduled into a managed compute cluster for you. Want to learn more? Check out this post.

  • Google Cloud Networking

We’ve been investing in networking for over a decade at Google to ensure that you always get the best experience. With Google Cloud Platform, we’re focused on bringing our customers that same scale, performance and capability. So with Google Cloud Interconnect, it’s now easier for you to connect your network to us.

  • Firebase = building faster mobile apps

During the event, we showed how Firebase helps you build mobile apps quickly and also gave a sneak preview of some new features the team has been hard at work on. Read more on what Firebase and Google Cloud Platform are getting up to together here.

  • Cloud Debugger

Cloud Debugger is available in beta and makes it easier to troubleshoot applications in production. Now you can simply pick a line of code, set a watchpoint, and the debugger will return local variables and a full stack trace from the next request that executes that line on any replica of your service. There is zero setup time, no complex configuration, and no performance impact noticeable to your users. Ready to get started? Check out more info here and try it for yourself.

To get a full recap of the event, check out our detailed blog post covering all our announcements. Or re-live the event with the keynote, session videos, or a live-tweet of the whole day.

Curated Ubuntu Images now available on Google Cloud Platform
Our customers spoke and we listened. Google Cloud Platform announced this month that we are now offering Ubuntu 14.04 LTS, 12.04 LTS and 14.10 guest images in beta – at no additional charge. We’d love to have you take these images for a trial run. Try it here and let us know what you think.

Compute Engine welcomes Autoscaling and Click to Deploy Percona XtraDB Cluster
This month we introduced two highly anticipated features to Compute Engine: Autoscaling and Click to Deploy Percona XtraDB Cluster. Autoscaling allows customers to build more cost-effective and resilient applications. Click to Deploy Percona XtraDB Cluster lets you launch a preconfigured, ready-to-use cluster in just a few clicks.

Free IPv6 address with Cloud SQL instances
It’s no secret that the Internet is quickly running out of IPv4 address space. Google has been at the forefront of IPv6 adoption — the newer, larger address space — and we are now assigning an immutable IPv6 address to each and every Cloud SQL instance, and making these addresses available for free. Full details can be found in this post.

AffiniTech and Framestore: Saving money and creating great customer experiences through Google Cloud Platform
We heard first hand from our customers AffiniTech and Framestore on how they use our products. AffiniTech helps their customers make data-driven decisions, while Framestore creates unforgettable visual effects (think Gravity). Both companies have seen incredible impacts to their bottom line and with their customers. But don’t just take our word for it; hear it directly from AffiniTech and Framestore in their blog posts.

November, and this year, has given us a lot to be thankful for, so we want to say thank you to all of you who’ve helped us hack, hustle, and go on this journey. To all of you outside the US, thank you for your support, and to those stateside, have a great Thanksgiving.

-Posted by Charlene Lee, Product Marketing Manager

