Building Your Own Cloud With SUSE OpenStack and Stackato Cloud Foundry

29 Jun, 2015

In a recent online meetup, “Overcoming the Challenges of Microservices with PaaS on OpenStack,” John Wetherill from ActiveState, along with Frank Rego, Pete Chadwick and Cameron Seader from SUSE, showed how SUSE and Stackato together can help users achieve quicker time to value for an OpenStack-based Platform-as-a-Service (PaaS) deployment.




Stackato on the Microsoft Azure Cloud

24 Jun, 2015

The growth of Microsoft Azure has been outstanding: more than 90,000 new subscriptions every month. And the pace of innovation is striking, with over 500 new features and services added to the platform in the last 12 months. We’re very excited to be part of this growth.




Stackato: By the Numbers

18 Jun, 2015

I have been involved with Stackato for a little over a year now. When I first used Stackato it was already a pleasant experience: simple to set up, with many complex but easy-to-use features.



Stackato Joins the Cisco Intercloud Marketplace

10 Jun, 2015

Cloud marketplaces are an increasingly important way for cloud hosting providers and vendors to help their customers navigate the multitude of cloud services by making them available in a central location. While enterprise IT wants choice, it also wants simplicity, and marketplaces give it the best of both worlds.




Sponsored post: What customers want in 2015: Cloud Foundry, OpenStack, Docker and AWS

9 Feb, 2015

When it comes to cloud technologies, vendors can get carried away with adding cool features to their products. But if those features don’t provide real business value to the customer, they are useless. Based on discussions with our customers – innovative companies using Stackato Platform-as-a-Service (PaaS) […]

Sponsored post: What customers want in 2015: Cloud Foundry, OpenStack, Docker and AWS originally published by Gigaom, © copyright 2015.




The Cloud Foundry Jenkins Plugin

18 Nov, 2014

There’s a lot of interest in using Stackato in conjunction with continuous integration tools, and the most popular of those tools is Jenkins CI. Until now, the only ways to deploy your applications to Cloud Foundry or Stackato from a Jenkins build were to:




Stackato App Store: Then and Now

10 Nov, 2014

The Stackato App Store is an intuitive interface containing customizable links to popular apps for quick-access deployments. It’s been a while (Q2 2012!) since we last saw the Stackato App Store make an appearance on the ActiveState Blog.




Twelve-Factor App Rollback

31 Jul, 2014

Back To The Previous Version!
For those not familiar, “Twelve-Factor Apps” were defined by Heroku in a beautifully succinct online document describing the twelve attributes of modern, scalable cloud applications. Such an application is ephemeral, works well with siblings, and is decoupled in such a way that logging and configuration can be easily managed externally. If you are designing applications for the cloud, or writing any software for that matter, I thoroughly recommend reading The Twelve-Factor App document – it will take you about 15 minutes.

Where The Twelve-Factor App falls short, and in fairness is probably out of the scope of the document, is with upgrading applications. You might say that The Twelve-Factor App is a guide for Developers and that this is an Ops concern – and you would be 50% right. Upgrading is such a critical issue that it is not to be left to the domain of just Developers or just Operations engineers. This is a mission for DevOps!

What The Twelve-Factor App does well is treating backing services as attached resources. This makes the application runtime package-able and easily manageable. This is the cornerstone of PaaS and what makes working with Docker at scale feasible.

What are the key concerns of upgrading?

1) General feature breakage.

2) Unhappy end-users due to UI and feature changes.

3) Database upgrades or data migration.

4) Application down-time.

Points 1 and 2 are runtime concerns, usually.

Point 3 is related to attached resources, or Services in the world of Cloud Foundry.

Point 4 is a general issue with deploying applications to Cloud Foundry-based solutions. This is due to the way applications are taken offline first and then restaged. There are workarounds to this, such as deploying multiple versions of the same application under different application names and performing URL switching, but so far all solutions are external and onerous on the developer who is deploying the application.

A strong focus on The Twelve-Factor App enables us to solve upgrade concerns 1 and 2 quite easily. Since our application runtime is ephemeral, we can upgrade it by simply replacing it with a new version of the runtime. If there is a problem, we replace it with the previous version. Although quite a simple concept, we are only now starting to see it adopted. This is one of the killer features of Docker. Drop in, drop out, drop in something else.

For this to work, we have to assume that both versions will work with our attached resources. Database schemas must align with what the runtime expects. Dropping in a new version of the application runtime might be easy, but sometimes it also requires upgrading the database. Rolling back to a previous version of the application runtime might require the rolling back of that database.

I see data storage as the Achilles heel of PaaS. If applications did not have to store anything in databases then our life as application platform builders would be much simpler. But we do. I do not think the role of PaaS is to solve data storage. That task is for the external services that the application platform utilizes. But PaaS does need to solve how the application consumes those data services. We need to do this through best practices, while always supporting the legacy scenarios.

What are the best practices for working with data storage, in the context of application runtime roll-forward and rollback? We need to begin that discussion.

A common pattern, especially among Ruby on Rails developers, is versioned database migration scripts, which are run on deployment to bring the database schema up or down to match the deployed application code. This is a good solution and can be included in the application’s test suite.
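That migration pattern can be sketched in a few lines of Python. This is a toy illustration of versioned up/down migrations, not any particular framework’s API; the table names, SQL, and `schema_version` bookkeeping table are invented for the example.

```python
import sqlite3

# Ordered migrations: each version number pairs an "up" and a "down" script.
# The SQL and table names here are invented for illustration.
MIGRATIONS = {
    1: ("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
        "DROP TABLE users"),
    2: ("CREATE TABLE emails (user_id INTEGER, addr TEXT)",
        "DROP TABLE emails"),
}

def migrate(conn, target):
    """Run up or down migrations until the schema matches `target`."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    row = conn.execute("SELECT v FROM schema_version").fetchone()
    current = row[0] if row else 0
    while current < target:                      # roll the schema forward
        conn.execute(MIGRATIONS[current + 1][0])
        current += 1
    while current > target:                      # roll the schema back
        conn.execute(MIGRATIONS[current][1])
        current -= 1
    conn.execute("DELETE FROM schema_version")
    conn.execute("INSERT INTO schema_version VALUES (?)", (current,))
    conn.commit()
    return current
```

Because both directions are scripted, the same mechanism that upgrades the schema on deployment can also walk it back during a rollback, which is exactly what makes the pattern testable.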

In an ideal world this would simply be ensuring that your application runtime is always backwards-compatible with your data-store and your data-store is always backwards-compatible with your application runtime. Either could independently move forwards, or backwards, and this would provide a Utopian level of decoupling. Unfortunately, as any half-decent mathematician will tell you, as your version numbers move towards infinity, both your application runtime code and your database schema will be severely crippled in this futile effort to support forwards-backwards compatibility.

While we have not yet reached Utopia for PaaS, we are well past just giving people faster horses. ActiveState customers are killing it with our Cloud Foundry- and Docker-based PaaS solution, Stackato. With Stackato 3.4, released today, we are taking the next step in application upgrade and rollback.

Stackato now retains previous versions of the application runtime. If you push new application code and the result is not what you expected, you can roll back with a single click (or command, or API call). Even if you only changed an environment variable, you can roll back to the previous version. You can jump back several versions. Any change to your application runtime can be rolled back, and the Stackato administrator can configure how far back you can go. You can even roll forward again if you change your mind about the rollback.
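Conceptually, this is a bounded history of runtime versions that you can step backwards and forwards through. The toy Python class below models the idea; it is a sketch of the concept only, not Stackato’s actual implementation or API.

```python
from collections import deque

class VersionHistory:
    """Toy model of bounded rollback/roll-forward over runtime versions.
    A conceptual sketch -- not Stackato's implementation."""

    def __init__(self, max_depth=5):
        self.past = deque(maxlen=max_depth)  # admin-configured rollback depth
        self.future = []                     # versions undone by rollback
        self.current = None

    def push(self, version):
        """Deploy a new version; any rolled-back versions are discarded."""
        if self.current is not None:
            self.past.append(self.current)
        self.current = version
        self.future.clear()

    def rollback(self):
        """Step back one version, keeping the current one for roll-forward."""
        if not self.past:
            raise RuntimeError("no earlier version retained")
        self.future.append(self.current)
        self.current = self.past.pop()
        return self.current

    def rollforward(self):
        """Undo a rollback."""
        if not self.future:
            raise RuntimeError("nothing to roll forward to")
        self.past.append(self.current)
        self.current = self.future.pop()
        return self.current
```

The `deque` with `maxlen` captures the administrator-configured limit: once the history is full, the oldest retained version silently falls off the back.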

What I like about this Stackato feature is that it is a zero-downtime event. It phases over without skipping a beat.

This is both useful and powerful. Due to its power, I will give you the same advice a wise old man once gave Spider-Man – actually it was his uncle, Uncle Ben, to be specific: “With great power comes great responsibility.”

We are not expecting to solve all data storage migration and rollback scenarios during an application’s life-cycle. We do not restrict you from running anything in your runtime containers – though we might work with your Ops team to lock it down if they request it. We support both Twelve-Factor Apps and legacy applications. We support any Cloud Foundry service available in Stackato, in the wild, or that you can imagine and build yourself. Therefore, we cannot stop you from writing bad code, creating bad applications, bad database schemas, making bad choices of languages or bad choices when rolling your application runtime forwards and backwards. The end result is the same as if you had pushed an older version from the command-line or your IDE.

My general advice for a pain-free life of upgrading would be:

1) If you can make your application easily backwards compatible with older versions of your database schema, do it.

2) If you can make your database schema easily backwards compatible with older versions of your application, do it.

3) Version your schema, version your application and document compatibilities.

4) Generally, do not roll back after you have run incompatible database migrations.

5) If your application stores no state in an external data store, then this is a non-issue. Unfortunately, such applications are rare.
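Points 3 and 4 amount to keeping an explicit, documented compatibility record and consulting it before any rollback. A minimal sketch in Python, with invented version pairs:

```python
# Documented compatibility between app versions and schema versions
# (advice points 3 and 4). These pairs are invented for illustration.
COMPATIBLE = {
    ("app-1.0", "schema-1"),
    ("app-1.1", "schema-1"),   # 1.1 still works against the old schema
    ("app-1.1", "schema-2"),
    ("app-2.0", "schema-2"),   # 2.0 ran an incompatible migration
}

def safe_to_rollback(app_version, schema_version):
    """Refuse a runtime rollback that would break against the live schema."""
    return (app_version, schema_version) in COMPATIBLE
```

Here rolling back from app-1.1 to app-1.0 while the database is on schema-2 would be refused, which is exactly the situation point 4 warns about.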

As with most things PaaS, the more you focus on building more modular applications, the more you can benefit from features like this. A micro-service architecture is much easier to reason about, test and iterate on. We fully recommend moving your architecture in this direction.

I would like to hear your feedback on this new feature of Stackato, but also the broader topic of working with data storage when upgrading and downgrading your application runtime. What are your best practices?

Follow @philwhln

Title image courtesy of JD Hancock on Flickr under Creative Commons License.

To learn about the new features of Stackato 3.4, click here or sign up for our upcoming webinar, What’s New in Stackato 3.4.



Happy SysAdmin Appreciation Day

24 Jul, 2014

It’s SysAdmin Appreciation Day on July 25th! To mark this wondrous day and spread the word, we thought we would head out onto the streets of Vancouver and find out from the general public what they think of SysAdmins.

Remember to thank your SysAdmin for all that they do for you! (We recommend cupcakes.)

With love from ActiveState on SysAdmin Appreciation Day



Other ActiveState Videos

Stackato: The Platform for the Agile Enterprise

Learn how Stackato helps IT and Dev work more efficiently and effectively together.







Get Out of the Weeds!

22 Jul, 2014

Years ago, a colleague of mine at Sun had an annoying habit of interrupting technical discussions at engineering meetings with the phrase “We're in the weeds!”

It was annoying because he would often disrupt fascinating and mind-expanding discussions on, say, the structure of TCP packets, or on message bus implementations. These were always interesting debates that we were reluctant to stop.

Years later I came to realize the wisdom of his visionary approach of focusing on the big picture and ensuring that we didn't get bogged down in implementation details, premature optimization, and other impediments to getting the product out the door. I soon found myself saying “Get out of the weeds” at engineering meetings, yet I was continually hampered by the overall complexity of software development. As much as I wanted to untangle myself from the weeds, I was unable to do so.

Fast-forward a couple of decades to the introduction of PaaS, which finally provides the capabilities needed to eliminate much of the complexity of software delivery, and truly Get out of the Weeds.

This blog collects a handful of situations I’ve encountered over the years where having access to a PaaS would have saved tremendous amounts of time, effort, complexity, cost, and overall stress for all involved.

Implement SSO across multiple applications

At one company, a coworker was tasked with implementing single sign-on (SSO) across a number of internal applications. He dove into this job with gusto: first researching the current state of SSO solutions available, building a handful of prototypes, and finally choosing Spring Security and a few other frameworks as the basis, then going to work. Three months and countless thousands of lines of extra code later, he finally shipped it. While it was mostly working, “mostly working” doesn’t cut it in the security world.

Compare this to the effort involved to enable SSO across apps in Stackato: click a checkbox. That takes all of 3 seconds instead of 3 months, which is roughly 2.6 million times faster! Then there’s the reduction in code complexity, test suites, and external application changes.

And, it actually works.


Onboarding New Engineers

Onboarding a new engineer can be a nightmare. I’ve witnessed, and worked at, companies where bringing an engineer up to speed can take weeks. Much of this time is spent setting up, providing access to, and understanding complex development and deployment environments consisting of multiple servers, services, languages, runtimes, frameworks, containers, and so on. Then there’s the complexity of the processes involved in requesting resources, running test suites, and interacting with other teams.

How many times have you seen an engineer go through the long and sometimes painful onboarding process, only to jump ship immediately after? That sure doesn’t come cheap.

Most of this agony, cost, and time can be eliminated by using a private PaaS like Stackato to set up developer environments. I’m a big fan of the “micro cloud,” a facility Stackato provides that allows the entire cloud application suite, including all services and dependencies, to be provisioned quickly on a single developer laptop. New developers can access a fully configured stack, on their own laptop, within hours or even minutes of first logging in. A developer can deploy, run, and test large, complex application stacks before even learning where the nearest break room is.

Service Provisioning

I hate to think of all the time I’ve wasted waiting for access to a database instance. At first I thought this was an anomaly of one team I was working on, but then I encountered the same lag time on the next team. And at the next company. More recently, talking to engineering teams at a wide range of large and small organizations, I’ve discovered it’s still all too common to wait days or more for a simple database instance to be provisioned.

This lag time has several negative effects. First, it disrupts the flow. When I request a service instance for a problem I’m working on, all my focus and creativity are directed at that problem. But after waiting days for the service to be provisioned, my focus is almost certainly elsewhere, and it takes effort to shift gears back into the old flow.

Another negative effect of this delay is that it often causes disharmony between the IT and developer teams. This disharmony is a common theme at devops conferences, and is extremely costly. This is where Stackato shines: it allows the IT team to deliver a reliable, scalable, self-service, on-demand developer platform with very little intervention required.

Now, instead of submitting a ticket and waiting, a developer needing a new database instance simply clicks a button, and is able to make use of the new service in minutes. This enables creativity, experimentation, and more efficient flow, while also greatly improving the harmony across the IT and developer teams.

Consistent Environment

Differences in configuration across developer machines can be incredibly costly. One time, inconsistency in a configuration file came close to sinking the company.

Without getting into too many details, I’ll just say that the massive amounts of data involved, the intermittency of the problem, and the complexity of the application resulted in the ship date slipping by three weeks, during which several stressed-out developers worked around the clock, poring over four-page printouts of crazy SQL queries, parsing hundreds of thousands of lines of logs (logs – don’t get me started!), following misdirection after misdirection, and trying in vain to reproduce the problem. That three-week slip almost cost us our biggest customer, and probably shaved five years off my lifespan.

In the end, it turned out the problem was caused by an inconsistency between the configuration files in QA and production: in QA, a service was referenced by IP address, while on the customer site it was referenced by hostname. DNS round-robining occasionally resulted in a stale database being referenced.
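That failure mode is easy to check for: a hostname backed by round-robin DNS has multiple A records, so successive lookups can hand different clients different backends. A quick Python probe (the function name is mine):

```python
import socket

def all_addresses(hostname):
    """Return every IPv4 address behind a hostname. With DNS round-robin,
    more than one address means successive connections can land on
    different backends -- the stale-database scenario described above."""
    _, _, addresses = socket.gethostbyname_ex(hostname)
    return addresses
```

Running this against the service hostname in each environment would have shown immediately that QA and production were not resolving to the same set of backends.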

I’ve seen countless other examples like this, with similar results. A backslash instead of a forward slash in a config file on one developer’s system takes a whole morning to fix. Or a trailing space in a property file in QA results in a two-day slip.

The point here is that if QA, dev, and production had used Stackato as their common basis, the configuration differences between environments would have been minimal, since almost all aspects of the configuration are shared, whether running on a developer laptop or on multiple racks spanning data centers.

Of course not all configuration is identical between various environments, but when using a PaaS as a foundation, most differences go away, minimizing the potential for issues caused by the differences.

Log Files – Don’t get me Started!

Too late, I already started. As mentioned, the above scenario resulted in countless hours poring over log files, most of them from Spring, where a single exception could easily generate over a hundred log lines. That didn’t include the up-front effort of coordinating these logs in the first place: fiddling with Tomcat’s notion of rotation, dealing with multiple agents capturing and forwarding log messages, ensuring the agents always stay alive, and making sure disks don’t fill up.

In short, dealing with logs is a pain in the butt. With client/server architectures, multiple log files, inconsistent log message formats, disparate servers, disk capacity constraints, log rotation, notification requirements, time-sync issues, correlation challenges, multiple user and system tasks, and multiple interacting apps and processes coming and going across servers, data centers, and continents, it’s a wonder we’re able to deal with logs at all.

But, much of this pain goes away when using Stackato to manage logs. Like the SSO example above, Stackato allows configuration of log aggregation with a single operation or command. All application logs, from multiple app instances running across multiple availability zones, can be easily captured and redirected to tools or apps that are highly capable of dealing with them, like Loggly or Splunk.
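Under the hood, aggregation of this kind is essentially a timestamp-ordered merge of per-instance log streams. The Python sketch below illustrates the idea with `heapq`; it is a toy model, not how Stackato or any real log shipper works, and the instance names are invented.

```python
import heapq

def merge_logs(*streams):
    """Merge per-instance log streams, each already ordered by timestamp,
    into a single time-ordered stream. Toy model of log aggregation."""
    # Each stream yields (timestamp, instance_id, message) tuples.
    return list(heapq.merge(*streams))

# Two hypothetical app instances logging concurrently.
web0 = [(1, "web.0", "started"), (5, "web.0", "GET /")]
web1 = [(2, "web.1", "started"), (3, "web.1", "GET /health")]
merged = merge_logs(web0, web1)  # interleaved by timestamp across instances
```

The hard parts glossed over here, clock skew between hosts, inconsistent formats, and streams that never end, are exactly the pain points listed above, which is why handing the problem to a platform or a dedicated tool pays off.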

Get out of the Weeds

I could go on with examples of situations where Stackato would have saved countless hours in scaling, zero-downtime upgrades, security, and general development activities.

The point is, if you find that you or your team is mired in implementation details, constantly optimizing and re-optimizing, and basically tangled up, I recommend you take Stackato for a spin. Chances are you’ll find that it will also allow you to Get Out of the Weeds.


Image courtesy of Michael Wilfall

Learn more about Stackato. Find out how our private PaaS has helped organizations reduce deployment time from weeks to minutes, and manage their cloud applications more efficiently. If you would like to get hands-on experience with Stackato, you can get Stackato by downloading the free micro cloud, requesting a customized demonstration or signing up for a POC.
