Platform-as-a-Service (PaaS) enables developers to write applications in any language, in a consistent environment, and deploy to production themselves. But what about IT? Does PaaS help them or is it just another system to manage? IT departments typically do not achieve the full potential of their […]
It’s often a goal of enterprise software vendors to simplify the lives of their customers. After all, complexity is the source of enormous pain and friction in most enterprise environments. This is a fine goal, but too often, vendors make assumptions that simplify their life and pass complex issues on to the customer. Rather than eliminating the complexity, they simply ignore it.
In the world of enterprise Platform as a Service (PaaS), how the platform handles your application’s database is a good illustration of this point. Most enterprise PaaS solutions conveniently leave the data tier out of scope, or simply provide a broker to create a database as part of an app deployment. Yet the database needs to be a first-class citizen throughout the lifecycle of the application, because that’s the reality for enterprise developers.
One consequence of not supporting the data tier natively is that it makes other features of your PaaS much simpler to implement (at the expense of developers, of course). Two good examples are “blue-green” app upgrades and fault zones.
The goal of “blue-green” app upgrades is to minimize downtime during app upgrades by effectively having the current version of your app and the new version of your app running simultaneously. Once you are ready to have users access the new version, you begin re-routing requests to the new version with the previous version remaining available but idle. You can swap back to the old version quickly if something goes wrong.
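The routing swap at the heart of blue-green can be sketched in a few lines. This is a minimal illustration with hypothetical in-process backends, not any particular platform’s implementation:

```python
# A minimal sketch of blue-green request routing, assuming a hypothetical
# router whose active pool can be flipped atomically.

class BlueGreenRouter:
    """Routes requests to the active version; the idle one stays warm."""

    def __init__(self, blue_backend, green_backend):
        self.backends = {"blue": blue_backend, "green": green_backend}
        self.active = "blue"  # the current production version

    def handle(self, request):
        # All traffic goes to the active color; the other stays deployed but idle.
        return self.backends[self.active](request)

    def cut_over(self):
        # Begin routing to the new version; the old one remains available.
        self.active = "green" if self.active == "blue" else "blue"

    def rollback(self):
        # Swapping back is the same flip, which is why it is fast --
        # unless the new version has already written incompatible data.
        self.cut_over()


router = BlueGreenRouter(lambda r: f"v1:{r}", lambda r: f"v2:{r}")
assert router.handle("req") == "v1:req"   # blue serves traffic
router.cut_over()
assert router.handle("req") == "v2:req"   # green now serves traffic
router.rollback()
assert router.handle("req") == "v1:req"   # quick swap back
```

Notice that nothing in this sketch touches a database, which is exactly the vendor’s temptation the next paragraph describes.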
The tricky part, of course, as any developer will tell you, is the data tier. How do you handle in-flight transactions? How do you synchronize the data and schema between the green and blue versions? Do the versions share a database, or do they each have their own copy? Are the versions data-compatible? What happens to the new transactions if you roll back? The answers to these questions influence the deployment strategy and need to be considered as part of the upgrade orchestration.
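One common way teams answer the shared-schema question is the “expand/contract” migration pattern: the new version makes only additive changes before cut-over, so the old version keeps working against the same database during the overlap window. A sketch, with illustrative DDL against a hypothetical `orders` table (the platform doesn’t dictate this pattern; it’s one answer among several):

```python
# Sketch of an expand/contract migration plan for a blue-green upgrade.
# The table and column names are hypothetical, for illustration only.

EXPAND = [
    # Phase 1 (before cut-over): additive changes only -- the old
    # (blue) version simply ignores the new column and keeps working.
    "ALTER TABLE orders ADD COLUMN shipped_at TIMESTAMP NULL",
]

CONTRACT = [
    # Phase 2 (after blue is retired): remove what only blue needed.
    # Running this too early would break rollback to the old version.
    "ALTER TABLE orders DROP COLUMN legacy_status",
]

def migration_plan(phase):
    """Return the DDL statements for the given phase of the upgrade."""
    return {"expand": EXPAND, "contract": CONTRACT}[phase]

assert "ADD COLUMN" in migration_plan("expand")[0]
```

The point is that ordering these phases around the traffic swap is part of upgrade orchestration, which is why a platform that ignores the database has only solved the easy half of the problem.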
On the other hand, if you just ignore the data, life is a lot easier as a vendor because all of this complexity is left for the developer to reconcile.
Redirecting to a maintenance page isn’t the hard part of app upgrades; handling the data is. The vast majority of Fortune 500 apps running on Apprenda (and most enterprise apps in general) have a database. This means that, as a practical matter, unless you handle the data tier, any update that touches the data will still involve downtime and/or complexity.
Fault zones are another popular feature significantly affected by the data tier, because of the latency they threaten to introduce. If a developer deploys multiple copies of an app across multiple data centers or fault zones for redundancy, she will pay a latency penalty if the database lives in only one location. Say the database is in data center A and she deploys an instance of her app components to data center B: each client request that gets routed to data center B forces the app to round-trip back to data center A in order to service the request. An enterprise PaaS that doesn’t consider databases can be blissfully ignorant of this issue because the data is the customer’s problem to reconcile.
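The size of that penalty is easy to estimate. A back-of-the-envelope sketch with illustrative numbers (assumed round-trip times, not measurements from any real deployment):

```python
# Illustrative estimate of the cross-data-center latency penalty when an
# app instance in data center B must query a database in data center A.
# All numbers below are assumptions for the sake of the arithmetic.

INTER_DC_RTT_MS = 40    # assumed A <-> B network round trip, in ms
LOCAL_DB_RTT_MS = 1     # assumed same-data-center database round trip
QUERIES_PER_REQUEST = 5  # assumed DB queries needed to serve one request

def db_latency_ms(remote):
    """Total DB latency per request, depending on where the database lives."""
    rtt = INTER_DC_RTT_MS if remote else LOCAL_DB_RTT_MS
    return QUERIES_PER_REQUEST * rtt

penalty = db_latency_ms(remote=True) - db_latency_ms(remote=False)
print(f"Added latency per request: {penalty} ms")
# prints "Added latency per request: 195 ms"
```

Even with modest assumptions, every request routed to the “wrong” data center pays the inter-DC round trip once per query, which is exactly the cost a data-tier-aware platform would have to plan around.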
If a private PaaS is truly going to support existing applications and real-world enterprise needs, then it needs to deal with and accept responsibility for application complexity as it exists in the enterprise. The data tier needs to be part of the native application model and not left as “an exercise for the reader.”
The post Why You Can’t Ignore the Data Tier in Private PaaS appeared first on Platform as a Service Magazine.
Symantec is splitting up to form two companies
“On Thursday afternoon, giant antivirus firm Symantec announced that it would split up into two separate, publicly traded companies: one focused on security and one focused on information management. Symantec is the company that produces the Norton antivirus security suite. This is the third giant technology company to announce a split into two separate companies in ten days…According to Symantec, its security business generated revenue of $4.2 billion in 2014. Its information management business generated revenue of $2.5 billion. The company said that its board of directors unanimously voted for the plan, although in after-hours trading, Symantec’s stock was down 2.3 percent.” Via Megan Geuss, ARS Technica
Detecting a Red Shift in Enterprise Software Development
“Red Shift is an astronomical phenomenon indicating a rapidly expanding universe. It also describes the current enterprise software shift from internal productivity enhancements to rapidly expanding revenue opportunities. A recent survey of 450 technology leaders indicates that only 46% of custom software is now focused on internal efficiency and effectiveness while 54% is aligned to enhance customer experience and deliver revenue through commercialized software. While this shift is significant, challenges face every organization trying to conquer this technology frontier…In the private PaaS landscape, 3 products have emerged with the vast majority of mindshare amongst enterprises – Apprenda with 27%..” Via PR Newswire
G.E. Opens Its Big Data Platform
“Thomas Edison would be proud. General Electric, the company he started, still knows how to make a buck off cutting-edge technology. In this case, the technology is in the so-called Internet of Things, in which sensors feed data to central repositories, which can analyze and manage enormous amounts of data from the machines. Initial uses include more efficient maintenance, remote monitoring, asset tracking and spotting new patterns of behavior that might be profitably exploited…” Via Quentin Hardy, NY Times
What Staples Can Teach The Cloud About Simplicity
“In 2003, the office supply chain store Staples launched the advertising campaign “that was easy.” The campaign featured a red “easy” button workers could press to instantly organize their offices—an icon that even today has a special place on millions of workers’ desks. Staples’ success acts as a good reminder for other industries, especially those in the cloud sector. Unfortunately, cloud providers have made their services unnecessarily challenging for both users and developers by imposing limitations on server setup, programming languages and operating systems. There’s no reason that cloud providers can’t also “think simple” when it comes to the tools used for developing their infrastructure…” Via Robert Jenkins, ReadWrite
Coca Cola and others innovate new data methods with Splunk
“The keynote at this week’s Splunk Inc. conference, and subsequent interviews with SiliconANGLE’s roving news desk theCUBE, highlighted Splunk’s ability to streamline business IT processes and achieve a new philosophy about how they handle data, agreed co-hosts Jeff Kelly and Jeff Frick. In the keynote, one of Splunk’s most prominent customers, Coca-Cola Co., showcased its use of Splunk for app development and the automation of data collection…” Via Rachel Schramm, SiliconANGLE
Baidu acquires Brazil’s largest group buying website
“Chinese search giant Baidu has acquired group buying firm Peixe Urbano for an undisclosed sum as part of a strategy to widen its footprint in Brazil. Peixe Urbano was launched in 2009 as a Brazilian equivalent of Groupon. It experienced rapid growth until a decline in the group buying model prompted a major retrenchment in 2012. Despite its recent woes, Peixe Urbano refocused its business model from offers that would only be available for a day to campaigns that last for longer period of time. Baidu hopes to leverage on the platform, which has 25 million users, to create partnerships with service providers that would further develop its search engine locally…” Via Angelica Mari, ZDNet
Verizon changes course, focuses on private cloud service
“One year ago, Verizon laid out ambitious plans to provide a service to compete with the likes of Amazon in the market for public cloud infrastructure. Verizon now has dramatically shifted its focus and delivered a product aimed at enterprises that want to build private clouds. The beta version of the Verizon Cloud was launched as a unified IaaS platform and object storage service. When it was released, it was praised for its potential innovation. But industry experts had advised customers to proceed with caution until the unified platform became fully available. Verizon now plans to coexist with the large-scale IaaS vendors and has made its integrated platform generally available…” Via Trevor Jones, TechTarget
HP US$6-$10 billion deal appears imminent
“A Hewlett-Packard acquisition in the range of US$6 billion to US$10 billion is “imminent,” according to a Re/code report based off a research note from Sanford Bernstein analyst Toni Sacconaghi. The Re/code report and analyst note followed HP CEO Meg Whitman’s meeting Wednesday US time with analysts in New York on the company’s planned split into two publicly traded companies, each with US$56 billion in sales. The companies would be named HP Inc., which would include printers and PCs, and Hewlett-Packard Enterprise, which would include enterprise systems, software and services. HP declined comment…” Via Steven Burke, CRN
SAP And Birst’s New Alliance Looks To Fend Off Salesforce And Tableau In Race For Analytics
“Marc Benioff and Salesforce.com are expected to talk a lot about analytics next week at the CRM leader’s annual Dreamforce conference. Just days before, rival SAP has looked to make a splash. SAP announced a partnership with late-stage startup Birst on Thursday that could help both with sales while continuing the alignment of cloud companies for a land grab in the business intelligence market. The partnership allows Birst customers to use the cloud version of SAP HANA database management within the product, while customers of SAP’s various offerings can layer Birst onto what they’re doing to better analyze their data…” Via Alex Konrad, Forbes
In The Cloud, Microsoft Looks Like A Winner Again
“Microsoft used to be evil. Then it was irrelevant. Now it looks like a winner. How did this happen?…They’re building it with open source. And yet. Over the past few years Microsoft has established itself as a real competitor again. Though much of Microsoft’s playbook remains the same, it has made key changes that make it a much more formidable competitor, while being a more likable one, too…That’s why Microsoft is no longer evil. Or irrelevant. It’s a competitor again.” Via Matt Asay, ReadWrite
Satya Nadella on Mobility: It’s Personal
“At Gartner Symposium, Drue Reeves and I had the opportunity to interview Microsoft CEO Satya Nadella. Here’s a brief clip from the closing. I’m summarizing, and Satya, passionate as he was throughout the conversation, lays out his vision about mobility that crosses the personal and professional: mobility of the individual and the app experiences. “Have my work and life wherever – that’s the true form of mobility…” Via Merv Adrian, Gartner
OpenStack is nowhere near a “solved problem”
“OpenStack is many things, but a “solved problem”? Not even close. Though early OpenStack visionary Josh McKenty cited the “boredom” of working on an established platform like OpenStack as his reason for leaving to join Cloud Foundry, the reality is that OpenStack remains very much a work in progress that needs strong leadership if it’s to live up to its potential…” Via Matt Asay, TechRepublic
Oracle Just Poached A Key Cloud Exec From Google
“Peter S. Magnusson, Google’s former engineering director who was in charge of its App Engine team, was recently hired by Oracle to lead its cloud-computing business, Re/code’s Arik Hesseldahl reported Thursday. Magnusson is best known for being one of the founding members of Google’s App Engine team, where he built the search engine’s cloud and web app hosting platform. Most recently, he made headlines for leaving Snapchat’s VP of Engineering position after just six months…” Via Eugene Kim, Business Insider
Google, Oracle Java API copyright battle lands at Supreme Court
“Google is asking the US Supreme Court to reverse an appeals court ruling that said Oracle’s Java APIs were protected by copyright. Google told the justices in a petition [PDF] this week that assigning copyright to the code—the Application Programming Interfaces that enable programs to talk to one another—sets a dangerous precedent… The legal fracas started when Google copied certain elements—names, declarations, and header lines—of the Java APIs in Android, and Oracle sued…” Via David Kravets, ARS Technica
Have yourselves the finest weekend possible. Laughter is the best drug out there: try to OD.
The post Feeling Bogged Down? Split Up! All The Kids Are Doing It. Apprenda Marketwatch appeared first on Platform as a Service Magazine.
Apprenda brings JBoss into the fold to boost its Java prowess
“Apprenda started out as a Microsoft .NET-centric Platform as a Service (PaaS) for businesses. Then it added Java support, which gave it another perk for enterprise shops — most of which run both Java and Windows applications. Now the company has upped the ante by adding support for the popular JBoss and Apache Tomcat web servers which means deeper support for more Java applications, said Sinclair Schuller, CEO of Troy, N.Y.-based Apprenda. “JBoss is used for some really mission-critical enterprise apps — internal and external-facing, so [this addition] means that we now offer more first-class support for more mission-critical apps than any other vendor,” Schuller claimed…” Via Barb Darrow, GigaOM
Apprenda Extends Its PaaS And Aims A Kick In The Direction Of Red Hat
“One of the better known private PaaS vendors is Apprenda. The company, long a pure-play Microsoft .NET platform, rolled out a Java PaaS a few years ago and seems to be seeing success within large enterprises. It seems to have a particular bent for large financial organizations (perhaps because of the fact that it is headquartered in NYC and hence more attuned to the financial world than the Silicon Valley one). Its high-profile customers include JPMorgan Chase, McKesson and AmerisourceBergen. Today it has rolled out the latest version of its product which includes expanded Java capabilities..It will be interesting to see the future of PaaS generally, and Apprenda specifically..” Via Ben Kepes, Forbes
Apprenda Takes a Leap Forward with JBoss Support and More in Milestone 5.5 Release
“Apprenda, the leader in enterprise Platform as a Service (PaaS), announced today that it has greatly expanded its Java capabilities with support for JBoss, Tomcat 7 and JMX in its new milestone 5.5 release. This substantial update positions Apprenda as the PaaS of choice for the largest businesses in the world. Enterprise developers want a single pane of glass for .NET and Java application servers to speed up their time to market for applications. More than eighty-five percent of enterprise applications will be developed in .NET and Java for the foreseeable future, and Apprenda has provided developers with deep .NET and Java support for years. And now, Apprenda is the only private PaaS with first-class enterprise support for JBoss and pure Tomcat Java servers…” Press Release
Digital Dark Matter: The Unseen Forces That Influence Innovation
“..Combined, dark matter and dark energy constitute about 95.1% of all the mass-energy of the known universe which means that we can only directly observe and account for a mere 4.9% of what’s out there. Despite all we think we know about the nature of the universe, the overwhelming majority of the cosmos lies outside our current powers of observation, yet it profoundly affects everything. So it goes with the forces that influence innovation..Humans have been dependent on technology of one kind or another for many thousands of years, and it’s probably inevitable that our dependency will not only continue, but dramatically increase..” Via Christian Cantrell, TechCrunch
How have enterprises dealt with shadow IT?
“..The Q2 Cloud Adoption and Risk Report from Skyhigh Networks looks at enterprises that attempted to address the problem of shadow IT, how they did it and how successful they were. The subjects were over 200 organizations, generally large ones in the Fortune 2000, across all major verticals – Education, Financial Services, Food & Beverage, Healthcare, High-Tech, Media, Oil & Gas, Manufacturing, Retail, and Utilities..” Via Larry Seltzer, ZDNet
Why The Digital Revolution Has Not Yet Fulfilled Its Promises
“IF THERE IS a technological revolution in progress, rich economies could be forgiven for wishing it would go away. Workers in America, Europe and Japan have been through a difficult few decades…Most rich economies have made a poor job of finding lucrative jobs for workers displaced by technology, and the resulting glut of cheap, underemployed labour has given firms little incentive to make productivity-boosting investments. Until governments solve that problem, the productivity effects of this technological revolution will remain disappointing. The impact on workers, by contrast, is already blindingly clear.” Via The Economist, Business Insider
Will cloud-enablement be the pot of gold for traditional ISVs?
“Cloud computing is exploding, and we have the analysts’ numbers to prove it. A recent Gigaom survey showed that most enterprises have 1 to 50 public cloud instances running on any given day, with 4 percent of the respondents reporting more than 1,000 cloud instances running on any given day..If the cloud is exploding, then what do the ISV do to take advantage of the changing market?..You need to figure out what category your business falls into before you toss money at this issue. Again, there are many different reasons why not every software system should exist in the cloud. However, most should..” Via David Linthicum, GigaOM Research
What you missed in Cloud: the biggest names in enterprise tech advance on Amazon’s turf
“The war over the public cloud moved up several notches in the past week after top enterprise vendors made a series of landmark advances that significantly raise the stakes for market leader Amazon.com Inc. as it works to capture a bigger slice of enterprise technology spending. Cisco Systems Inc. set the pace on Monday with a pledge to commit another billion dollars towards its plan of establishing a global cloud network.. The launch comes a few short weeks after fellow Cisco partner Google introduced a similar onramp program for developers meant to drive the adoption of its IaaS platform..” Via Maria Deutscher, SiliconANGLE
HP to split into two businesses — report
“Hewlett-Packard may be ready for a breakup. HP, the world’s second-largest PC vendor behind Lenovo, plans to separate its PC and printer businesses from its corporate hardware and services operations, The Wall Street Journal reported Sunday, citing “people familiar with the matter.” The company could announce the move as early as Monday, according to the Journal’s sources. HP declined to comment..” Via Carrie Mihalcik, CNET
Report: HP plans to split into two companies
“..If this sounds familiar, it’s because this is basically the plan that was proposed in 2011 when HP’s CEO was Léo Apotheker. HP intended to get rid of the “Personal Systems Group” (PSG), the division that makes PCs, and focus on the enterprise. Shareholders didn’t like the plan though. So after Apotheker was fired and the current CEO Meg Whitman took over, she decided to keep the PC division..After a few years, it looks like the old plan is mostly back, and the PC group will be spun off into a separate company and take the printer group with it. WSJ says Whitman will be the chairman of Consumer HP and CEO of Enterprise HP. The current lead independent director, Patricia Russo, will be chairman of Enterprise HP, and Dion Weisler will move from an executive in the PC/printer group to become the new company’s CEO.” Via Ron Amadeo, ARS Technica
Five questions for Meg Whitman about the potential HP split
“..Over the past decade, the company has spent billions on acquisitions and announced plans to lay off more than 120,000 employees. And yet, through all these machinations, growth has been elusive. With news of a split leaking out, we can expect the company to address it before its analyst day on Wednesday. So, we’ll wait to see if a press release or a conference call emerges on this first Monday of October. In the meantime, here are five questions we hope CEO Meg Whitman will answer when she explains the potential split..” Via Chris O’Brien, VentureBeat
HP’s big split: Five reasons it’s a good move
“Hewlett-Packard is reportedly breaking off its PC and printer businesses from its enterprise unit and the move — arguably overdue — is likely to benefit both independent companies in the future…In 2012, Whitman merged the PC and printer businesses. Given HP’s structure, breaking into two shouldn’t be that difficult. Overall, the split could turn out to be a good thing..” Via Larry Dignan, ZDNet
VMware hires another big name exec for NSX SDN
“VMware – which last month poached a top data centre executive from Cisco – has hired another well-known figure in the world of software-defined networking. Guido Appenzeller, co-founder and former CEO of SDN startup Big Switch Networks, joined VMware this week as chief technology strategy officer of its networking and security business unit. Appenzeller stepped down as Big Switch Networks CEO last November to become CTO, and left the company in March..” Via Kevin McLaughlin, CRN
Oracle dives into cloud (again), and this time it really means it
“At Oracle OpenWorld last week, Oracle CEO — er CTO and Executive Chairman Larry Ellison — once again vowed that his company will be king of cloud — in SaaS, Paas, IaaS — you name it. That’s a tall order but one coming from a company flush in resources which has shown itself willing to buy into new markets. And Ellison and co-CEOs Safra Catz and Mark Hurd trotted out news that Oracle now offers its flagship Database as a Service. Ditto Java as a Service. As for those resources, Oracle had about $51.6 billion in cash on its balance sheet for the most recent quarter. Over the last 12 months, it logged $35.8 billion in revenue and gross profit of $31.04 billion. See what I mean?..” Via Barb Darrow, GigaOM
It’s another beautiful day in the neighborhood. Friday’s Marketwatch
The post It’s All About Updates: Software, Company Structure & Otherwise. Apprenda Marketwatch appeared first on Platform as a Service Magazine.
Apprenda backs a private PaaS for companies running Java and Windows applications. It’s adding JBoss and Tomcat support for deeper integration with popular Java workloads.
Apprenda brings JBoss into the fold to boost its Java prowess originally published by Gigaom, © copyright 2014.
Oracle to release a PaaS
“Oracle plans to announce a new platform as a service (PaaS) that will allow customers to build Java applications in the cloud, according to a report in the New York Times. The move would be a big first one for the company’s new, yet very familiar, chief technology officer: Larry Ellison… Oracle wants to go on the offensive in the cloud. Announcing a new PaaS offering is one way to do that. The PaaS market is quickly becoming a crowded place. Many of the major cloud providers are focusing on this segment of the market…Pivotal, Apprenda and ActiveState are still battling it out in the private PaaS market. Now, Oracle wants to jump into this market too…” Via Brandon Butler, Network World
Oracle’s Management Shift Was Designed to Retain New CEOs
“…Sources familiar with the matter tell Re/code the two had recently been headhunted for jobs outside Oracle. Neither engaged in any significant talks, according to the sources, and it’s unclear which companies approached them. Hurd’s name was floated twice publicly, once as a potential CEO at Dell during its buyout campaign last year, and again as a possible successor to Steve Ballmer at Microsoft before Satya Nadella got the nod. There were approaches from other companies in addition to those two, the sources say… What’s less clear is exactly how long the dual-CEO structure can work…” Via Arik Hesseldahl, Re/Code
Two new CEOs don’t add up to meaningful change at Oracle
“Oracle has become far too predictable. The company that used to print money every quarter now routinely misses analyst expectations as it struggles to come to grips with the new realities of enterprise IT: open source and cloud…Are we in the midst of a changing of the enterprise guard?… Can Oracle and its peers make the change? The company is due to announce its own database service shortly, but why would enterprises turn to Oracle for a cloud-delivered database when the king of clouds, AWS, already offers this? The forecast seems cloudy with a chance of failure.” Via Matt Asay, TechRepublic
As Larry Ellison Steps Down At Oracle And SAP Buys Concur, Old Tech’s New Leaders Will Look To Spend Billions More
“As Alibaba announced its pricing of the largest IPO in American history, two old rivals traded major announcements that shook up enterprise software landscape. But where things get interesting is how Oracle and SAP’s ticker-tape news will shake up the cloud landscape for years to come… With new leadership at each company and some direct competition for deals, it’s a good time to be a SaaS company looking to sell. And for sometime-ally, sometime competition in the space, there’s little time to grab popcorn and watch…” Via Alex Konrad, Forbes
EMC Seeking Merger Partner?
“Storage specialist EMC is looking for another merger partner while exploring other options after negotiations with rival Hewlett-Packard on a blockbuster corporate marriage reportedly broke down in recent weeks. Unconfirmed reports surfaced in mid-September that EMC was considering selling off its stake in VMware. An unidentified source later told Reuters that the reports were inaccurate and that EMC planned to retain its roughly 80 percent stake in the virtualization specialist…As cloud storage increasingly moves toward commodity pricing, industry consolidation via mergers and acquisitions appears likely as rivals struggle to identify new revenue sources. EMC, based in Hopkinton, Massachusetts, also has reportedly held preliminary merger talks with Dell…” Via George Leopold, EnterpriseTech
EMC Mulls Merger With HP or Dell – Reports
“In a move that would profoundly change the data center and SDN landscape, EMC is considering merging with a rival amongst other options to navigate an uncertain future, recent reports suggest…The deal would be important to carriers, who are developing data centers to fuel cloud expansion. EMC is a leader in data center storage…Both HP and Dell are aggressively expanding their enterprise data center businesses into carrier SDN.” Via Mitch Wagner, Light Reading
Opinion: EMC-HP merger rumors a microcosm of an industry in crisis
“When Florian Leibert, co-founder of the hot infrastructure start-up Mesosphere, Inc. told theCUBE last week that consolidation is coming to the infrastructure market, he probably didn’t expect his predictions to come true quite so quickly…Traditional vendors have been caught flat-footed by infrastructure innovations that are set to sweep the industry. They didn’t expect competition from search engine and ecommerce companies, and they’re scrambling to respond. But it’s already too late. Those big companies have a lot of cash, and they can continue to grow through acquisition, but owning a lot of legacy software is a drag on innovation…A merger of EMC and HP doesn’t make sense. Two wrongs don’t make a right.” Via Paul Gillin, SiliconAngle
5 Early Cloud Adopters In Federal Government
“The federal government is known for its risk-averse nature and tight control of IT assets. But with increasing data needs and constrained budgets, agencies are turning to cloud computing as a solution. Policies such as Cloud First have moved agencies in the right direction. Cloud First, authored by former federal CIO Vivek Kundra, mandates that agencies consider cloud computing before other options for new IT projects…But despite the progress, security and privacy remain top concerns…Although broader implementation of the cloud has been generally slow, there are those that have paved the way for other agencies…” Via Elena Malykhina, InformationWeek
Healthcare and Financial Services Areas of Opportunity, Forbes Tells Conference
“Healthcare and financial services are huge areas of opportunity for the industry, delegates at the Ingram Micro ONE event in Las Vegas heard today. In his keynote address, Steve Forbes, CEO of Forbes Inc., told delegates that with the cloud, there is an opportunity to be found in both of these sectors… He said the financial sector is also a huge area of opportunity for the industry. Once growth begins in earnest in the financial sector, he predicted there will be a huge demand for financial services again…” Via Jessica Meek, Channelnomics
Innovation: Hedge First, Optimize Later
“There is a sequence to growth, an implicit order. Consecutive development activities are initiated and concluded by the forward and aft positions. We look out to discover opportunities and threats to our growth before we may streamline our actions to maximize their value to us. First, the unknown must be found and revealed. This is not only to minimize the risk of failure but to speculate where opportunities for growth may abound… As with innovation, we must first lose our way in the complexity of possibilities before we may find our one true path.” Via Jeff DeGraff, WIRED
5 things to prepare the CIO for disruption
“For years, IT organizations operated in a certain way. They provided a relatively standard service in a particular way. Of course, both of these evolved incrementally year over year. Over the past 5-10 years, that direction has changed pretty significantly. And it shows no sign of stopping anytime soon…So, how does the CIO respond to these changes in a timely and meaningful manner? Start at the top and work down. That means, start with a business-centric approach that takes the perspective of the true customer (the company’s customer) and work your way down…” Via Tim Crawford, GigaOM Research
Apcera Emerges, Gets Snapped Up By Ericsson
“Apcera, a company founded by former chief architect of Cloud Foundry, Derek Collison, came out of stealth mode on Monday to announce its first product, Continuum. And on the same day, a majority share of the company was bought up by Ericsson for an undisclosed amount…Ericsson, in announcing its majority stake, called it “the most significant investment to date in deploying the next generation of PaaS technology.” Apcera is a startup with a known quantity in Collison as its founder and CEO. Its first product, Continuum, is meant to add another step to the workflow that picks up where PaaS like Cloud Foundry, Apprenda, or OpenShift leaves off…” Via Charles Babcock, InformationWeek
Why Ericsson just became the biggest owner of cloud startup Apcera
“…The deal gives new cloud capabilities to Ericsson but lets Apcera remain a standalone company, according to a statement on the news. On the surface, the deal puts Ericsson in the same camp as other telecommunications companies that have shelled out to enhance their cloud offerings. CenturyLink, NTT, and Verizon, among others, have already made cloud moves. Now Ericsson has set itself up to gain cloud mindshare. But more deeply, the deal gives Ericsson distinct technology that’s more than just a platform as a service for building and deploying applications…” Via Jordan Novet, VentureBeat
Mirantis wants to part ways with Red Hat and it’s easy to see why
“In June 2013, Red Hat participated in a $10 million Series A funding round of Mirantis which was, at the time, a sort of super-integrator for assembling OpenStack clouds. Now Mirantis wants to extricate itself from that deal and is negotiating to make that happen, according to two sources close to the situation. This should not be hugely surprising given the bad blood that has flowed between the companies over the past year…As to whether Red Hat and Mirantis are in talks to undo the investment, a Red Hat spokesman said the company does not comment on rumors and speculation…” Via Barb Darrow, GigaOM
Red Hat CEO announces a shift from client-server to cloud computing
“Red Hat is in the midst of changing its image from a top Linux company to the future king of cloud computing…In case you haven’t gotten the point yet, Whitehurst states, “We want to be the undisputed leader in enterprise cloud.” In Red Hat’s future, Linux will be the means to a cloud, not an end unto itself…Linux leaders see a future where IT is based on Linux and the open source cloud. And if Whitehurst has his way, it will be a Red Hat-dominated future.” Via Steven J. Vaughan-Nichols, ZDNet
Let’s make a day of it all! Friday’s Marketwatch
The post PaaS: Some Build It, Some Buy It – Apprenda Marketwatch appeared first on Platform as a Service Magazine.
Last week we announced an important new partnership with Cloudbees to bring a native enterprise Jenkins service to Pivotal CF (PCF). This provided a first glimpse at the power of Pivotal Network, a service catalog for PCF. Pivotal Network allows any private PCF installation to import additional scalable platform services from Pivotal and a growing ecosystem of partners. PCF then manages the health and scalability of these services natively.
As Cloudbees joined the extended Pivotal ecosystem, they also exited their hosted RUN@ PaaS business, which they had previously offered on AWS. In a thoughtful exit interview, the talented Steve Harris compared their four-year journey into the PaaS market to a high-risk polar expedition. As a fellow adventurer with Harris in the platform market since 2010, I would sum up my own experience and many of the conclusions of his blog with this chart:
The public platform space has been a brutal market for startups. The $212M Salesforce buyout of Heroku in late 2010 was a false positive, and it confused VCs. The investments became irrationally exuberant and the results were predictably bad. Public PaaS startups were too early, under-resourced, and focused on the wrong segment to win big in this new market. VC-funded plays such as Dotcloud, Appfog, Cloudbees, and even the mobile specialists Parse and Stackmob have all since ‘pivoted’ or been modestly absorbed into larger organizations.
Deep-pocketed moves by the major cloud providers, such as Google’s recent offer of $100k in free credits for startups, made it even harder for independent cloud platforms to compete or raise additional funding (all while having to pay the IaaS providers).
Finally, as Harris observed, Cloud Foundry executed well on a broad ecosystem play, creating the “Cloud Foundry effect”:
“There is a Cloud Foundry effect: Cloud Foundry has executed well on an innovate-leverage-commoditize (ILC) strategy using open source and ecosystem as the key weapons in that approach.”
Pivotal’s popular shared Cloud Foundry service for developers, PWS, is priced at only $2.70/month for a small hosted application. Stuck between the rapid growth of AWS and Google and the emerging enterprise standard of Cloud Foundry, there just wasn’t margin left for the hosted alternative platforms to live on.
In contrast to the startup failures in the public PaaS market, the revenue in the private platform market is significant and growing exponentially. While many enterprises are leveraging public cloud, many of the “crown jewels” applications still reside in private data centers, along with the majority of IT budgets. This means that despite public cloud adoption, architectural standards often start in the private cloud landscape and extend out.
Executing in this segment generates large purchase orders but requires shipping installable enterprise software, multi-year R&D, and a large enterprise sales force (the last two points being beyond the scope of most VC-funded business plans). The deals are large because they tap into the enormous yearly spend on enterprise middleware currently dominated by innovation laggards like Oracle’s SOA Suite. These legacy products were designed for infrequently updated applications built for vertical scaling, two assumptions that are now out of date. As Harris’s blog observed:
“Pivotal Cloud Foundry has produced partnerships with the largest, established players in enterprise middleware and apps. In turn, that middleware marketplace ($20B) is prime hunting ground for PaaS, and Cloud Foundry has served up fresh hope to IT people searching desperately for a private cloud strategy”.
While public-only PaaS players have continued to exit the market, private software-based PCF has racked up an impressive parade of enterprise customers migrating from Oracle middleware to a next-generation private cloud architecture. In June, more than a thousand enterprise users gathered for the Cloud Foundry Summit, speaking to the remarkable benefits of building an application-centric private cloud.
Early pioneers like Corelogic’s Richard Leurig are betting their company’s future on a next generation platform architecture and breaking down years of siloed applications on heterogeneous infrastructures. The market is waking up from years of dealing with the complexity of legacy products and demanding a cloud API integrated solution. They are also transforming their development processes to be more agile, anchored on continuous integration with systems like Jenkins. This is resulting in earmarked budgets for large private PaaS build outs.
As enterprises adopt PCF, it is changing their whole concept of private cloud. Line of business developers gain immediate self service access to rapidly test and deliver applications and APIs. Monsanto observed an immediate 50% reduction in LOB development time and costs with PCF and plans to standardize the company on it in 2015. For infrastructure teams PCF’s minimal IaaS requirements and deep platform provisioning automation provides immediate private cloud features with just a few clicks and a thirty minute install time.
Customer demand to extend the platform’s services catalog with Jenkins, Cassandra, Elasticsearch, and more is overwhelming. Even enterprise stalwarts such as SAP and SAS have announced that customers are pushing them to offer their software on Cloud Foundry-based private clouds. Almost every day brings an inquiry from an ISV whose customers are migrating to the platform and who wants to be part of the PCF service catalog.
Heading into 2015, we are hiring hundreds of additional developers, operations, marketing and sales professionals, and ramping every part of the team to take on our forty billion dollar enterprise opportunity.
The public-only PaaS meme centered on hosting, and missed the true revolution in software architecture happening in enterprises. This allowed Cloud Foundry to germinate in R&D for several years and surprise existing vendors, who prepared for the change merely by offering their existing legacy products on demand. Oracle kept its decade-old middleware architecture focused on heavyweight J2EE, but enterprise applications are moving from the heavyweight client-server legacy model to a mobile-centric paradigm dominated by REST APIs and microservices.
The production-ready PCF platform has emerged as a way for enterprises to deliver on an ambitious private cloud initiative, ready for mobile APIs and microservices-based applications. It delivers highly scalable, highly available services, fully automated on any cloud, with an easy-to-use developer API perfect for agile teams.
Using PCF is believing, and after one recent successful pilot, a bemused Fortune 50 CIO asked me, “IBM has been promising me this level of simplicity and automation for years, why are you able to pull it off while they failed?” I answered, “Only by bringing the discipline of modern cloud architecture to enterprise applications were the efficiency gains you witnessed possible. No one has ever brought a cloud native platform to the enterprise datacenter—until now.”
The next few years of this expedition are going to be very interesting!
At the recent Gartner Catalyst conference, the OpenShift team was invited to join a panel focused on Platform as a Service frameworks. It was a great opportunity to review market trends, get in touch with the audience and openly discuss the differences between our respective PaaS offerings in terms of capabilities, philosophical approach and architecture.
Gartner analyst Eric Knipp also presented on “Building a Software Factory with a Private PaaS” where he explored how enterprise customers can “use cloud computing to improve developer agility while retaining control of the platform.” During this session, Eric discussed how to drive developer productivity with “open PaaS frameworks that standardize, automate and simplify the software development life cycle.” He then reviewed the two dominant open source PaaS platforms, OpenShift and Cloud Foundry. Today, I wanted to take the opportunity to go deeper on a few of the topics discussed, both during the conference and since.
One of the key items that Eric discussed was how PaaS frameworks should integrate with the infrastructure provisioning layer and cloud management platform. The OpenShift platform can be deployed on bare metal servers, an existing virtualization platform (e.g., vSphere, Hyper-V, RHEV/KVM), a private IaaS cloud (e.g., OpenStack), or a public cloud (e.g., Amazon, Google). A specific question that was raised related to the recovery of OpenShift Node instances and how that impacts application availability. In OpenShift a “Node” is a virtual or physical machine that acts as a container host and runs end user applications in Linux containers or “Gears”. A typical deployment will have many Nodes, controlled by an OpenShift Broker tier that handles orchestration and scheduling.
There are some important architectural differences to understand here. First, OpenShift Nodes can run on either virtual machines or physical machines, whereas many competing PaaS solutions only run in VMs. In addition, while some PaaS solutions only support stateless applications and don’t provide a persistent filesystem option for writing to local storage, OpenShift supports stateless applications as well as stateful application processes that can write to a persistent filesystem mapped to each Gear. OpenShift also is unique in supporting a full Java EE service with JBoss that also supports stateful clustering with session replication. So there are a number of inherent differences in managing Node hosts, due to the extra functionality OpenShift provides.
While some PaaS frameworks like Cloud Foundry provide a separate management tool for the underlying VM infrastructure, limited to specific virtualization platforms, OpenShift is more flexible. We integrate with the provisioning and configuration management tools customers already have in place, both to deploy OpenShift on their choice of bare metal, virtual, private or public cloud platforms and to handle ongoing management of the OpenShift Node and Broker infrastructure. These include popular tools like Puppet, Ansible, and Chef, as well as other monitoring and management tools deployed in a customer’s infrastructure. As a result, we have been expanding our investment in general-purpose automation scripts and integration with leading management tools, sharing best practices from our OpenShift Online operations team and integrating contributions from customers and community members like Cisco. We have also integrated with OpenStack Heat for deploying OpenShift on OpenStack, and with the Red Hat CloudForms cloud management platform for deploying and managing the OpenShift PaaS platform in a hybrid cloud environment.
For application availability, OpenShift also provides built-in functionality such as regions and zones, which allow you to group Nodes into different availability zones and enforce anti-affinity across zones for multi-container deployments. We also provide a Node watchman feature to monitor and restart failed application processes. This demo shows watchman in action for both a single instance and clustered JBoss application scenario.
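The anti-affinity idea is straightforward to sketch. The following is a simplified, hypothetical illustration of zone-aware placement, not OpenShift's actual scheduler: given Nodes labeled with zones, place each new container on a Node in the zone that currently hosts the fewest instances of that application.

```python
from collections import Counter

def place_gear(nodes, placements, app):
    """Pick a node for a new gear of `app`, spreading across zones.

    nodes:      dict of node_name -> zone label
    placements: list of (app, node_name) pairs for gears already running
    Returns the chosen node name.
    """
    # Count how many gears of this app each zone already hosts.
    zone_load = Counter(nodes[n] for a, n in placements if a == app)
    # Prefer the zone with the fewest gears of this app (anti-affinity),
    # breaking ties by zone and node name for determinism.
    def zone_key(node):
        return (zone_load[nodes[node]], nodes[node], node)
    return min(nodes, key=zone_key)

nodes = {"node-a": "zone-1", "node-b": "zone-1", "node-c": "zone-2"}
placements = [("shop", "node-a")]
print(place_gear(nodes, placements, "shop"))  # → node-c (zone-2 has no "shop" gears)
```

A zone-level failure then takes out at most a fraction of the application's containers, which is the availability property the feature exists to provide.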
We also recently announced our plan for continuing to improve our orchestration and scheduling capabilities in OpenShift v3, working with both the Docker and Google Kubernetes communities.
Some additional questions were raised about the way OpenShift handles container isolation, without providing the full context of how our platform works and the key benefits it provides. As we’ve discussed in a prior post, OpenShift leverages Linux Containers, or what we refer to as “Gears”, to deploy and run applications in a secure multi-tenant platform.
Once users deploy their application, they have the flexibility to work from our Web console, CLI or IDE interfaces or to SSH directly into their Gears and execute commands for additional debugging. Features like remote debugging via SSH or direct debugging from Eclipse are what drive the developer experience that OpenShift users value. However, a competitor raised the fact that you can run commands like netstat from within an OpenShift Gear container and questioned us on security. While no system is completely immune from the threat of malicious hackers, the ability to execute these commands inside a Gear does not mean it is actually insecure, as was later noted. This is due to the multi-layer container security model OpenShift implements.
OpenShift container security and isolation is handled by a combination of kernel namespaces and SELinux. The value of SELinux as an additional security layer in containerized environments is something we’ve been discussing for years, and it’s what has enabled us to manage our public OpenShift Online environment and the nearly 2 million applications deployed on it since launch. The value of a layered security approach for containerization, leveraging solutions like SELinux or AppArmor, was recently highlighted by Docker in a blog on container security, in response to a published exploit. Red Hat’s own Dan Walsh, who works closely with us on OpenShift, contributed Docker’s SELinux enablement, sits on the Docker governance board, and recently discussed container security for opensource.com. Although we don’t view OpenShift’s ssh capabilities as an inherent security risk, we do plan to limit certain commands like netstat within an ssh session to remove any user concerns.
A final point of discussion was around application scaling. A competing PaaS vendor was discussing how quickly they could manually scale up large numbers of application instances on their PaaS. This was being compared against manual Gear scale-up times for our OpenShift Online service. It wasn’t clear, however, whether the tests were running on equivalent infrastructure or whether the scenarios actually represented real-world use cases, as you might expect from a true performance benchmark. In the case of OpenShift Online, the test was running a JVM-based application in resource-constrained Small Gears, using our OpenShift Online free service tier, which is not what we’d recommend for running Java applications at scale.
In our experience, autoscaling, rather than manual scaling, is a key PaaS capability that customers we speak to are more interested in. This demo shows OpenShift’s autoscaling capabilities in action.
In OpenShift, when autoscaling is enabled, an increasing application load will trigger the automatic scale-up of additional application container instances, a capability most other PaaS providers lack. OpenShift will also automatically scale instances down when load decreases and has built-in flap protection to avoid scaling up or down prematurely in response to transient load spikes. OpenShift can also idle inactive applications, to free up compute resources on the Nodes and drive greater density by optionally overcommitting Node resources. In the end it comes down to what features are most important to customers, for their particular application scenarios.
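The flap-protection idea can be illustrated with a toy autoscaler. This is a hypothetical sketch of the general technique, not OpenShift's actual algorithm: scale only after load stays beyond a threshold for several consecutive readings, so a transient spike does not trigger a scale-up.

```python
class FlapGuardedScaler:
    """Toy autoscaler: scale up/down only after `patience` consecutive
    readings beyond a threshold, to ignore transient load spikes."""

    def __init__(self, instances=1, high=0.8, low=0.2, patience=3):
        self.instances = instances
        self.high, self.low = high, low    # per-instance load thresholds
        self.patience = patience           # consecutive readings required
        self._above = self._below = 0      # streak counters

    def observe(self, load_per_instance):
        if load_per_instance > self.high:
            self._above += 1
            self._below = 0
        elif load_per_instance < self.low:
            self._below += 1
            self._above = 0
        else:
            self._above = self._below = 0  # back in band: reset streaks
        if self._above >= self.patience:
            self.instances += 1
            self._above = 0
        elif self._below >= self.patience and self.instances > 1:
            self.instances -= 1
            self._below = 0
        return self.instances

scaler = FlapGuardedScaler()
# A one-off spike does not scale up...
for load in [0.9, 0.3, 0.9, 0.3]:
    scaler.observe(load)
print(scaler.instances)  # still 1
# ...but sustained load does.
for load in [0.9, 0.9, 0.9]:
    scaler.observe(load)
print(scaler.instances)  # now 2
```

The same streak counters work in both directions, which is why the scaler neither thrashes upward on a spike nor idles instances away during a brief lull.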
When it comes to choosing a PaaS provider, customers have many choices, including both Public and Private PaaS solutions. This increasingly competitive market landscape drives innovation forward and is good for customers. We are happy that so many customers have chosen OpenShift, and proud of the recognition OpenShift has received in the market and how we have fared in independent head-to-head comparisons. We will continue to drive new innovations and advance OpenShift forward based on feedback from our customers, partners, community members and users.
Today the OpenShift development team announced a new public Origin repo containing initial commits for our third generation OpenShift platform. This integrates work we’ve been doing over the past year plus in OpenShift Origin and related projects like Docker, Kubernetes, Geard and Project Atomic – all of which will become integral components of the new OpenShift. This Origin community effort will drive the next major releases of OpenShift Online and OpenShift Enterprise 3.
Earlier this spring, we looked under the hood with OpenShift to explain the components of our current generation PaaS platform. Long time Shifters will also fondly remember our first generation platform from the initial OpenShift.com launch over three years ago, in May 2011.
Since then we’ve added a ton of new features, seen our users deploy some amazing applications, launched commercially supported versions for both Private PaaS with OpenShift Enterprise and Public PaaS with OpenShift Online, won multiple awards and announced a number of great partners and customer wins.
Now it’s time to look ahead to the next major evolution of OpenShift and our new platform stack. In this blog I will explain our plan for OpenShift v3 and how all the pieces will come together. Future posts will look deeper into specific components of the platform.
We believe a modern application platform will have Linux containers at its core. That’s why Linux containers, or “Gears”, have always been a core component of OpenShift. Leveraging technologies like kernel namespaces, cGroups and SELinux, we’ve delivered a highly scalable, secure, containerized application platform to our users. This enabled super fast application deployments for OpenShift developers, made the platform more efficient to both run and manage for our OpenShift administrators and supported the nearly 2 million applications deployed on OpenShift since the initial launch.
Over the past year Red Hat has been working with the Docker community to evolve our existing containers technology and drive a new standard for containerization through the libcontainer project. Libcontainer provides a standard API for defining a container, including working with namespaces, cGroups, network interfaces and other container functions. Leveraging our deep kernel expertise, Red Hat is contributing significantly to the development of Docker libcontainer and driving key features like SELinux integration to enhance container security. This work led to the announcement of Docker support in RHEL 7 and the launch of Project Atomic to develop a new container-optimized Linux host. This new container architecture is at the core of OpenShift v3.
The OpenShift platform is built on a foundation of Red Hat Enterprise Linux. Our current platform leverages key capabilities in RHEL 6 to provide many of the features our users enjoy. As we move to OpenShift v3, we will be taking advantage of new capabilities introduced in RHEL 7 and being developed in Project Atomic. RHEL 7 was officially released in June and brings Docker container support and additional enhancements, on an updated Linux kernel, to help us improve both OpenShift functionality and performance.
Project Atomic was launched to develop new Linux host capabilities, optimized for running containerized application environments. Project Atomic also enables a new atomic update model for managing host instances. Many of the next generation containers features mentioned here are being developed in Project Atomic on this new atomic host model. This will enable a new RHEL Atomic product distribution that maintains compatibility with RHEL 7. Customers will have new flexibility to run RHEL Atomic or full RHEL 7 to host OpenShift environments.
While Docker libcontainer will provide lightweight application isolation through OpenShift Gears, what developers care most about is what they can run inside those containers to deploy their applications. That’s why the true power of Docker is an application-centric packaging model and the flexibility of an image-based deployment method, combined with the large and rapidly growing selection of images available in the Docker Hub registry. This gives developers the broadest selection of components to create their applications and also enables portability across bare metal systems, virtual machines and private and public clouds.
The OpenShift v3 Cartridge format will adopt the Docker packaging model and enable users to leverage any application component packaged as a Docker image. This will enable developers to tap into the Docker Hub community to both access and share container images to use in OpenShift. Customers will also be able to leverage Red Hat certified container images from both Red Hat and our ISV partners. Our recently launched OpenShift Marketplace will expand to include solutions from both SaaS partners and certified ISVs.
Our xPaaS services for OpenShift, launched over the past year, will drive expanded capabilities in OpenShift v3 from our JBoss middleware portfolio. This will include new and enhanced services for messaging, integration, rules management, BPM, expanded mobile capabilities and more. OpenShift users looking for a runtime for their desired language or framework, a SQL or NoSQL database, message broker, cache, mobile push server, log management, monitoring tools, or other components, will have an unmatched array of choices for creating their next great application!
An application in OpenShift typically spans multiple Gears. Something needs to orchestrate those application container endpoints, whether it’s connecting Node.js in one Gear to Postgresql in another Gear, scaling up a cluster of JBoss EAP servers, or adding other components to build out your application stack. Those containers also need to be deployed on selected container hosts, or “Nodes” in OpenShift parlance, based on information gathered from each Node. Container orchestration and scheduling/placement is largely managed by the OpenShift Broker.
In OpenShift v3, we will be integrating Kubernetes in the OpenShift Broker to drive container orchestration. Google launched the Kubernetes project to address the orchestration and management of containerized application deployments, across a large cluster of container hosts, leveraging the experience gained from running their own containerized data centers at very large scales. Kubernetes also enables a pluggable scheduler component. Red Hat is leveraging Kubernetes and work initiated in the Geard project to bring orchestration and scheduling capabilities to OpenShift v3 and better manage large scale environments.
Ultimately the success of OpenShift is driven not just by the platform but also by the experience we are able to deliver to our end users. Whether you are using OpenShift Online or OpenShift Enterprise, building web or mobile applications, developing in Java, PHP, Python, Node.js or any of our other supported language runtimes, leveraging technology from Red Hat or our partners – our goal is to deliver a best of breed user experience for OpenShift developers. This commitment to a seamless developer experience will carry forward in OpenShift v3. Docker integration will also allow for a better local development experience as developers can use Docker images on their own machine for local development and then push those same images to OpenShift.
Improving the experience for OpenShift administrators is also an important objective. Whether the service provider is Red Hat with OpenShift Online or whether it’s our on-premises customers delivering PaaS services with OpenShift Enterprise, enabling administrators and operations teams to deploy and manage the OpenShift platform effectively is critical. OpenShift v3 will bring new capabilities for provisioning, patching and managing application containers, routing and networking enhancements, and provisioning and managing the OpenShift platform itself. Meanwhile, administrators will continue to have the choice to deploy OpenShift on their choice of infrastructure, whether it’s physical, virtual, private or public cloud infrastructure.
As much as we’ve done over the past year to lay the groundwork for OpenShift v3, in many ways we are just getting started. Like all Red Hat products, our work starts upstream in the open source community where innovation thrives. While the new OpenShift platform takes shape in the Origin community, which is the upstream for our Online and Enterprise solutions, we don’t believe that innovation is limited to a single open source community or foundation.
The new OpenShift platform is the product of the many different communities Red Hat actively participates in. This includes community projects like Fedora, CentOS, Docker, Project Atomic, Kubernetes, OpenStack, multiple JBoss projects and more. In many cases, like Docker, Red Hat is not only a participant but also a leading contributor, helping to drive the direction of the project, as others have noted. This work gives us a deep understanding and appreciation of the technology and benefits both the communities and our OpenShift users as we move forward.
Future blog posts will dive deeper into each component of the new OpenShift. A Beta program will launch later this fall creating additional ways for you to participate. We’ll also be discussing plans for migrating existing OpenShift Enterprise and OpenShift Online customers to OpenShift v3 once it’s released. In the meantime, join us in the Origin community as we build the next great application platform together.
Years ago a colleague of mine at Sun had an annoying habit of interrupting technical discussions at engineering meetings with the phrase “We're in the weeds!”
It was annoying because he would often disrupt fascinating and mind-expanding discussions on, say, the structure of TCP packets, or on message bus implementations. These were always interesting debates that we were reluctant to stop.
Years later I came to realize the wisdom of his visionary approach of focusing on the big picture and ensuring that we didn't get bogged down in implementation details, premature optimization, and other impediments to getting the product out the door. I soon found myself saying “Get out of the weeds” at engineering meetings, yet I was continuously hampered by the overall complexity of software development. As much as I wanted to untangle myself from the weeds, I was unable to do so.
Fast-forward a couple of decades to the introduction of PaaS, which finally provides the capabilities needed to eliminate much of the complexity of software delivery, and truly Get out of the Weeds.
This blog collects a handful of situations I’ve encountered over the years where having access to a PaaS would have saved tremendous amounts of time, effort, complexity, cost, and overall stress for all involved.
Implement SSO across multiple applications
At one company, a coworker was tasked with implementing Single Sign-on (SSO) across a number of internal applications. He dove into this job with gusto: researching the current state of available SSO solutions, building a handful of prototypes, finally choosing Spring Security and a few other frameworks as the basis, and going to work. Three months and countless thousands of lines of extra code later, he finally shipped it. While it was mostly working, “mostly working” doesn’t cut it in the security world.
Compare this to the effort involved to enable SSO across apps in Stackato: click a checkbox. This takes all of 3 seconds instead of 3 months, which is…15 million percent faster! Then there’s the reduction in code complexity, test suites, and external application changes.
And, it actually works.
Onboarding new engineers
Onboarding a new engineer can be a nightmare. I’ve witnessed, and worked at, companies where bringing an engineer up to speed can take weeks. Much of this time is spent setting up, providing access to, and understanding complex development and deployment environments consisting of multiple servers, services, languages, runtimes, frameworks, containers, etc. Then there’s the complexity of the processes involved in requesting resources, running test suites, interacting with other teams, and so on.
How many times have you seen an engineer go through the long and sometimes painful onboarding process, only to jump ship immediately after? That sure doesn’t come cheap.
Most of this agony, cost, and time can be eliminated by using a Private PaaS like Stackato to set up the developer environments. I’m a big fan of the “micro cloud,” a facility Stackato provides allowing the entire cloud application suite, including all services and dependencies, to be provisioned on a single developer laptop quickly. New developers can access a fully-configured stack, on their own laptop, in hours, or even minutes after first logging in. A developer would be capable of deploying, running and testing large complex application stacks before they even learn where the nearest break room is.
Waiting for database access
I hate to think of all the time I’ve wasted waiting for access to a database instance. At first I thought this was an anomaly of one team I was working on, but then I encountered the same lag time on the next team. And at the next company. More recently, talking to engineering teams at a wide range of large and small organizations, I’ve discovered it’s still all too common to have to wait days or more for a simple db instance to be provisioned.
This lag time has several negative effects. First, it disrupts the flow. When I request a service instance for a problem I’m working on, all my focus and creativity are directed at that problem. But after having to wait days for the service to be provisioned, my focus is almost certainly elsewhere, and it takes effort to shift gears back into the old flow.
Another negative effect of this delay is that it often causes disharmony between the IT and developer teams. This disharmony is a common theme at devops conferences, and is extremely costly. This is where Stackato shines: it allows the IT team to deliver a reliable, scalable, self-service, on-demand developer platform with very little intervention required.
Now, instead of submitting a ticket and waiting, a developer needing a new database instance simply clicks a button, and is able to make use of the new service in minutes. This enables creativity, experimentation, and more efficient flow, while also greatly improving the harmony across the IT and developer teams.
Configuration inconsistencies
Differences in configuration across developer machines can be incredibly costly. One time, an inconsistency in a configuration file came close to sinking the company.
Without getting into too many details, I’ll just say that the massive amounts of data involved, the intermittency of the problem, and the complexity of the application resulted in the ship date slipping by three weeks, during which several stressed-out developers worked around the clock, poring over four-page printouts of crazy SQL queries, parsing hundreds of thousands of lines of logs (logs – don’t get me started!), following misdirection after misdirection, and trying in vain to reproduce the problem. That three-week slip almost cost us our biggest customer, and probably shaved five years off my lifespan.
In the end, the problem turned out to be an inconsistency between configuration files in QA and in production. In QA, a service was referenced by IP address; at the customer site, by hostname. Round-robin DNS occasionally resolved that hostname to a stale database.
I’ve seen countless other examples like this, with similar results. A backslash instead of a forward slash in a config file on one developer’s system costs a whole morning. A trailing space in a property file in QA results in a two-day slip.
The point here is that if dev, QA, and production had all been using Stackato as their foundation, the configuration differences between environments would have been minimized, since almost all aspects of the configuration are shared, whether the platform is running on a developer laptop or on multiple racks spanning data centers.
Of course, not all configuration is identical across environments, but when using a PaaS as a foundation, most differences go away, minimizing the potential for issues they can cause.
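The war story above comes down to per-environment values living in hand-edited files. A common way a PaaS shrinks that surface is to push the remaining differences into environment variables the platform injects, so every environment runs the same code path. A minimal sketch, assuming a hypothetical `DATABASE_URL` variable (the variable name and URLs here are illustrative, not something the source prescribes):

```python
import os

def database_url(environ=os.environ):
    """One code path for dev, QA, and production.

    The platform supplies DATABASE_URL; a local default covers a
    developer laptop where nothing is set. No file contains an IP
    address in one environment and a hostname in another.
    """
    return environ.get("DATABASE_URL", "mysql://localhost:3306/devdb")

# In QA or production, platform-injected environment takes over:
print(database_url({"DATABASE_URL": "mysql://db.internal:3306/qa"}))
# On a developer laptop with nothing set, the local default applies:
print(database_url({}))
```

With this shape, the IP-vs-hostname divergence that caused the three-week slip has nowhere to hide: there is exactly one knob, and the platform owns it.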
Log Files – Don’t Get Me Started!
Too late, I already started. As mentioned, the scenario above meant countless hours poring over log files, most of them from Spring, where a single exception can easily generate over a hundred log lines. That doesn’t count the up-front effort of coordinating those logs in the first place: fiddling with Tomcat’s notion of rotation, dealing with multiple agents capturing and forwarding log messages, ensuring the agents stay alive, and making sure disks don’t fill up.
In short, dealing with logs is a pain. Between client/server architectures, multiple log files, inconsistent log message formats, disparate servers, disk capacity constraints, log rotation, notification requirements, time-sync issues, correlation challenges, multiple user and system tasks, and interacting apps and processes coming and going across servers, data centers, and continents, it’s a wonder we can deal with logs at all.
But much of this pain goes away when Stackato manages the logs. Like the SSO example above, Stackato lets you configure log aggregation with a single operation or command. All application logs, from multiple app instances running across multiple availability zones, can be captured and redirected to tools built to handle them, such as Loggly or Splunk.
I could go on with examples where Stackato would have saved countless hours: scaling, zero-downtime upgrades, security, and general day-to-day development.
The point is, if you find that you or your team is mired in implementation details, constantly optimizing and re-optimizing, and basically tangled up, I recommend you take Stackato for a spin. Chances are you’ll find that it will also allow you to Get Out of the Weeds.
Image courtesy of Michael Wilfall
Learn more about Stackato. Find out how our private PaaS has helped organizations reduce deployment time from weeks to minutes, and manage their cloud applications more efficiently. If you would like to get hands-on experience with Stackato, you can get Stackato by downloading the free micro cloud, requesting a customized demonstration or signing up for a POC.