Heroku, the Salesforce-owned company that powers the application-development process of hot startups like Lyft and Upworthy, announced a new product line Thursday called Heroku Enterprise. It’s geared for big companies that want to develop the kind of modern applications seen at startups while providing the type […]
Heroku’s new app-development product line is meant for the enterprise originally published by Gigaom, © copyright 2015.
Server monitoring gets hot: SolarWinds, which monitors multi-vendor technologies running in-house, last week bought Librato to extend its reach into the cloud. Librato is noted for its ability to watch workloads running in Heroku and Amazon Web Services as well as in internally run Rails, Node.js, Ruby, […]
Since day one, developers from all over the world have been deploying apps on Heroku, and we’re extremely proud of the strong global community we’ve built. Our European customers in particular have asked for the ability to deploy applications geographically close to their European customer base so they can offer a better user experience with more responsive apps. In 2013 we launched 1X and 2X dynos in Europe to meet this demand. Today we’re pleased to announce the general availability of Performance Dynos in our European region.
The availability of Performance Dynos in Europe provides the flexibility needed to build and run large-scale, high-performance apps. Performance Dynos are highly isolated from workloads in other dynos and contain 12 times the memory of a single Heroku Dyno, resulting in significantly faster and more consistent response times.
Deploying apps in the Heroku EU region can reduce latency by 100ms or more for European users, improving responsiveness and application experience. With the launch of Performance Dynos in Europe, highly scaled apps can reach even greater improvements. We also have 110 Heroku Add-ons available in the EU region to extend application functionality. All these EU region services are managed by the same unified Heroku interface that you are already familiar with, and there is no difference in pricing from our US region.
Ranging from small EU-based startups to international Fortune 500 enterprises, we’ve seen strong adoption of Heroku in Europe. In 2014, the number of apps deployed in Heroku’s Europe Region increased by nearly 180%. This includes great companies like Toyota Motor Europe and Sweden’s largest commercial television company, TV4.
When the TV4 team moved their first web apps from the US region to the European region, they immediately saw 150 ms improvements in web performance. Today, TV4 deploys all of their Heroku apps to the EU region and takes full advantage of Performance Dynos to deliver the very best user experience to their customers.
It is more important than ever to build responsive apps that react to customer actions in milliseconds. Heroku is committed to sustained investment in Europe and beyond to ensure that you have the geographic deployment options you need to meet this goal.
The post Introducing the General Availability of Performance Dynos in Europe appeared first on Platform as a Service Magazine.
Byron Sebastian, the former CEO of Heroku who was most recently an executive vice president at Salesforce.com, is joining the board of directors at developer darling Codenvy. The company sells a cloud service, and software, that uses Docker containers to simplify the configuration, deployment and sharing of development environments. Sebastian, […]
Ex-Heroku CEO Byron Sebastian joins the board at Codenvy originally published by Gigaom, © copyright 2014.
A quick glance at most any phone shows the importance and urgency — for businesses of all kinds — of creating mobile customer apps. Our everyday activities — finding a ride, ordering a meal or turning on a light — are increasingly mobile experiences.
But delivering a great omnichannel experience to customers requires more than just the work of the application developer. The larger organization is involved in following up with prospects, fielding service inquiries, and sending relevant marketing messages. Orchestrating this tapestry of touchpoints often requires developers to integrate with systems used by non-developers, including sales, service, marketing and community management systems.
At Heroku, our core belief around making the developer experience as simple as possible extends to the ecosystem of Salesforce products and services with which developers integrate their applications. Today we are releasing Heroku CX Patterns, a set of reference architectures and technical resources for creating a comprehensive customer experience with Heroku and Salesforce. With Heroku CX Patterns, developers can get starter apps, sample code, and documentation that will help them build out apps that utilize a wide range of Salesforce services.
The fictional setup for the primary starter app, “Nibs,” is that of a high-end chocolatier looking to engage customers via a loyalty mechanic triggered by in-app activities. The base loyalty application is deployable via Heroku Button. Additional documentation and sample code inside the Nibs GitHub project can be used as a guide for developers who want to enhance the baseline customer experience through integrations with other Salesforce products. Optional modules include live video chat for customer service, in-app push notifications through Marketing Cloud, a Salesforce Communities integration, and a Journey Builder custom activity app.
Much of the power of Nibs comes from the content and customer data synchronization between Heroku Postgres and Salesforce via Heroku Connect. Heroku Connect is a bi-directional data synchronization service between Salesforce and Heroku Postgres that enables developers who know SQL to work with Salesforce data inside a Heroku Postgres database. This makes it easy for developers to deliver customer-facing mobile apps that seamlessly integrate with Salesforce data.
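Because Heroku Connect exposes synced Salesforce objects as ordinary database tables (conventionally under a `salesforce` schema in Postgres), application code can read customer data with plain SQL. The following is a rough sketch of that pattern using an in-memory SQLite database to stand in for Heroku Postgres; the table name and columns here are illustrative assumptions, not the exact Heroku Connect mapping.

```python
# Sketch: querying Salesforce data synced into a SQL database.
# SQLite stands in for Heroku Postgres; with Heroku Connect the table
# would live in the "salesforce" schema of your Postgres database.
# Table and column names below are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE salesforce_contact (
        sfid           TEXT PRIMARY KEY,  -- Salesforce record id
        name           TEXT,
        email          TEXT,
        loyalty_points INTEGER
    )
""")
conn.executemany(
    "INSERT INTO salesforce_contact VALUES (?, ?, ?, ?)",
    [("003A01", "Ada", "ada@example.com", 120),
     ("003A02", "Grace", "grace@example.com", 340)],
)

# Plain SQL is all a developer needs to work with the synced data.
top = conn.execute(
    "SELECT name FROM salesforce_contact ORDER BY loyalty_points DESC"
).fetchall()
print([row[0] for row in top])  # highest-point customers first
```

Writes to such a table would flow back to Salesforce through the bi-directional sync, which is what lets non-developer teams see app activity in their own tools.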
In addition to the core Salesforce apps — Sales Cloud and Service Cloud — which can be integrated with Nibs via Heroku Connect, Nibs includes support for other members of the Salesforce product family via direct API sample code and documentation. These include:
The intent of Heroku CX Patterns is to provide an example for developers to layer Salesforce services into their Heroku customer engagement apps. Developers can start simply and build out the app experience over time, as various products and services become relevant to their particular use case. Join us at Dreamforce this week to learn more about Heroku CX Patterns and Nibs. There are several Nibs/Heroku Connect breakout sessions listed in the DF14 Agenda, with demos in the Dev Zone, the Campground, and at the Lightning Theater throughout the event.
Two-factor authentication is a powerful and simple way to greatly enhance security for your Heroku account. It prevents an attacker from accessing your account using a stolen password. After a four-month beta period, we are now happy to make two-factor authentication generally available.
You can enable and disable two-factor authentication for your Heroku account in the Manage Account section of Dashboard.
Before you turn it on, please read on to understand the risks of account lock-out. You can also refer to the Dev Center docs for more details.
Without two-factor authentication, an attacker can gain access to your Heroku account by just knowing your password. The most common way attackers get access to passwords is by hijacking email accounts and issuing a password reset request. If you reuse the same password for multiple services, an attacker may also learn your password if one of your other services is compromised and its password database leaks (therefore, never use the same password for multiple services).
After you turn on two-factor authentication, you can only authenticate by providing both the password and a "second factor" code. The second factor code is a code that can only be used once or that expires very quickly (30-60 seconds). You obtain the code from an authenticator app on your mobile device.
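The rotating code an authenticator app displays is typically a time-based one-time password (TOTP, RFC 6238): an HMAC over the current 30-second time window, truncated to six digits. A minimal sketch of the standard SHA-1 variant (an illustration of the general mechanism, not Heroku's internal implementation):

```python
# Minimal TOTP sketch (RFC 6238): the "second factor" code is derived
# from a shared secret and the current 30-second time window.
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    counter = int(timestamp) // step                       # time window index
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at t=59 -> "287082"
print(totp(b"12345678901234567890", 59))
```

Because the code changes every window and each value is accepted at most once, a stolen password alone is no longer enough to authenticate.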
Now, it is only possible to access your account by knowing your password and having access to your (unlocked) mobile device.
When you enable two-factor authentication, it is critical that you download a set of recovery codes and store them in a safe place. If you lose your mobile device or if it gets wiped, you can authenticate using these recovery codes in place of the two-factor code generated by your device.
If you have enabled two-factor authentication and not saved your recovery codes, go to the accounts page now and download your codes.
If for some reason you have neither your two-factor device nor your recovery codes, there are a few additional ways you may be able to recover.
When you enable two-factor authentication, please understand that
It is possible for you to lock yourself out of your account with no ability to regain access.
It is critical that you download recovery codes and store them in a place where you can access them in case of an emergency.
If you are locked out and none of the recovery methods work for you, there is no guarantee that you can regain access to your account because we may not be able to confirm ownership of the account.
In the future, we may add additional forms of account ownership verification to aid in cases of lock-out, but there is no single solution that fully solves this problem.
Therefore, please do save those recovery codes!
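Recovery codes like these are generally just high-entropy one-time random strings, generated when you enroll and stored for verification server-side. A hedged sketch of how such codes might be generated (the count, length, and alphabet are illustrative assumptions, not Heroku's actual scheme):

```python
# Sketch: generating one-time recovery codes with a cryptographically
# secure RNG. Count, length, and alphabet are illustrative assumptions.
import secrets

def make_recovery_codes(count: int = 10, length: int = 10) -> list[str]:
    alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
    return [
        "".join(secrets.choice(alphabet) for _ in range(length))
        for _ in range(count)
    ]

codes = make_recovery_codes()
print(len(codes), len(codes[0]))  # 10 codes, 10 characters each
```

The important property is that each code is unpredictable and usable only once, which is why a downloaded copy kept somewhere safe can substitute for a lost device.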
Security is a multi-faceted problem and two-factor authentication is not designed to protect against all possible attacks. For example, it will not protect you against malware that gives a remote attacker access to your computer. Two-factor authentication is specifically designed to protect against password leaks. You should continue to follow all other security best practices to achieve maximum protection.
The Heroku unit of Salesforce.com wants to make it easier than ever to invoke its platform-as-a-service (PaaS) environment. At the company’s ExactTarget Connections conference this week, Salesforce.com announced Heroku DX, a set of services that includes tools that make it possible to deploy code in the Heroku environment with the click of a button.
One of our core beliefs at Heroku is that developers do their best work when the development process is as simple, elegant, and conducive to focus and flow as possible. We are grateful for how well many of our contributions to that cause have been received, and today we are making generally available a new set of features that have been inspired by those values.
Collectively, we call these new features Heroku DX–the next evolution in Heroku’s developer experience. Our goal with these new features–Heroku Button, Heroku Dashboard + Metrics and Heroku Postgres DbX–is to make it faster than ever for developers to build, launch and scale applications.
Heroku is known for making it easy to get started on a new app. Many tutorials and sample apps contain step-by-step guides for deploying to Heroku because it is one of the easiest platforms to start on. But we wanted to make it even easier. We wanted step 2 to be: “There is no step 2”.
Heroku Button makes this a reality. With a single click you can set up a new app from sample code, complete with environment configuration, Add-ons and the first deploy. Any public GitHub project can be Heroku Button enabled simply by adding an app.json file with some additional metadata. During the beta, we saw over 400 Heroku Buttons created, including examples from Dropbox, Twilio, Uber and Coinbase. As part of general availability, we are launching a real-time view into the Heroku Button community at buttons.heroku.com, which is continually updated with the newest and most popular buttons.
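An `app.json` file declares the metadata a button needs to provision the app: a name and description, any required config vars, and add-ons to attach. A minimal hedged example (the repository URL, env var, and add-on shown are illustrative, not from a real project):

```json
{
  "name": "Sample Notes App",
  "description": "A demo app deployable with one click via Heroku Button",
  "repository": "https://github.com/example/sample-notes-app",
  "keywords": ["node", "demo"],
  "env": {
    "SECRET_TOKEN": {
      "description": "A secret key for signing sessions",
      "generator": "secret"
    }
  },
  "addons": ["heroku-postgresql"]
}
```

With this file in the repository root, the button can set up environment configuration, provision the listed add-ons, and run the first deploy without any further steps from the user.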
At the center of Heroku DX is an entirely new web Dashboard redesigned from the ground up for improved developer experience. Under the hood, the new Dashboard is written in Ember.js, allowing for faster and smoother interactions. At the surface, a new visual design makes navigating your collection of apps and associated resources and properties simpler. New interaction patterns, such as drag and drop app assignment, simplify previously complex tasks. As a modern web app, the new Dashboard is fully responsive, and works across devices and screen sizes, making it easy to manage and scale your Heroku apps wherever you go.
A new Activity section in Dashboard makes it dramatically easier for developers to collaborate. Every activity on every app shows up in the Dashboard so everyone on the team knows what’s going on. You can even inspect the build output from team members’ builds and help them troubleshoot problems right there in the Dashboard.
With Heroku DX, we are also introducing a significant enhancement to the runtime management of your apps with a new Metrics section in Dashboard that gives you a refreshingly simple and straightforward view on your application’s performance.
Heroku Metrics surfaces the key runtime attributes of your application, including Response Time, Requests/Sec, and CPU, and adds an intelligence layer that automatically highlights key recommendations and performance changes. Unified across a single time axis, the relationship between application metrics is made clear, allowing you to easily see their interactions. The new Metrics feature is available for any application that has been scaled to more than one running dyno.
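The kind of aggregation such a dashboard performs can be sketched simply: bucket raw request timings onto a shared time axis, then derive throughput and response time per bucket. A toy illustration of the idea (not Heroku's implementation; the sample log data is invented):

```python
# Toy sketch: aggregating raw request logs onto a shared time axis,
# the way a metrics dashboard unifies response time and requests/sec.
from collections import defaultdict

# (timestamp_seconds, response_time_ms) pairs from a hypothetical log
requests = [(0.2, 40), (0.7, 55), (1.1, 120), (1.4, 60), (1.9, 80)]

buckets = defaultdict(list)
for ts, ms in requests:
    buckets[int(ts)].append(ms)           # 1-second buckets on one time axis

for second in sorted(buckets):
    times = buckets[second]
    rps = len(times)                      # requests per second
    avg = sum(times) / len(times)         # mean response time in ms
    print(f"t={second}s  req/sec={rps}  avg_ms={avg:.1f}")
```

Plotting both derived series against the same axis is what makes correlations, such as response time rising with load, immediately visible.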
At the center of many applications running on Heroku is Heroku Postgres, and as part of the Heroku DX launch we are releasing one of our most significant upgrades ever to our Postgres Database-as-a-Service. These new Postgres features–Heroku Postgres DbX–are designed to advance the experience of developing and managing databases, much as Dashboard enhances the rest of your developer experience. At the core of Postgres DbX is a new lineup of database plans, which on average offer 2-3X the performance of our previous plans at the same price. Premium plans and above also automatically gain encryption at rest, simplifying many compliance requirements.
Heroku Postgres’ most prominent addition is Performance Analytics. Understanding the operation and behavior of a relational database–in a way that is clear and actionable–typically requires an uncommon set of tools, skills and patience. Performance Analytics reveals one of the key benefits of the “as-a-Service” model as applied to databases: a complex pipeline of configuration, logs, processing and visualization is reduced to a simple and immediate web experience for the developer.
The Dev Center provides detailed technical information on all of the features launched today. You can also learn more by registering for an online session we’ll be delivering on October 8.
The post Introducing Heroku DX: The New Heroku Developer Experience appeared first on Platform as a Service Magazine.
Last week we announced an important new partnership with Cloudbees to bring a native enterprise Jenkins service to Pivotal CF (PCF). This provided a first glimpse at the power of Pivotal Network, a service catalog for PCF. Pivotal Network allows any private PCF installation to import additional scalable platform services from Pivotal and a growing ecosystem of partners. PCF then manages the health and scalability of these services natively.
As Cloudbees joined the extended Pivotal ecosystem, they also exited their hosted RUN@cloud PaaS business, which they had previously offered on AWS. In a thoughtful exit interview, the talented Steve Harris compared their four-year journey into the PaaS market to a high-risk polar expedition. As a fellow adventurer with Harris in the platform market since 2010, I would sum up my own experience and many of the conclusions of his blog with this chart:
The public platform space has been a brutal market for startups. The $212M Salesforce buyout of Heroku in late 2010 was a false positive, and confused VCs. The investments became irrationally exuberant and the results were predictably bad. Public PaaS startups were too early, under-resourced, and focused on the wrong segment to win big in this new market. VC-funded plays such as Dotcloud, Appfog, Cloudbees, and even the mobile specialists Parse and Stackmob have all since ‘pivoted’ or been modestly absorbed into larger organizations.
Deep pocketed investment by the major cloud providers such as Google’s recent free $100k for startups made it even harder for independent cloud platforms to compete or raise additional funding (all while having to pay the IaaS providers).
Finally, as Harris observed, Cloud Foundry executed well on a broad ecosystem play, creating the “Cloud Foundry effect”:
“There is a Cloud Foundry effect: Cloud Foundry has executed well on an innovate-leverage-commoditize (ILC) strategy using open source and ecosystem as the key weapons in that approach.”
Pivotal’s popular shared Cloud Foundry service for developers, PWS, is priced at only $2.70/month for a small hosted application. Stuck between the rapid growth of AWS, Google, and the emerging enterprise standard of Cloud Foundry there just wasn’t margin left for the hosted alternative platforms to live on.
In contrast to the startup failures in the public PaaS market, the revenue in the private platform market is significant and growing exponentially. While many enterprises are leveraging public cloud, many of the “crown jewels” applications still reside in private data centers, along with the majority of IT budgets. This means that despite public cloud adoption, architectural standards often start in the private cloud landscape and extend out.
Executing in this segment generates large purchase orders but requires shipping installable enterprise software, multi-year R&D, and a large enterprise sales force (the last two points being beyond the scope of most VC-funded business plans). The deals are large because they tap into the enormous yearly spend on enterprise middleware currently dominated by innovation laggards like Oracle’s SOA Suite. These legacy products were designed for infrequently updated applications built for vertical scaling—two trends which are now out of date. As Harris’s blog observed:
“Pivotal Cloud Foundry has produced partnerships with the largest, established players in enterprise middleware and apps. In turn, that middleware marketplace ($20B) is prime hunting ground for PaaS, and Cloud Foundry has served up fresh hope to IT people searching desperately for a private cloud strategy”.
While public-only PaaS players have continued to exit the market, private, software-based PCF has racked up an impressive parade of enterprise customers migrating from Oracle middleware to a next-generation private cloud architecture. In June, more than a thousand enterprise users gathered for the Cloud Foundry Summit, speaking to the remarkable benefits of building an application-centric private cloud.
Early pioneers like Corelogic’s Richard Leurig are betting their company’s future on a next generation platform architecture and breaking down years of siloed applications on heterogeneous infrastructures. The market is waking up from years of dealing with the complexity of legacy products and demanding a cloud API integrated solution. They are also transforming their development processes to be more agile, anchored on continuous integration with systems like Jenkins. This is resulting in earmarked budgets for large private PaaS build outs.
As enterprises adopt PCF, it is changing their whole concept of private cloud. Line of business developers gain immediate self service access to rapidly test and deliver applications and APIs. Monsanto observed an immediate 50% reduction in LOB development time and costs with PCF and plans to standardize the company on it in 2015. For infrastructure teams PCF’s minimal IaaS requirements and deep platform provisioning automation provides immediate private cloud features with just a few clicks and a thirty minute install time.
Customer demand to extend the platform’s service catalog with Jenkins, Cassandra, Elasticsearch, etc. is overwhelming. Even enterprise stalwarts such as SAP and SAS have announced that customers are pushing them to offer their software on Cloud Foundry based private clouds. Almost every day brings an inquiry from an existing ISV who wants to be part of the PCF service catalog because its customers are migrating to the platform.
Heading into 2015, we are hiring hundreds of additional developers, operations, marketing and sales professionals, and ramping every part of the team to take on our forty billion dollar enterprise opportunity.
The public-only PaaS meme centered on hosting, and missed the true revolution in software architecture happening in enterprises. This allowed Cloud Foundry to germinate in R&D for several years and surprise existing vendors, who prepared for the change merely by offering their existing legacy products on demand. Oracle kept its decade-old middleware architecture focused on heavyweight J2EE, but enterprise applications are moving from a heavyweight client-server legacy model to a mobile-centric paradigm dominated by REST APIs and microservices.
The production ready PCF platform has emerged as a way for enterprises to deliver on an ambitious private cloud initiative, ready for mobile APIs and microservices based applications. It delivers highly scalable, highly available services, fully automated on any cloud, with an easy to use developer API perfect for agile teams.
Using PCF is believing, and after one recent successful pilot, a bemused Fortune 50 CIO asked me, “IBM has been promising me this level of simplicity and automation for years, why are you able to pull it off while they failed?” I answered, “Only by bringing the discipline of modern cloud architecture to enterprise applications were the efficiency gains you witnessed possible. No one has ever brought a cloud native platform to the enterprise datacenter—until now.”
The next few years of this expedition are going to be very interesting!
Our regular MC, Chris Ferris from IBM, was absent this month, so Michael Maximilien (“dr.max”) from IBM kicked off this month’s Cloud Foundry Community Advisory Board (CF-CAB) Meeting and kept things on track.
Mike Maxey from Pivotal gave an update on where the Cloud Foundry foundation is in its formation. During the summer, and especially after the Cloud Foundry Summit, there has been a drop in momentum for this.
Mike said that we are still targeting the end of the year for standing up the foundation. Pivotal has been working with the Platinum members of the foundation to define the guiding principles and the high-level ideas around how governance might work. They are also looking at how projects will come together.
By October they plan to have all the requirements and documents finished for the non-profit entity. Then by November they hope to do an announcement of the Foundation.
They are now at a point where they can start working on the formal documents. Mike noted that this will obviously take a substantial amount of time.
Mike said they are hoping to share ideas and content with the Cloud Foundry community in the next one or two months.
At the Cloud Foundry Summit a total of 35 foundation members were presented. Since then a few more companies have shown interest in joining the foundation, but Mike said that it is unlikely these will be announced until November. Mike gave two reasons for waiting until November to announce any new members:
Joanne Muszynski from IBM asked if the regular foundation board meetings will start in November, or whether they are already happening. Mike confirmed that those meetings have not yet started and they will likely start in November.
Joanne also said that she had heard that it was becoming harder to form these kinds of non-profit foundations and wondered if this was causing any delays. Mike said that it was becoming more difficult to form a classic non-profit organization, but the Cloud Foundry foundation is going to use the same model as the OpenStack Foundation.
While Mike does not foresee any issues, he did say there are some hoops to get through. For instance, you need to publicly display all of your by-laws and documents for 45 days before the non-profit status will be granted by the government.
Heather Meeker, the foundation attorney, has looked at the process. Also the Pivotal attorneys have looked at it. Therefore, Mike thinks it should be quite straight-forward.
James Bayer gave some updates on the Product Managers within Pivotal.
These people will be points of contact for the various Cloud Foundry projects.
A v9 release of the MySQL service has just been made available.
Shannon Coen, the Product Manager for Services at Pivotal, said they are looking to make the MySQL service more robust in terms of high-availability. The v9 release is a single MySQL server and only supports one shared plan. The next release of the MySQL service will support multiple plans.
Some significant architectural changes have been made for this upcoming release, which they are targeting for within the next month. The biggest change is that the MySQL server has been replaced with MariaDB. Also, MariaDB Galera clustering has been introduced, which is a synchronous multi-master cluster for MariaDB.
Initially this service will not be highly available. All requests will be proxied via a single HAProxy server. However, the subsequent release will look at adding an additional HAProxy server and implementing fail-over.
Shannon said that issues have been resolved with the Riak service, which involved data not being written to a persistent volume.
Syslog forwarding has also been introduced.
A new v4 final release of this Riak CS service is now available.
Discussing the Services road-map, Shannon told us that they are planning to add support for updating or changing a single service instance. The initial focus of this will be to change the service plan.
Shannon said that he will be working with Greg Oehman (the new CLI Product Manager at Pivotal) on the related new CLI functionality to update a service instance.
This will then lay the groundwork for being able to change other things about the service instance.
Generally, the MySQL (via cf-mysql-release) and Riak CS (via cf-riak-cs-release) services are deployed separately to Cloud Foundry (via cf-release). Shannon said that the release engineering team at Pivotal are looking at doing composite deployments that include these services. This is to enable the use of the highly-available MySQL service (see above) to back other components, such as the Cloud Controller and UAA. Likewise, Riak CS could be a potential replacement to the NFS blob store.
These two open-source services, MySQL (MariaDB) and Riak CS, will also provide out-of-the-box HA services with this composite release.
James Bayer said that this would be a big milestone if we could have the data-stores of key Cloud Foundry components be highly-available out-of-the-box.
Greg Oehman from Pivotal started his update by warning that he had only been on the job for 3 days.
He said that the internationalization work is practically complete. There are just a few more tasks to wrap up.
Some work has been done for Services, which involved service plan access, visualizations and filtering.
For internationalization, the client now has support for five languages, which Greg said is a great start. He said that the next one might be Catalan, which Ferran Rodenas (“Ferdy”) from Pivotal has requested.
Greg intends to re-ignite the conversation around plugins and find the best path to move forward with this. He said they want to do this soon and do it rapidly.
Alex Jackson from Pivotal said that they are working on eliminating the parts of Loggregator that are not highly-available. For instance, the data pipeline path is not redundant going through the Traffic Controller. Even if you do have multiple Traffic Controllers, it will still only use one of them.
They are using a similar pattern to Diego, by putting the active components into etcd. This allows for monitoring the active health of these components and traffic will be routed only to the currently active components within the system.
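The pattern described, registering active components in a store like etcd and routing traffic only to entries that are still fresh, can be sketched in a few lines. This toy in-memory registry stands in for etcd and illustrates the general TTL-heartbeat idea, not Loggregator's actual code:

```python
# Toy sketch of TTL-based service registration: the etcd pattern used to
# route traffic only to currently active/healthy components.
class Registry:
    def __init__(self, ttl: float = 10.0):
        self.ttl = ttl
        self.entries = {}                 # component name -> last heartbeat time

    def heartbeat(self, name: str, now: float) -> None:
        self.entries[name] = now          # component re-registers itself

    def active(self, now: float) -> list[str]:
        # Only components whose heartbeat is fresher than the TTL get traffic.
        return sorted(n for n, t in self.entries.items() if now - t < self.ttl)

reg = Registry(ttl=10.0)
reg.heartbeat("traffic-controller-1", now=0.0)
reg.heartbeat("traffic-controller-2", now=0.0)
reg.heartbeat("traffic-controller-1", now=8.0)   # tc-2 stops heartbeating

print(reg.active(now=12.0))  # -> ['traffic-controller-1']
```

A component that crashes simply stops heartbeating, so its registry entry expires and routers drop it without any explicit de-registration step.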
Alex said that they will continue to work on making this new architecture stable over the next month. They will also work on including the beginnings of the Loggregator metrics.
The new Loggregator metrics will be provided via new Loggregator API endpoints, so will not affect existing API endpoints and will therefore be backwards compatible with the CLI’s existing integration with Loggregator.
Alex said his team would like to promote the dropsonde repository out of the Cloud Foundry incubator and into the main Cloud Foundry repos. This defines the emitter format for the beginnings of the metrics used by the router, which Loggregator intends to also use.
James Bayer gave the update from the Runtime team.
v175 was the first release to include fully implemented CF Security Groups. James said that this allows administrators to set up the outbound network policies for applications. This can apply at staging or runtime.
There are a core set of policies that apply to all applications and then additional security groups can be applied to individual Spaces.
For example, a Space named “Production” might get exclusive network access to talk to the “Production” database. An application pushed into Cloud Foundry gets the general security access provided by the system, plus specific security access provided by the Space policies.
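Cloud Foundry security groups are expressed as JSON lists of egress rules. A hedged example of a rule set that might be bound to the “Production” Space to allow access to a production database (the address and port are illustrative, not from a real deployment):

```json
[
  {
    "protocol": "tcp",
    "destination": "10.0.11.0/24",
    "ports": "5432"
  }
]
```

Rules bound at the Space level are added on top of the platform-wide default rules, giving each Space only the extra network access it needs.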
James said that this improves the multi-tenancy of the system and allows the treating of individual spaces as individual tenants.
James mentioned a new feature that gives the ability to restart an individual application instance. Prior to this there was only the ability to do a restart at the application level, which involved fully stopping the application and deploying new instances of all of the application containers.
James said that this is a minor feature, but “kinda neat”. He gave the example of having 5 running instances and seeing an issue with just one of them. Now you can restart that one instance without downtime to the entire application.
A new Ubuntu 14.04 (“Trusty”) stem-cell has been built for cf-release and works with v176. You can still also use Ubuntu 10.04 (“Lucid”) with v176.
James warned that going forward, support for Lucid stem-cells will be dropped. He said that there have been some requests to continue producing Lucid stem-cells, but the burden of supporting both Lucid and Trusty stem-cells long-term is too much for the Pivotal team. This is due to the cost of providing continuous integration testing on these components. The matrix of things that they are testing is already very complicated.
While nothing intentionally will be done to break support for Lucid, thorough testing will only be done on Trusty. Therefore, they will not be able to certify support for Lucid.
Colin Humphreys from CloudCredo asked whether the application containers’ filesystem will also move to Trusty, or whether it will remain on the Lucid filesystem. He also suggested that this might be an option via the “--stack” parameter.
James said that for now they are sticking with the Lucid filesystem for the Warden containers. The main reason given was lack of experience with the Trusty filesystem within containers and all the Cloud Foundry buildpacks currently work with the Lucid filesystem. He said that it is a long-term, but unscheduled, plan to move to the Trusty file-system for containers.
James said that the Runtime team are currently working on “Space Quotas” and “Organizational Buildpacks”. Both of these provide a way to provide these existing pieces of functionality at a greater level of granularity.
Other things briefly mentioned by James were…
James said that the designs for these are not finalized and that you can follow the mailing-list to see discussions on how these problems are being tackled.
David Stevenson from Pivotal gave an update on Notifications, before running off for his flight.
David said they now have a synchronous application that obeys the Notifications API and allows the components of Cloud Foundry to send notifications to either individual Users or all Developers of a specific Space.
The application uses templates, which can be modified by whoever deploys it.
They plan to announce it as “ready for use” within the next week, after which it will be pushed to the Services teams for integration.
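As an illustration only — the endpoint path, hostname, and payload fields below are assumptions, not the published Notifications API — sending a notification to every Developer of a Space through such a service might look like:

```shell
# Hypothetical request shape; consult the Notifications API
# documentation for the real endpoints and payload fields.
curl -X POST https://notifications.example.com/spaces/SPACE-GUID \
  -H "Authorization: bearer $UAA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"kind_id": "quota-warning", "subject": "Quota alert", "text": "Your space is near its quota."}'
```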
James Bayer proxied the update he got on Diego from Onsi Fakhouri of Pivotal.
Diego continues to be hardened and tested. Pivotal is now using an internal deployment that co-locates Diego in their production environment, and they are able to opt in to using Diego via environment variables. This covers both the staging and running of applications.
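The variable names below are illustrative rather than confirmed in the meeting; the mechanism described is simply per-app environment variables followed by a restart:

```shell
# Opt a single app in to Diego-based staging and running
# (the variable names are assumptions for illustration).
cf set-env my-app DIEGO_STAGE_BETA true
cf set-env my-app DIEGO_RUN_BETA true
cf restart my-app
```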
James said that there are still “plenty of gaps”. There is currently no health management in Diego: if your application crashes under Diego, it will not be restarted. Some application statistics have not been implemented, and there is no read-only filesystem equivalent to the directory server.
IBM and CloudCredo are helping to add support for Docker containers in Diego. This would work by end-users specifying a Docker image instead of a buildpack when deploying an application. Once the container is running, it is treated like any other application in Cloud Foundry, said James; for instance, there is no change to the routing of requests or the handling of Loggregator logs.
The work for Docker support in Diego started a couple of weeks ago and there is a design document you can read and comment on. IBM developers are working with Pivotal on this at the Pivotal offices and they expect to see progress on this over the next few months.
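Under the design described, a deploy would name an image instead of relying on buildpack staging; sketched here with a flag name that is an assumption, since the work had only just started:

```shell
# Push from a Docker image rather than staging with a buildpack.
# The --docker-image flag shown here is illustrative of the design,
# not a command that existed at the time of the meeting.
cf push my-app --docker-image myorg/my-app:latest
```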
James gave a brief update on BOSH.
The Trusty stemcell contains the go-agent, which means that Pivotal’s production environments are now using the go-agent; therefore they believe this agent to be production-ready.
James said that there is a lot of focus on external CPIs from Pivotal’s BOSH team. Usability is also being constantly improved.
UAA will shortly get a new Product Manager.
Work continues on OAuth.
James said that the Java Buildpack will shortly move to Java 8 as the default Java runtime, though it will still be possible for users to run applications under Java 7.
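One way such an override could surface is through a buildpack configuration environment variable; the variable name and value format below are assumptions based on how the Java Buildpack exposes JRE configuration, not something stated in the meeting:

```shell
# Pin an app back to a Java 7 JRE instead of the Java 8 default
# (the configuration key is an assumption for illustration).
cf set-env my-app JBP_CONFIG_OPEN_JDK_JRE '{ jre: { version: 1.7.0_+ } }'
cf restart my-app
```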
The Python, PHP, Go, Ruby and Node.js buildpacks are being evolved to work better offline and to take upstream commits from Heroku in a more automated way. These will be certified to work with Cloud Foundry in “offline” mode, then packaged up in cf-release.
Thanks to Michael Maximilien from IBM for running the meeting and also sending me his notes to assist in the write-up.
The Cloud Foundry Community Advisory Board started the session focusing on the Cloud Foundry Summit held in San Francisco earlier this month. Overall, the feedback about the conference was very positive. Cornelia Davis summed it up best: “The conference was really great…the energy was tremendous. The tidal wave that you could feel from the momentum around Cloud Foundry and the Cloud Foundry community was palpable there.” A big thank you to everyone who organized the event!
Chris Ferris of IBM started off today’s Cloud Foundry Community Advisory Board meeting reflecting on the recent OpenStack conference. “One of the things I observed is that it’s definitely starting to trend towards discussions around containers, and Cloud Foundry and OpenStack seem to be like hand-in-glove in a lot of the conversations.” Renat Khasanshyn from Altoros added in the chat that at the OpenStack Summit, “Docker conversations in the hallways heavily leaned towards Cloud Foundry” and the “OpenStack ecosystem turned the corner and becoming all in with Cloud Foundry.”