Server management tools shed light on data center issues
IT organizations that want to grow cloud deployments through DevOps have new options in Puppet Labs' latest release, Puppet Enterprise 3.0, but some in the industry question the scalability of Puppet's declarative, model-based approach.
A debate simmers in the industry between proponents of Puppet and those of other configuration management automation products. The debate centers on two programming concepts: a declarative, model-based approach (the one Puppet takes) and an imperative, or procedural, approach (the one generally taken by Puppet rival Chef). The declarative approach requires that users specify the desired end state of their infrastructure, and Puppet's software then makes it happen. The imperative/procedural approach configures systems through an explicit series of steps.
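The distinction can be sketched in plain Python (this is illustrative pseudocode for the two styles, not Puppet or Chef syntax): the declarative version describes a desired state and converges to it only if needed, while the imperative version simply performs its steps every time it runs.

```python
import os
import tempfile

def ensure_file(path, content):
    """Declarative style: state the desired end state; act only if reality differs."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return "unchanged"   # already in the desired state
    with open(path, "w") as f:       # otherwise converge to it
        f.write(content)
    return "converged"

def create_file(path, content):
    """Imperative style: perform the action unconditionally, step by step."""
    with open(path, "w") as f:
        f.write(content)

# The declarative version is idempotent -- running it twice is safe:
path = os.path.join(tempfile.mkdtemp(), "motd")
print(ensure_file(path, "hello"))   # converged
print(ensure_file(path, "hello"))   # unchanged
```

Idempotency is what lets a declarative tool run repeatedly against a fleet without re-doing work, but it is also why every variation in the environment must be captured in the model up front.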
Any kind of automation is better than the manual systems management that was done before, said Jonathan Eunice, analyst with Illuminata Inc., based in Nashua, N.H.
But some proponents of the imperative/procedural approach to automation say the declarative model can break down when there are subtle variations in the environment, since declarative configuration files must be made for every edge case in the environment. Competitive products such as Chef can operate based on if-then statements and can also use a declarative approach where it's a better fit.
"Any kind of variation is where the declarative approach tends to be harder, because you come to this place where you have to have a very sophisticated model that takes into account all the different variances that occur," Eunice said. "That becomes a learning-curve issue."
However, for some, there's power in the declarative model, because it simplifies configurations and offers an easy way to understand how systems are configured, said Robert Snyder, director of outreach technology services for Pennsylvania State University.
"We've grown relatively large relatively quickly, and we haven't found anything in scaling up our Puppet configuration that would suggest that if we were to multiply the number of nodes by 10 or 100 that we would see any problems," said Jason Staph, a lead systems administrator for Penn State.
Puppet Enterprise 3.0 updates
Puppet Labs' Puppet configuration automation tool comes in two flavors: Puppet Enterprise and Puppet Open Source. Puppet Enterprise includes a number of features Puppet Open Source doesn't, such as a graphical user interface, technical support with defined service-level agreements and role-based access control.
Puppet Enterprise 3.0 has been updated with a new centralized back-end storage system that improves the software's performance. A new automated performance-testing framework has also allowed Puppet Labs to double the number of nodes supported under management compared with previous releases.
Puppet Enterprise 3.0 also includes orchestration with dynamic discovery, so users can discover nodes across an infrastructure using real-time queries, or against any data source. Once that node list is in the system, users have finer-grained control over the rollout of services; one example Puppet cites is the ability to roll out changes to a specified percentage of the infrastructure -- say, 10% -- so that any potential problems don't bring down all the nodes in the environment.
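The percentage-based rollout described above amounts to a canary deployment. A minimal sketch of the idea, assuming a node list has already been discovered (the function name here is illustrative, not a Puppet Enterprise API call):

```python
import math

def canary_batch(nodes, percent):
    """Return the first slice of nodes covering `percent` of the fleet.

    Changes would be applied to this batch first and verified before
    continuing to the rest of the infrastructure.
    """
    count = max(1, math.ceil(len(nodes) * percent / 100))
    return nodes[:count]

# 20 discovered nodes; a 10% rollout touches the first 2.
nodes = [f"web{i:02d}.example.com" for i in range(1, 21)]
print(canary_batch(nodes, 10))   # ['web01.example.com', 'web02.example.com']
```

If the canary batch shows problems, the rollout stops there rather than taking down the whole environment.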
Finally, Puppet Labs gets into the software-defined infrastructure game with new modules that allow it to orchestrate infrastructure resources such as network and storage devices.
Users of the software say 3.0 is a big improvement.
"More robust support for Windows is exciting for us," Penn State's Synder said. "We're also looking to tie our 3.0 installation to our VMware cluster for automated provisioning."
Live management -- the ability to identify groups of nodes and execute changes on them in real time -- also won praise from Penn State's Staph.
"I've probably had three or four change events I've gone through in our environment that, if I had only had live management tools right there, could've been a lot easier and a lot less nerve-wracking," he said.