Essential Guide: Server management tools shed light on data center issues

Declarative vs. imperative: The DevOps automation debate

Users call Puppet Enterprise 3.0 a boon, but some industry experts question Puppet's declarative approach to configuration management.

IT organizations that want to grow cloud deployments through DevOps have new options in Puppet Labs' latest release, Puppet Enterprise 3.0, but some in the industry question the scalability of Puppet's declarative, model-based approach.

A debate simmers in the industry between proponents of Puppet and those of other configuration management automation products. The debate hearkens back to two programming concepts: the declarative, model-based approach (closest to Puppet's) and the imperative, or procedural, approach (the one generally taken by Puppet rival Chef). In the declarative approach, users specify the end state they want for their infrastructure, and Puppet's software makes it happen; the imperative/procedural approach instead configures systems through an explicit, ordered series of steps.
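To make the distinction concrete, here is a minimal sketch in Python rather than in either product's actual language; the (type, name) resource keys, the converge() function and the state dictionaries are illustrative stand-ins, not Puppet's or Chef's APIs.

# Declarative style: describe *what* should be true; an engine figures out how.
desired = {
    ("package", "ntp"): "installed",
    ("service", "ntp"): "running",
}

def converge(actual, desired):
    """Compare actual state to the model and apply only the missing changes."""
    for resource, want in desired.items():
        if actual.get(resource) != want:
            print(f"converging {resource}: {actual.get(resource)} -> {want}")
            actual[resource] = want  # stand-in for a real provider action
    return actual

# Imperative/procedural style: spell out *how*, as an ordered series of steps.
def configure_ntp(actual):
    print("step 1: install the ntp package")
    actual[("package", "ntp")] = "installed"
    print("step 2: start the ntp service")
    actual[("service", "ntp")] = "running"
    return actual

state = converge({}, desired)     # first run applies both changes
state = converge(state, desired)  # second run finds nothing to fix

The second converge() call prints nothing, which is the declarative selling point: because the model describes an end state rather than a procedure, re-running it is safe by construction.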

Any kind of automation is better than the manual systems management that was done before, said Jonathan Eunice, analyst with Illuminata Inc., based in Nashua, N.H.

But some proponents of the imperative/procedural approach say the declarative model can break down when an environment has subtle variations, since a declarative configuration must be written for every edge case. Competing products such as Chef can branch on if-then statements and can also fall back to a declarative approach where it's a better fit.
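As a rough illustration of that breakdown -- the OS check and package names below are hypothetical -- an imperative recipe can absorb an edge case with an inline branch, while a purely declarative model has to encode each variant explicitly:

# Imperative style: an if-then statement handles the edge case inline.
def time_sync_package(node):
    if node["os"] == "rhel" and node["os_major"] < 7:  # hypothetical legacy hosts
        return "ntp"
    return "chrony"

# Declarative style: the model itself must enumerate every variant up front,
# for example one desired-state block per node class.
desired_by_class = {
    "default": {("package", "chrony"): "installed"},
    "legacy":  {("package", "ntp"): "installed"},
}

print(time_sync_package({"os": "rhel", "os_major": 6}))  # -> ntp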

"Any kind of variation is where the declarative approach tends to be harder, because you come to this place where you have to have a very sophisticated model that takes into account all the different variances that occur," Eunice said. "That becomes a learning-curve issue."

However, for some, there's power in the declarative model, because it simplifies configurations and offers an easy way to understand how systems are configured, said Robert Snyder, director of outreach technology services for Pennsylvania State University.

"We've grown relatively large relatively quickly, and we haven't found anything in scaling up our Puppet configuration that would suggest that if we were to multiply the number of nodes by 10 or 100 that we would see any problems," said Jason Staph, a lead systems administrator for Penn State.

Puppet Enterprise 3.0 updates

Puppet Labs' Puppet configuration automation tool comes in two flavors: Puppet Enterprise and Puppet Open Source. Puppet Enterprise includes a number of features Puppet Open Source doesn't, such as a graphical user interface, technical support with defined service-level agreements and role-based access control.

Puppet Enterprise 3.0 has been updated with a new centralized back-end storage system that can improve the software's performance. A new automated performance-testing framework implemented by Puppet Labs has also resulted in support for twice as many nodes under management as in previous releases.

Puppet Enterprise 3.0 also includes orchestration with dynamic discovery, so users can discover nodes across an infrastructure using real-time queries, or against any data source. Once that node list is in the system, users have finer-grained control over the rollout of services; one example Puppet cites is the ability to roll out changes to a specified percentage of the infrastructure -- say, 10% -- to ensure that any potential problems don't bring down all the nodes in the environment.
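That percentage-based rollout amounts to canary batching. Here is a minimal sketch of the idea in Python; the node names and the apply_change/healthy callbacks are hypothetical stand-ins, not Puppet Enterprise's orchestration API.

import math

def staged_rollout(nodes, apply_change, healthy, percent=10):
    """Push a change to `percent` of the nodes at a time, halting on failure."""
    batch_size = max(1, math.ceil(len(nodes) * percent / 100))
    for start in range(0, len(nodes), batch_size):
        batch = nodes[start:start + batch_size]
        for node in batch:
            apply_change(node)
        # Stop before touching the rest of the fleet if this batch looks bad.
        if not all(healthy(node) for node in batch):
            raise RuntimeError(f"rollout halted: unhealthy node in {batch}")

# Example: 20 nodes in 10% batches means changes land two nodes at a time.
staged_rollout(
    [f"node{i}" for i in range(20)],
    apply_change=lambda n: print(f"updating {n}"),
    healthy=lambda n: True,
)

In the real product the batching and health checks sit inside the orchestration engine; the sketch only shows the shape of the idea.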

Finally, Puppet Labs gets into the software-defined infrastructure game with new modules that allow it to orchestrate infrastructure resources such as network and storage devices.

Users of the software say 3.0 is a big improvement.

"More robust support for Windows is exciting for us," Penn State's Synder said. "We're also looking to tie our 3.0 installation to our VMware cluster for automated provisioning."

Live management -- the ability to identify groups of nodes and execute changes on them in real time -- also won praise from Penn State's Staph.

"I've probably had three or four change events I've gone through in our environment that, if I had only had live management tools right there, could've been a lot easier and a lot less nerve-wracking," he said.

Beth Pariseau is senior news writer for SearchCloudComputing. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.



Join the conversation

Where do you stand -- declarative or imperative automation? Why?

For DevOps, imperative -- because the field is new and rapidly evolving. It's too soon to rely on assumptions.

In the world of massive scale and complexity, the declarative approach falls down. Look at Google and Yahoo as an example. Yahoo's model was to have a declarative state where they tried to organize the entire world under categories, and they had a busy homepage to try and guide you to everything. In the early days it worked well enough -- and then it broke; the world is too big to model every single item. Google took a different, more imperative approach -- a single search bar -- realizing that it couldn't model the scale and complexity of the world and that, through intelligent search capabilities, it would simply connect the user to the information they were searching for.

In the end, declarative models break once they get stretched too far into the scale and complexity of today's IT environments. The last generation of IT automation products met a similar fate.

Chef's resource collection is actually declarative.

Puppet constructs a manifest on the Puppet master, serializes it and ships it down to the client to converge. In Chef, the 'manifest' is constructed on the client using data that is collected from the server and inspected on the client itself. This manifest is not typically serialized into a text representation; it exists as an array of resource objects in memory. In the convergence phase, these objects are evaluated just as a Puppet client's manifest is evaluated, and the configuration on the client is changed.

There are ways to abuse this in Chef, but they're mostly anti-patterns. It does give you the flexibility to *build* the resource collection imperatively -- you can use looping structures or inspect the client state as you're building the resource collection -- something which is going to be more difficult to accomplish using Puppet. But the imperative Ruby code you write should not be used to configure the system directly; it should be used to construct the resource collection. The actions which modify system state should entirely be declarative Chef resources being converged.
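For readers following along, the commenter's two-phase pattern can be sketched in plain Python (not Chef's actual Ruby internals): imperative code may loop and inspect while building the resource collection, but only the declared resources change the system during convergence.

# Phase 1 -- build the in-memory resource collection imperatively.
# Loops and client inspection are fine here; nothing touches the system yet.
resources = []
for pkg in ["nginx", "git", "htop"]:  # hypothetical package list
    resources.append({"type": "package", "name": pkg, "ensure": "installed"})

# Phase 2 -- converge: each declared resource is checked and, if needed, fixed.
def converge(resources, actual):
    for r in resources:
        key = (r["type"], r["name"])
        if actual.get(key) != r["ensure"]:
            print(f"installing {r['name']}")
            actual[key] = r["ensure"]  # stand-in for the real provider action
    return actual

converge(resources, {})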

I prefer the desired-end-state option, especially in a virtual world.

Since this model can adapt to a declarative approach if needed, I think it would make sense.

Google's search approach sometimes fails with its assumptions. And Google ads completely miss the context.
