DevOps interview questions
- Published: 27 August 2016
Preparing for the DevOps interview
This is a list of DevOps interview questions, including some general and abstract questions as well as more technical ones that a DevOps candidate might be asked in an interview.
What is DevOps?
DevOps is, in a nutshell, a cultural movement that aims to remove unnecessary silos in an organization through collaboration and communication. In less abstract terms, DevOps can be seen as a set of software development practices that enable automation and accelerate the delivery of products. The last element, automation, in turn requires a programmable, dynamic platform.
Which are the components of DevOps?
Operations: responsible for the infrastructure and operational environments that support application deployment, including the network infrastructure. In most cases this is the Sys Admin.
Devs: responsible for software engineering and development. In most cases Developers and Architects fall into this category.
Quality Assurance: responsible for verifying the quality of the product, such as Product Testers.
Do you think Devs and Ops will radically change their working routine?
In most cases not. Ops will still be Ops and Devs will still be Devs. The difference is that these teams need to begin working closely together.
How can you improve DevOps culture ?
- Open communication: a new culture is always created through discussions. In the DevOps approach, however, the talks focus on the product throughout its lifecycle rather than on the organization.
- Responsibility: DevOps becomes most effective when its principles pervade the whole organization rather than being limited to single roles. Everyone is accountable for building and running an application that works as expected. This translates into assigning wider responsibilities and rewards at various levels.
- Respect: open communication requires respect, which means respectful discussion and listening to other opinions and experiences.
- Trust: in a perfect DevOps culture trust is essential. Operations must trust that Development is doing its best according to the common plan. Development must trust that Quality Assurance is there to improve the quality of their work, and the Product Manager needs to trust that Operations is going to provide precise metrics and reports on the product deployment.
Which technologies can act as drivers to enable DevOps?
- PaaS: a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure.
- IaaS: a category of cloud computing services that abstracts the user from the details of infrastructure, such as physical computing resources, location, data partitioning, scaling, security, backup, etc.
- Configuration automation: automation is a big win in part because it eliminates the labor associated with repetitive tasks. Codifying such tasks also means documenting them and ensuring that they are performed correctly, safely, and repeatably across different infrastructure types.
- Microservices: a way of designing software applications as suites of independently deployable services.
- Containers: Containers modernize IT environments and processes, and provide a flexible foundation for implementing DevOps. At the organizational level, containers allow for appropriate ownership of the technology stack and processes, reducing hand-offs and the costly change coordination that comes with them.
What are microservices and why do they have an impact on Operations?
Microservices are a product of software architecture and programming practices. Microservices architectures typically produce smaller but more numerous artifacts that Operations is responsible for regularly deploying and managing. For this reason microservices have an important impact on Operations. The term that describes the responsibilities of deploying microservices is microdeployments. So, what DevOps is really about is bridging the gap between microservices and microdeployments.
Which tools are typically integrated into a DevOps workflow?
Many different types of tools are integrated into the DevOps workflow at this point. For example:
- Code repositories: like Git
- Container development tools: to convert code in a repository into a portable containerized image that includes any required dependencies
- Virtual machine software: like Vagrant, for creating and configuring lightweight, reproducible, and portable development environments
- IDEs: like Eclipse, which integrates with DevOps platforms like OpenShift
- Continuous Integration and Delivery software: like Jenkins, which automates pushing code to production once it has passed automated testing.
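To make the last item concrete, here is a minimal sketch of a declarative Jenkins pipeline. The stage names and `make` targets are hypothetical placeholders, not part of the original article:

```groovy
// Hypothetical declarative Jenkinsfile: build and test every branch,
// deploy only from main once the earlier stages have passed.
pipeline {
    agent any
    stages {
        stage('Build') { steps { sh 'make build' } }
        stage('Test')  { steps { sh 'make test' } }
        stage('Deploy') {
            when { branch 'main' }   // gate deployment on the main branch
            steps { sh 'make deploy' }
        }
    }
}
```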
What is automation?
Automation is the process of removing manual, error-prone operations from your services, ensuring that your applications or services can be repeatedly deployed.
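A small sketch of what "repeatedly deployable" means in practice: an idempotent deploy step that can be re-run safely. The directory, version string, and marker file are assumptions made up for the demo:

```shell
#!/usr/bin/env bash
# Idempotent "deploy" sketch: running it twice leaves the same state.
set -euo pipefail

APP_DIR=$(mktemp -d)      # stand-in for a real install directory
RELEASE="1.0.0"           # stand-in for a real release version

deploy() {
  mkdir -p "$APP_DIR"     # no-op when the directory already exists
  # Only rewrite the version marker when it changed, so repeated runs
  # leave the system in exactly the same state.
  if [ ! -f "$APP_DIR/VERSION" ] || [ "$(cat "$APP_DIR/VERSION")" != "$RELEASE" ]; then
    echo "$RELEASE" > "$APP_DIR/VERSION"
    echo "deployed $RELEASE"
  else
    echo "already at $RELEASE"
  fi
}

deploy   # first run installs the release
deploy   # second run detects nothing to do
```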
Automation is a key point of DevOps; however, what is its prerequisite?
The necessary prerequisite is standardization, which means both:
- Technical standardization: choose standard operating systems and middleware, develop with a standard set of common libraries
- Process standardization: a standard systems development life cycle, release management, monitoring, and escalation management.
At which levels can automation be applied in DevOps?
At three levels:
1) The application lifecycle: in terms of software features, version control, build management, and integration frameworks
2) The middleware platform: such as installing middleware, autoscaling, and resource optimization of middleware components
3) The infrastructure: by provisioning operating system resources and virtualizing them
Which scripting language is most important for a DevOps engineer?
Software development and operational automation require programming. In terms of scripting,
Bash is the most frequently used Unix shell and should be your first automation choice. It has a simple syntax and is designed specifically to execute programs in a non-interactive manner. The same holds for Perl, which owes a great deal of its popularity to being very good at manipulating text and storing data in databases.
Next, if you are using Puppet or Chef it is worth learning Ruby, which is relatively easy to learn and in which many of the automation tools have been specifically written.
Java has a huge impact on IT backends, although it has a limited spread across Operations.
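As a tiny illustration of the non-interactive text processing Bash excels at, here is a sketch that counts error lines in a log. The log content is made up for the demo:

```shell
#!/usr/bin/env bash
# Count error lines in a log stream: standard tools glued together by Bash.
set -euo pipefail

# Hypothetical log data; in practice this would come from a real log file.
log=$'INFO start\nERROR disk full\nINFO retry\nERROR timeout'

errors=$(printf '%s\n' "$log" | grep -c '^ERROR')
echo "error lines: $errors"
```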
How is DevOps helpful to developers?
DevOps brings faster and more frequent release cycles, which allows developers to identify and resolve issues immediately as well as implement new features quickly.
Since DevOps makes people do better work by making them wear different hats, developers who collaborate with Operations will create software that is easier to operate, more reliable, and ultimately better for the business.
How does a database fit into DevOps?
In a perfect DevOps world, the DBA is an integral part of both the Development and Operations teams, and database changes should be as simple as code changes. So you should be able to version and automate your database scripts just like your application code. In terms of choosing between an RDBMS, NoSQL, or other kinds of storage solutions, a good database design means fewer changes to your data schema and more efficient testing and service virtualization. Treating database management as an afterthought and not choosing the right database during the early stages of the software development lifecycle can prevent successful adoption of the true DevOps movement.
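One common way to version and automate database scripts is numbered migrations applied in order. The sketch below records what was applied so a second run is a no-op; the file names are hypothetical and the "apply" step is just an echo (a real script would pipe each file into the database client):

```shell
#!/usr/bin/env bash
# Versioned database migrations, treated like code.
set -euo pipefail

workdir=$(mktemp -d)
printf 'CREATE TABLE users (id INT);\n' > "$workdir/001_create_users.sql"
printf 'ALTER TABLE users ADD email VARCHAR(255);\n' > "$workdir/002_add_email.sql"

applied="$workdir/.applied"   # ledger of migrations already run
touch "$applied"

migrate() {
  for f in "$workdir"/[0-9]*.sql; do
    name=$(basename "$f")
    if ! grep -qx "$name" "$applied"; then
      echo "applying $name"              # e.g. mysql mydb < "$f"
      echo "$name" >> "$applied"
    fi
  done
}

migrate   # applies 001 then 002, in order
migrate   # no-op: everything is already recorded
```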
Which are the reasons against using an RDBMS?
In a nutshell, if your application is all about storing application entities in a persistent and consistent way, then an RDBMS could be overkill. A simple key-value storage solution might be perfect for you. Note that the value is not meant to be a simple element but can be a complex entity in itself!
Another reason could be that you have hierarchical application objects and need some query capability into them; then most NoSQL solutions might be a fit. With an RDBMS you can use an ORM to achieve the same result, but at the cost of adding extra complexity.
RDBMS is also not the best solution if you are trying to store large trees or networks of objects. Depending on your other needs a Graph Database might suit you.
If you are running in the Cloud and need to run a distributed database for durability and availability then you could check Dynamo and Big Table based datastores which are built for this core purpose.
Last but not least, if your data grows too large to be processed on a single machine, you might look into Hadoop or any other solution that supports distributed Map/Reduce.
What is two-factor authentication?
In terms of authentication, when you have to enter only your username and one password, that is considered single-factor authentication. Two-factor authentication requires the user to present two out of three types of credentials before being able to access an account. The three types are:
- Something you know, such as a personal identification number (PIN), password
- Something you have, such as a digital ATM card, phone
- Something you are, such as a biometric like voice or a fingerprint
What is a PTR record and how to add one?
While an A record points a domain name to an IP address, the PTR record resolves an IP address to a domain/hostname. PTR records are used for reverse DNS (Domain Name System) lookups: using the IP address you can get the associated domain/hostname. An A record should exist for every PTR record.
You can check whether there is a PTR record set for a given IP address. On a Linux OS the command is:
$ dig -x IP
In terms of automation, let's discuss the differences between Puppet, Ansible, and Chef
Push vs Pull Strategy:
- Puppet nodes use a Pull strategy as nodes periodically check into a puppet master server to “pull” resource definitions.
- Ansible uses a Push strategy. The machine where Ansible is installed uses SSH to copy files, remotely install packages, etc. on target machines. The target machines require no special setup beyond a working installation of Python 2.5+.
- Chef: Chef client queries Chef server for the latest set of recipes (configuration instructions) that apply to the current node.
- Puppet infrastructure is made up of one or more “puppetmaster” servers, along with a special agent package installed on each client node.
- Ansible has no concept of a master/slave server, nor special agent executables to install: just proper SSH keys/credentials in order to connect to the nodes.
- Chef infrastructure uses a Chef Server, the main hub where Chef propagates and stores system configuration information and policies, and a Chef Client installed on every node being managed.
Language and Extensibility:
- Puppet uses its own DSL, which is a subset of Ruby; adding complex extra functionality is done through Ruby modules. That said, there is stricter control over what you can do than with plain Ruby.
- Ansible playbooks are YAML files. In terms of extensibility, Ansible is built upon Python, with which most organizations will have some experience.
- Chef uses Ruby as its programming language, which is the authoring syntax for Chef cookbooks. Put plainly, Chef lets you run wild with Ruby.
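As an illustration of the YAML syntax, here is a minimal hypothetical Ansible playbook; the `web` host group and the package name are assumptions for the example:

```yaml
# Install and start nginx on every host in the "web" group, over SSH.
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```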
Resources & Ordering
- Puppet: resources defined in a Puppet manifest are not applied in order of their appearance (e.g. top to bottom). Instead resources are applied in an order Puppet chooses, unless explicit resource ordering is used.
- Ansible: playbooks run in a traditional top-to-bottom order, as tasks appear in the file. This is more intuitive for developers coming from other languages.
- Chef: always executes recipes in the order you specify them. It will not arbitrarily reorder things, so if you want one recipe to run before another, just load them in that order.
- Puppet internally creates a directed graph of all of the defined resources along with the order they should be applied in. Puppet can even generate a graph file so that one can visualize everything that Puppet manages. On the other hand, building this graph is susceptible to “multiple resource definition” errors or conflicts due to circular dependencies.
- Ansible is merely a thin wrapper for executing commands over SSH, so there is no resource dependency graph built internally.
- Chef is also able to declare dependencies between resources. Dependency failures are breakages in your dependency graph, which keep the current project’s pipeline from being able to ship safely. These failures are tracked through Chef Automate.
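To show what explicit resource ordering looks like in Puppet, here is a minimal hypothetical manifest; the package and service names are assumptions for the example:

```puppet
# Without 'require', Puppet would not guarantee the package is installed
# before the service is managed.
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  require => Package['nginx'],   # explicit ordering edge in the graph
}
```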
DevOps Tool Support
Puppet, Ansible and Chef are well supported by other DevOps tools like Vagrant, Packer, and Jenkins.
What is an MX record?
An MX record tells senders how to send email for your domain. When your domain is registered, it’s assigned several DNS records, which enable your domain to be located on the Internet. These include MX records, which direct the domain’s mail flow. Each MX record points to an email server that’s configured to process mail for that domain. There’s typically one record that points to a primary server, then additional records that point to one or more backup servers. For users to send and receive email, their domain's MX records must point to a server that can process their mail.
What is SSH?
SSH (also known as Secure Shell) is a program for logging into another computer over a network, executing commands on a remote machine, and moving files from one machine to another. It provides strong authentication and secure communications over insecure channels. It is intended as a replacement for rlogin, rsh, and rcp.
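SSH authentication is usually key-based in automation contexts (it is what agentless tools like Ansible rely on). A small sketch of generating a key pair; the key path is a throwaway temp file for the demo:

```shell
#!/usr/bin/env bash
# Generate an Ed25519 key pair for passwordless SSH logins.
set -euo pipefail

keyfile=$(mktemp -u)                        # unused temp path for the demo key
ssh-keygen -t ed25519 -N '' -q -f "$keyfile"
ls -l "$keyfile" "$keyfile.pub"             # private and public halves
# Usage: ssh-copy-id -i "$keyfile.pub" user@host, then ssh -i "$keyfile" user@host
```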
Stay tuned, we will keep updating the DevOps interview questions!