Schedulers in YARN: from concepts to configurations
FIFO, Capacity, or Fair scheduler: the choice is yours.
When we start delving into the world of big data, a number of new words and acronyms start showing up, and YARN is one of them. Apache YARN (Yet Another Resource Negotiator) is a cluster resource management platform for distributed computing. It is most often used with Hadoop, but it is general enough to work with other platforms as well.
In this article, we will not be discussing YARN as a whole but a small yet important subset of it: scheduling. A scheduler handles the resource allocation of the jobs submitted to YARN. In simple words, if an application wants to run and needs 1 GB of RAM and 2 processors for normal operation, it is the job of the YARN scheduler to allocate resources to that application in accordance with a defined policy.
There are three types of schedulers available in YARN: FIFO, Capacity, and Fair. FIFO (first in, first out) is the simplest to understand and does not need any configuration. It runs applications in submission order by placing them in a queue: the application submitted first gets resources first, and upon its completion the scheduler serves the next application in the queue. However, FIFO is not suited for shared clusters, as large applications will occupy all the resources and the queue will keep growing because of the low serving rate.
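For context, the active scheduler is selected in yarn-site.xml through the yarn.resourcemanager.scheduler.class property. A minimal sketch, using the stock Hadoop class names, could look like this:

```xml
<!-- yarn-site.xml: choose the scheduler implementation.
     Swap the value for ...scheduler.fair.FairScheduler or
     ...scheduler.fifo.FifoScheduler to use the other two. -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```

In an HDP sandbox this property is already set for you, so you would only touch it when switching schedulers deliberately.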
Figure 1 illustrates the difference among the three schedulers. In the FIFO case, a small job blocks until the large job completes. The Capacity scheduler maintains a separate queue for small jobs so that they can start as soon as they are submitted. However, this comes at a cost: since we are dividing the cluster capacity, large jobs will take more time to complete.
The Fair scheduler has no need to reserve capacity in advance. It dynamically balances resources among all running jobs. When a job starts, if it is the only job running, it gets all the resources of the cluster. When a second job starts, it gets resources as soon as some containers (a container is a fixed amount of RAM and CPU) become free. After the small job finishes, the scheduler assigns its resources back to the large one. This eliminates the drawbacks seen in both the FIFO and Capacity schedulers: the overall effect is timely completion of small jobs along with high cluster utilization.
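To make this concrete, Fair scheduler queues are declared in an allocation file, fair-scheduler.xml. The sketch below is illustrative (the queue names and weights are my own, not from any distribution's defaults); weights control how the cluster is split once more than one queue has running jobs:

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml: two queues sharing the cluster by weight.
     A lone job may grow to use the whole cluster; once a second job
     arrives, resources are rebalanced as containers become free. -->
<allocations>
  <queue name="small_jobs">
    <weight>1.0</weight>
  </queue>
  <queue name="large_jobs">
    <weight>2.0</weight>
  </queue>
</allocations>
```

With these weights, when both queues are busy, large_jobs converges toward roughly twice the share of small_jobs rather than a fixed reserved slice.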
Let us look at some configurations now. These were done using HDP 2.4 (and are valid for any HDP 2.x) in a sandbox environment. Hortonworks uses the Capacity scheduler by default, whereas Cloudera's default is the Fair scheduler. Since their merger, they have kept the same conventions, business as usual, but this might change in the future.
Let us assume that you want to configure the Capacity scheduler for your department's needs. We can start by dividing jobs into three categories: Default for normal ad-hoc jobs submitted to the scheduler, Workflow for ingestion and ETL processes, and Preference for any jobs that need immediate attention. Although the Hortonworks default is the Capacity scheduler, you can still mimic the behavior of Fair scheduling by employing something called "queue elasticity".
First off, log in to the Ambari web console and, from the dotted menu in the top right corner, select YARN Queue Manager.
Here you can see the default settings: there is only one queue (root) with one child (default), currently allotted 100% of the resources.
Next, add two more job queues by clicking "Add Queue". You can then assign resources to the queues as required.
In this case, let us give Default 20% of the capacity with the ability to grow to 40%, give Workflow, being the most resource-hungry, 60% with a ceiling of 80%, and give Preference 20% with a ceiling of 80%. The guaranteed capacities sum to 100%.
This is what we can call capacity scheduling with queue elasticity: if someone submits a job to the Preference queue, it is guaranteed 20% of the resources, and it can go up to a maximum of 80% if there are no pending jobs in the other queues. Save your changes from the Actions menu for them to take effect.
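Under the hood, the Ambari Queue Manager writes these settings into capacity-scheduler.xml. The same layout can be expressed there directly; the property names below are the standard Capacity scheduler ones, where capacity is the guaranteed share and maximum-capacity provides the elasticity:

```xml
<!-- capacity-scheduler.xml: three queues under root with elastic ceilings. -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,workflow,preference</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>20</value>  <!-- guaranteed share -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>40</value>  <!-- elastic ceiling -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.workflow.capacity</name>
  <value>60</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.workflow.maximum-capacity</name>
  <value>80</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.preference.capacity</name>
  <value>20</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.preference.maximum-capacity</name>
  <value>80</value>
</property>
```

The capacity values of sibling queues must sum to 100, while each maximum-capacity is an independent per-queue cap.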
These are some basic queue configurations for the Capacity scheduler in the Hortonworks sandbox.
Before concluding, I will briefly explain the concept of preemption, which is important from an interview perspective. If one queue is taking more containers than its fair share of resources while a job in another queue is waiting below its fair share, preemption allows the scheduler to kill containers from the first queue and assign them to the second one. However, this comes at the cost of reduced cluster efficiency. You can enable preemption for the Fair scheduler by setting the yarn.scheduler.fair.preemption property to true.
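In configuration terms, for the Fair scheduler this is a single switch in yarn-site.xml (the Capacity scheduler has its own, differently named preemption-related properties):

```xml
<!-- yarn-site.xml: allow the Fair scheduler to reclaim containers
     from queues running above their fair share. -->
<property>
  <name>yarn.scheduler.fair.preemption</name>
  <value>true</value>
</property>
```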
Finally, I would like to link the materials I consulted, which can be helpful in understanding YARN scheduling.