Basics of Oozie and Oozie SHELL action


Our Oozie Tutorials will cover most of the available workflow actions with and without Kerberos authentication.

Let’s have a look at some basic concepts of Oozie.

 

What is Oozie?

Oozie is an open-source workflow management system for Hadoop. We can schedule Hadoop jobs via Oozie, including Hive, Pig, Sqoop, and other actions. Oozie also provides features to trigger workflows based on data availability, job dependencies, scheduled time, etc.

More information about Oozie is available in the official Apache Oozie documentation at https://oozie.apache.org/.

 

 

[Figure: Oozie architecture]

 

Oozie Workflow:

An Oozie workflow is a DAG (Directed Acyclic Graph) containing a collection of actions. A DAG contains two types of nodes: action nodes and control nodes. Action nodes are responsible for executing tasks such as MapReduce, Pig, or Hive jobs; we can also execute shell scripts through action nodes. Control nodes determine the execution order of actions, as in the fork/join sketch below.

 

[Figure: a MapReduce workflow DAG with action and control nodes]
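
As a minimal sketch (the node and path names here are illustrative, not part of the example later in this post), a fork/join pair of control nodes runs two actions in parallel and waits for both to finish:

<fork name="parallel-work">
 <path start="first-action"/>
 <path start="second-action"/>
</fork>
<!-- both actions transition ok to="join-node"; the workflow continues only after both complete -->
<join name="join-node" to="end"/>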

 

Oozie Co-ordinator:

In production systems it is often necessary to run Oozie workflows on a regular time interval, trigger workflows when input data becomes available, or execute workflows after completion of a dependent job. This can be achieved with an Oozie coordinator job.

 

[Figure: Oozie coordinator]
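
A minimal coordinator sketch (the name, frequency, and dates are illustrative) that runs a workflow application once a day:

<coordinator-app name="daily-coord" frequency="${coord:days(1)}"
 start="2016-04-01T00:00Z" end="2016-12-31T00:00Z"
 timezone="UTC" xmlns="uri:oozie:coordinator:0.2">
 <action>
  <workflow>
   <app-path>${nameNode}/user/${user.name}</app-path>
  </workflow>
 </action>
</coordinator-app>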

 

Oozie Bundle jobs:

A bundle is a set of Oozie coordinators, and it gives us better control to start/stop/suspend/resume multiple coordinators as a single unit.
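
A minimal bundle sketch (the names and paths are illustrative) grouping two coordinators so they can be managed together:

<bundle-app name="my-bundle" xmlns="uri:oozie:bundle:0.2">
 <coordinator name="coord-1">
  <app-path>${nameNode}/user/${user.name}/coord1</app-path>
 </coordinator>
 <coordinator name="coord-2">
  <app-path>${nameNode}/user/${user.name}/coord2</app-path>
 </coordinator>
</bundle-app>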

 

 

Oozie Launcher:

The Oozie launcher is a map-only job that runs on the Hadoop cluster. For example, if you want to run a Hive script, you can simply run the “hive -f <hql-script-name>” command from any edge node; this directly triggers the Hive CLI installed on that particular edge node, and the Hive queries in the HQL script are executed there. Oozie handles this situation differently: it first runs the launcher job on the Hadoop cluster, which is a map-only job, and the launcher in turn triggers further MapReduce jobs (if required) by calling the client APIs for Hive, Pig, etc. actions as defined in workflow.xml.
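
Because the launcher occupies its own map slot/container, it can be tuned separately from the job it launches. As a sketch (the value is illustrative, and the exact underlying Hadoop property name varies by Hadoop/Oozie version), properties prefixed with oozie.launcher. in an action's <configuration> block apply to the launcher job itself:

 <property>
  <name>oozie.launcher.mapreduce.map.memory.mb</name>
  <value>1024</value>
 </property>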

 

Let’s get started with running a shell action using an Oozie workflow.

 

Step 1: Create a sample shell script and upload it to HDFS

[root@sandbox shell]# cat ~/sample.sh
#!/bin/bash
echo "`date` hi" > /tmp/output

 

hadoop fs -put sample.sh /user/root/
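
You can verify that the script landed in HDFS before moving on:

hadoop fs -ls /user/root/sample.sh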

 

Step 2: Create a job.properties file according to your cluster configuration.

[root@sandbox shell]# cat job.properties
nameNode=hdfs://<namenode-hostname>:8020
jobTracker=<resource-manager-hostname>:8050
queueName=default
examplesRoot=examples
oozie.wf.application.path=${nameNode}/user/${user.name}
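
If you are unsure of the nameNode value, you can read it from the client configuration (note that port 8050 for the ResourceManager is an HDP default; vanilla Hadoop typically uses 8032):

hdfs getconf -confKey fs.defaultFS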

 

Step 3: Create a workflow.xml file for your shell action.

<!--
 Licensed to the Apache Software Foundation (ASF) under one
 or more contributor license agreements. See the NOTICE file
 distributed with this work for additional information
 regarding copyright ownership. The ASF licenses this file
 to you under the Apache License, Version 2.0 (the
 "License"); you may not use this file except in compliance
 with the License. You may obtain a copy of the License at

 http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->
<workflow-app xmlns="uri:oozie:workflow:0.3" name="shell-wf">
 <start to="shell-node"/>
 <action name="shell-node">
 <shell xmlns="uri:oozie:shell-action:0.1">
 <job-tracker>${jobTracker}</job-tracker>
 <name-node>${nameNode}</name-node>
 <configuration>
 <property>
 <name>mapred.job.queue.name</name>
 <value>${queueName}</value>
 </property>
 </configuration>
 <exec>sample.sh</exec>
 <file>/user/root/sample.sh</file>
 </shell>
 <ok to="end"/>
 <error to="fail"/>
 </action>
 <kill name="fail">
 <message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
 </kill>
 <end name="end"/>
</workflow-app>
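
If your script prints key=value pairs on stdout that later actions need, the shell action can capture them: add <capture-output/> as the last element inside <shell> (right after <file>), and read the values in a downstream node with the wf:actionData EL function. A sketch (the key name myKey is illustrative):

 <exec>sample.sh</exec>
 <file>/user/root/sample.sh</file>
 <capture-output/>

 <!-- in a later node: ${wf:actionData('shell-node')['myKey']} -->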

 

Step 4: Upload the workflow.xml file created in the above step to HDFS at the oozie.wf.application.path mentioned in job.properties.

hadoop fs -copyFromLocal -f workflow.xml /user/root/

 

Step 5: Submit the Oozie workflow by running the below command.

oozie job -oozie http://<oozie-host>:11000/oozie -config job.properties -run
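
On success the command prints the workflow ID, which you can also use to poll the status from the command line instead of the UI (the ID below is illustrative):

job: 0000005-160404010648042-oozie-oozi-W

oozie job -oozie http://<oozie-host>:11000/oozie -info 0000005-160404010648042-oozie-oozi-W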

 

Step 6: Check the Oozie UI to get the status of the workflow.

http://<oozie-host>:11000/oozie

 

[Screenshot: Oozie web UI listing the submitted shell-wf workflow]

 

If you click on the workflow ID, you will get the detailed status of each action (see the screenshot below).

 

[Screenshot: detailed workflow view showing the status of each action]

 

If you want to check logs for the running action, click on Action Id 2, i.e. the shell-node action, followed by the Console URL (click on the magnifier icon at the end of the Console URL).

 

[Screenshot: shell-node action details with the Console URL]

 

 

[Screenshot: launcher job log output]

 

Step 7: Check the final output from the command line. Please note that you need to execute the below command on the NodeManager node where your Oozie launcher ran, since the script writes to that node's local /tmp directory.

[root@sandbox shell]# cat /tmp/output
Sun Apr 3 19:44:52 UTC 2016 hi

 

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!! :)
