Task 1
Write Scala code that uses Spark to fully extract the MySQL tables ChangeRecord, BaseMachine, MachineData, and ProduceRecord into the corresponding tables changerecord, basemachine, machinedata, and producerecord in Hive's ods database.
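Since the four tables all follow the same read-then-write pattern, the full extraction can be sketched as a loop over source/target pairs. The JDBC URL, credentials, and the use of overwrite mode below are assumptions taken from the rest of this write-up and should be adjusted to the actual environment; this is a sketch, not the reference solution.

```scala
import org.apache.spark.sql.SparkSession

object FullExtract {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("FullExtract")
      .enableHiveSupport()
      .getOrCreate()

    // Map each MySQL source table to its target table in Hive's ods database.
    val tables = Seq(
      "ChangeRecord"  -> "changerecord",
      "BaseMachine"   -> "basemachine",
      "MachineData"   -> "machinedata",
      "ProduceRecord" -> "producerecord"
    )

    tables.foreach { case (src, dst) =>
      // Connection details are placeholders; use your environment's values.
      val df = spark.read.format("jdbc")
        .option("driver", "com.mysql.jdbc.Driver")
        .option("url", "jdbc:mysql://master:3306/shtd_industry")
        .option("user", "root")
        .option("password", "123456")
        .option("dbtable", src)
        .load()
      // Full extraction: overwrite the target so a rerun stays idempotent.
      df.write.mode("overwrite").saveAsTable(s"ods.$dst")
    }

    spark.stop()
  }
}
```

Overwrite mode is chosen here so that re-running the job does not duplicate rows; if the grading environment expects append semantics instead, change the mode accordingly.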
Question 4:
Extract the full contents of the MachineData table from the MySQL database shtd_industry into the table machinedata in Hive's ods database, keeping the column order and types unchanged, and add a static partition. The partition column is etldate, of type String, and its value is the day before the competition day (formatted as yyyyMMdd). Run show partitions ods.machinedata in the hive cli and paste a screenshot of its output into the client-desktop document【Release任务B提交结果.docx】under the corresponding task number.
Code implementation:
package ModuleB.BookTask5.Task1

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit

object Task1_4 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("BookTask5.Task1_4").setMaster("local")
    val spark = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
    // Spark writes partitionBy() output via dynamic partitioning, so allow nonstrict mode
    spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")
    spark.sparkContext.setLogLevel("OFF")

    // Read the full MachineData table from MySQL over JDBC.
    // Note: the source table is MachineData (as named in the task); on Linux,
    // MySQL table names are case-sensitive, so "machinedata" may fail to resolve.
    val mysqldf = spark.read.format("jdbc")
      .option("driver", "com.mysql.jdbc.Driver")
      .option("url", "jdbc:mysql://master:3306/shtd_industry")
      .option("user", "root")
      .option("password", "123456")
      .option("dbtable", "MachineData")
      .load()
    mysqldf.show()

    // Static partition value: the day before the competition day, formatted yyyyMMdd
    val etldate = java.time.LocalDate.now().minusDays(1)
      .format(java.time.format.DateTimeFormatter.ofPattern("yyyyMMdd"))
    val df = mysqldf.withColumn("etldate", lit(etldate))

    // Write into ods.machinedata, partitioned by etldate
    df.write.format("hive")
      .mode("append")
      .partitionBy("etldate")
      .saveAsTable("ods.machinedata")

    // Verify the partition was created
    spark.sql("show partitions ods.machinedata").show()
    spark.stop()
  }
}
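The task asks for the screenshot to come from the hive cli rather than from Spark. Assuming `hive` is on the PATH of the cluster node, the check can be run non-interactively like this:

```shell
# Print the partitions of ods.machinedata from the hive cli
hive -e 'show partitions ods.machinedata'
```

The output should contain one line of the form `etldate=yyyyMMdd` with the previous day's date.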
If there are any errors in the code above, corrections are welcome.