2023 Big Data Skills Competition, Module B Data Extraction, Task1_3 (Industrial)

Task One

Write Scala code that uses Spark to fully extract the MySQL tables ChangeRecord, BaseMachine, MachineData, and ProduceRecord into the corresponding tables changerecord, basemachine, machinedata, and producerecord in Hive's ods database.
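Since each Hive target name is simply the MySQL source name lowercased, the four full extractions can be driven by one table-name mapping and a loop. A minimal sketch of that mapping (the commented Spark calls mirror the single-table code further down and assume the same JDBC connection details):

```scala
// Sketch of a table-driven full extraction for Task One.
// The Spark read/write calls are left as comments because they depend on the
// competition environment; only the name mapping is exercised here.
object FullExtractSketch {
  // (MySQL source table, Hive ods target table) pairs
  val tables: Seq[(String, String)] =
    Seq("ChangeRecord", "BaseMachine", "MachineData", "ProduceRecord")
      .map(t => t -> t.toLowerCase)

  def main(args: Array[String]): Unit = {
    tables.foreach { case (src, dst) =>
      println(s"would extract $src -> ods.$dst")
      // val df = spark.read.format("jdbc")
      //   .option("url", "jdbc:mysql://master:3306/shtd_industry")
      //   .option("dbtable", src) /* plus user/password/driver */ .load()
      // df.write.mode("overwrite").saveAsTable(s"ods.$dst")
    }
  }
}
```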
 

Question 3:

Extract the full data of the ProduceRecord table from the MySQL database shtd_industry into the table producerecord in Hive's ods database. Drop the ProducePrgCode field, keeping the order and types of the remaining fields unchanged. Add a static partition with partition field etldate of type String, whose value is the day before the competition day (partition value in yyyyMMdd format). Run show partitions ods.producerecord in the hive cli and paste a screenshot of the output under the corresponding task number in 【Release任务B提交结果.docx】 on the client desktop.
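The "day before the competition day, in yyyyMMdd format" partition value can be computed with java.time before the write. A minimal, framework-free sketch (the object and method names here are illustrative, not part of the task):

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter

// Helper for the etldate partition value: the day before a given date,
// formatted as yyyyMMdd.
object EtlDate {
  private val fmt = DateTimeFormatter.ofPattern("yyyyMMdd")

  // Day before the given date, e.g. previousDay(LocalDate.of(2023, 5, 1)) == "20230430"
  def previousDay(today: LocalDate): String = today.minusDays(1).format(fmt)

  // Convenience overload for run time, where the competition day is "today"
  def previousDay(): String = previousDay(LocalDate.now())
}
```

Taking the date as a parameter keeps the logic testable; the no-argument overload matches the `LocalDate.now().minusDays(1)` expression used in the full code below.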

Code implementation:

package ModuleB.BookTask5.Task1

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit

object Task1_3 {
  def main(args: Array[String]): Unit = {
    // Local Spark session with Hive support; on the competition cluster the
    // master would normally come from spark-submit rather than setMaster
    val conf = new SparkConf().setAppName("BookTask5.Task1_3").setMaster("local")
    val spark = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
    spark.sparkContext.setLogLevel("OFF")
    // Allow non-strict dynamic partitioning so partitionBy() can create the partition
    spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")

    // Full extract of the source table over JDBC; the table name matches the
    // task statement (MySQL table names are case-sensitive on Linux)
    val mysqldf = spark.read.format("jdbc")
      .option("driver", "com.mysql.jdbc.Driver")
      .option("url", "jdbc:mysql://master:3306/shtd_industry")
      .option("user", "root")
      .option("password", "123456")
      .option("dbtable", "ProduceRecord")
      .load()

    // Drop the ProducePrgCode field as required; remaining columns keep their order and types
    val df1 = mysqldf.drop("ProducePrgCode")
    df1.show()

    // Partition value: the day before the competition day, in yyyyMMdd format
    val etldate = java.time.LocalDate.now().minusDays(1)
      .format(java.time.format.DateTimeFormatter.ofPattern("yyyyMMdd"))

    // Add the partition column as a constant, so every row lands in one partition
    val df2 = df1.withColumn("etldate", lit(etldate))

    // Write into the partitioned Hive table ods.producerecord
    df2.write.format("hive")
      .mode("append")
      .partitionBy("etldate")
      .saveAsTable("ods.producerecord")

    // Verify the partition exists (same output as show partitions in the hive cli)
    spark.sql("show partitions ods.producerecord").show()
    spark.stop()

  }
}

If you spot any mistakes in the code above, corrections are welcome.