Spark Source Code Analysis: Executor Startup and Task Submission

Task Submission Flow

Overview

The previous posts explained the startup flows of the Spark Master and the Worker. What runs next is the Executor process on the Worker, so this post continues with the Executor startup and task submission flow.

Spark-submit

A job is submitted to the cluster with spark-submit, whose launch script starts the submission main class. Taking WordCount as the example:
`spark-submit --class cn.apache.spark.WordCount`

  1. bin/spark-class -> org.apache.spark.deploy.SparkSubmit: the launch script invokes this class's main method
  2. doRunMain is handed the main class of the user-defined Spark application, class cn.apache.spark.WordCount
  3. A reference to that class is obtained via reflection: mainClass = Utils.classForName(childMainClass)
  4. The main method of class cn.apache.spark.WordCount is then invoked via reflection

Let's look at SparkSubmit's main method:

    def main(args: Array[String]): Unit = {
      val appArgs = new SparkSubmitArguments(args)
      if (appArgs.verbose) {
        printStream.println(appArgs)
      }
      // Match on the action type
      appArgs.action match {
        case SparkSubmitAction.SUBMIT => submit(appArgs)
        case SparkSubmitAction.KILL => kill(appArgs)
        case SparkSubmitAction.REQUEST_STATUS => requestStatus(appArgs)
      }
    }

Here the action is SUBMIT, so the submit method is called:

    private[spark] def submit(args: SparkSubmitArguments): Unit = {
      val (childArgs, childClasspath, sysProps, childMainClass) = prepareSubmitEnvironment(args)
      def doRunMain(): Unit = {
        // ...
        try {
          proxyUser.doAs(new PrivilegedExceptionAction[Unit]() {
            override def run(): Unit = {
              // childMainClass is the fully qualified name of the user app's main class
              runMain(childArgs, childClasspath, sysProps, childMainClass, args.verbose)
            }
          })
        } catch {
          // ...
        }
      }
      // ...
      // Call doRunMain defined above
      doRunMain()
    }

submit calls doRunMain(), which in turn calls runMain. Let's look at runMain:

    private def runMain(...) {
      // ...
      try {
        // Load the class via reflection
        mainClass = Class.forName(childMainClass, true, loader)
      } catch {
        // ...
      }
      // Get a handle on the main method via reflection
      val mainMethod = mainClass.getMethod("main", new Array[String](0).getClass)
      if (!Modifier.isStatic(mainMethod.getModifiers)) {
        throw new IllegalStateException("The main method in the given main class must be static")
      }
      // ...
      try {
        // Invoke the application's main method
        mainMethod.invoke(null, childArgs.toArray)
      } catch {
        case t: Throwable =>
          throw findCause(t)
      }
    }

This is the heart of the flow. As the comments above make clear, the main method of the class we wrote is invoked via reflection; that completes the rough outline of submission.
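To make that mechanism concrete, here is a minimal, self-contained sketch of the same reflection pattern runMain uses. Everything here is invented for illustration (HelloApp stands in for cn.apache.spark.WordCount); it is not Spark code:

    import java.lang.reflect.Modifier

    // Stand-in for the user application
    object HelloApp {
      def main(args: Array[String]): Unit =
        println("HelloApp.main called with: " + args.mkString(", "))
    }

    object ReflectMainDemo {
      def main(args: Array[String]): Unit = {
        // Load the class by its fully qualified name, like Class.forName(childMainClass)
        val mainClass = Class.forName("HelloApp")
        // Scala compiles an object's main into a static forwarder on the companion class
        val mainMethod = mainClass.getMethod("main", classOf[Array[String]])
        require(Modifier.isStatic(mainMethod.getModifiers),
          "The main method in the given main class must be static")
        // A static method is invoked with a null receiver
        mainMethod.invoke(null, Array("some", "args"))
      }
    }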

SparkSubmit sequence diagram

Executor Startup Flow

After SparkSubmit invokes our program's main method via reflection, our code starts running. Every Spark program needs to create a SparkContext object, so that is where we begin.
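Since the article never shows the driver program itself, here is what a minimal WordCount of that era might look like. This is a sketch, not code from the article; the package, app name, and input path are assumptions:

    package cn.apache.spark

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCount {
      def main(args: Array[String]): Unit = {
        // Creating the SparkContext triggers everything analyzed below
        val conf = new SparkConf().setAppName("WordCount")
        val sc = new SparkContext(conf)

        sc.textFile("hdfs:///tmp/input.txt")   // hypothetical input path
          .flatMap(_.split(" "))
          .map((_, 1))
          .reduceByKey(_ + _)
          .collect()
          .foreach(println)

        sc.stop()
      }
    }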

SparkContext's constructor is long; the parts to focus on are:

    class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationClient {
      // ...
      private[spark] def createSparkEnv(
          conf: SparkConf,
          isLocal: Boolean,
          listenerBus: LiveListenerBus): SparkEnv = {
        // Delegate to SparkEnv.createDriverEnv
        SparkEnv.createDriverEnv(conf, isLocal, listenerBus)
      }

      // createSparkEnv is called here and returns a SparkEnv object, which holds
      // many important properties -- most importantly the ActorSystem
      private[spark] val env = createSparkEnv(conf, isLocal, listenerBus)
      SparkEnv.set(env)

      // Create and start the scheduler
      private[spark] var (schedulerBackend, taskScheduler) =
        SparkContext.createTaskScheduler(this, master)

      // Create the DAGScheduler
      dagScheduler = new DAGScheduler(this)

      // Start the TaskScheduler
      taskScheduler.start()
      // ...
    }

The SparkContext constructor mainly does three things: it creates a SparkEnv, a taskScheduler, and a dagScheduler. Let's first see what createTaskScheduler does:

    // Create a task scheduler based on the given master URL
    private def createTaskScheduler(...) = {
      // Match on the URL to pick the deployment mode
      master match {
        // ...
        // Spark Standalone mode
        case SPARK_REGEX(sparkUrl) =>
          // First create the TaskScheduler
          val scheduler = new TaskSchedulerImpl(sc)
          val masterUrls = sparkUrl.split(",").map("spark://" + _)
          // Very important: the backend that talks to the cluster
          val backend = new SparkDeploySchedulerBackend(scheduler, sc, masterUrls)
          // Initialize the scheduling pool; the default policy is FIFO
          scheduler.initialize(backend)
          (backend, scheduler)
        // ...
      }
    }

The master URL is matched to Standalone mode, and a SparkDeploySchedulerBackend and a TaskSchedulerImpl are created. These two objects are the core of starting task scheduling; scheduler.initialize(backend) is then called to initialize the scheduler.
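The dispatch on the master URL is plain Scala regex pattern matching. A small self-contained sketch of the same idea (the regex mirrors the shape of Spark's SPARK_REGEX; the URLs are made up):

    object MasterUrlDemo {
      // Shape of Spark's SPARK_REGEX: "spark://host:port[,host2:port2,...]"
      val SPARK_REGEX = """spark://(.*)""".r

      def main(args: Array[String]): Unit = {
        "spark://node1:7077,node2:7077" match {
          case SPARK_REGEX(sparkUrl) =>
            // Re-prefix each HA master address, as createTaskScheduler does
            val masterUrls = sparkUrl.split(",").map("spark://" + _)
            masterUrls.foreach(println)   // prints spark://node1:7077 and spark://node2:7077
          case other =>
            println(s"not a standalone master URL: $other")
        }
      }
    }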

With initialization done, we return to the SparkContext constructor, which next calls taskScheduler.start() to start the TaskScheduler. Let's look at the start method:

    override def start() {
      // Call the start method of the backend implementation
      backend.start()

      if (!isLocal && conf.getBoolean("spark.speculation", false)) {
        logInfo("Starting speculative execution thread")
        import sc.env.actorSystem.dispatcher
        sc.env.actorSystem.scheduler.schedule(SPECULATION_INTERVAL milliseconds,
              SPECULATION_INTERVAL milliseconds) {
          Utils.tryOrExit { checkSpeculatableTasks() }
        }
      }
    }

Here backend is the SparkDeploySchedulerBackend, so its start is called:

    override def start() {
      // CoarseGrainedSchedulerBackend's start method creates the DriverActor
      super.start()

      // The endpoint for executors to talk to us
      // Prepare the arguments needed to launch the executor Java child process
      val driverUrl = AkkaUtils.address(
        AkkaUtils.protocol(actorSystem),
        SparkEnv.driverActorSystemName,
        conf.get("spark.driver.host"),
        conf.get("spark.driver.port"),
        CoarseGrainedSchedulerBackend.ACTOR_NAME)
      val args = Seq(
        "--driver-url", driverUrl,
        "--executor-id", "{{EXECUTOR_ID}}",
        "--hostname", "{{HOSTNAME}}",
        "--cores", "{{CORES}}",
        "--app-id", "{{APP_ID}}",
        "--worker-url", "{{WORKER_URL}}")
      val extraJavaOpts = sc.conf.getOption("spark.executor.extraJavaOptions")
        .map(Utils.splitCommandString).getOrElse(Seq.empty)
      val classPathEntries = sc.conf.getOption("spark.executor.extraClassPath")
        .map(_.split(java.io.File.pathSeparator).toSeq).getOrElse(Nil)
      val libraryPathEntries = sc.conf.getOption("spark.executor.extraLibraryPath")
        .map(_.split(java.io.File.pathSeparator).toSeq).getOrElse(Nil)

      // When testing, expose the parent class path to the child. This is processed by
      // compute-classpath.{cmd,sh} and makes all needed jars available to child processes
      // when the assembly is built with the "*-provided" profiles enabled.
      val testingClassPath =
        if (sys.props.contains("spark.testing")) {
          sys.props("java.class.path").split(java.io.File.pathSeparator).toSeq
        } else {
          Nil
        }

      // Start executors with a few necessary configs for registering with the scheduler
      val sparkJavaOpts = Utils.sparkJavaOpts(conf, SparkConf.isExecutorStartupConf)
      val javaOpts = sparkJavaOpts ++ extraJavaOpts
      // Assemble the command; it will eventually launch the
      // org.apache.spark.executor.CoarseGrainedExecutorBackend child process
      val command = Command("org.apache.spark.executor.CoarseGrainedExecutorBackend",
        args, sc.executorEnvs, classPathEntries ++ testingClassPath, libraryPathEntries, javaOpts)
      val appUIAddress = sc.ui.map(_.appUIAddress).getOrElse("")
      // Wrap the important parameters in an ApplicationDescription
      val appDesc = new ApplicationDescription(sc.appName, maxCores, sc.executorMemory, command,
        appUIAddress, sc.eventLogDir, sc.eventLogCodec)
      // Create the ClientActor here
      client = new AppClient(sc.env.actorSystem, masters, appDesc, this, conf)
      // Start the ClientActor
      client.start()
      waitForRegistration()
    }

This assembles the parameters for launching the Executor: the class name plus arguments are wrapped in an ApplicationDescription, which is finally passed in to create an AppClient, whose start method is then called.
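To see what actually travels to the Master, here is a simplified sketch of the two carrier types. These are trimmed stand-ins, not the real org.apache.spark.deploy classes, which carry more fields:

    // Simplified stand-ins for Command and ApplicationDescription
    case class Command(
        mainClass: String,                  // class the Worker will launch
        arguments: Seq[String],             // --driver-url, --executor-id, ... with placeholders
        environment: Map[String, String],
        classPathEntries: Seq[String],
        libraryPathEntries: Seq[String],
        javaOpts: Seq[String])

    case class ApplicationDescription(
        name: String,
        maxCores: Option[Int],
        memoryPerExecutor: Int,
        command: Command)

    object AppDescDemo extends App {
      val cmd = Command(
        "org.apache.spark.executor.CoarseGrainedExecutorBackend",
        Seq("--executor-id", "{{EXECUTOR_ID}}", "--cores", "{{CORES}}"),
        Map.empty, Nil, Nil, Nil)
      val desc = ApplicationDescription("WordCount", Some(4), 1024, cmd)
      println(desc.command.mainClass)   // what the Worker will eventually launch
    }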

AppClient creation sequence diagram

AppClient's start method

Next, let's look at the start method:

    def start() {
      // Just launch an actor; it will call back into the listener.
      actor = actorSystem.actorOf(Props(new ClientActor))
    }

start creates the ClientActor that communicates with the Master; Akka then invokes its preStart method, which registers with the Master. Here is preStart:

    override def preStart() {
      context.system.eventStream.subscribe(self, classOf[RemotingLifecycleEvent])
      try {
        // The ClientActor registers with the Master
        registerWithMaster()
      } catch {
        case e: Exception =>
          logWarning("Failed to connect to master", e)
          markDisconnected()
          context.stop(self)
      }
    }
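If the actor lifecycle is unfamiliar: actorOf both creates an actor and causes Akka to call its preStart hook before any message is processed, and `!` is an asynchronous fire-and-forget send. A minimal plain-Akka sketch of the same register/acknowledge shape, with all names invented and written against the Akka 2.3-era API this Spark generation used:

    import akka.actor.{Actor, ActorRef, ActorSystem, Props}

    case object Register      // stands in for RegisterApplication
    case object Registered    // stands in for RegisteredApplication

    class MiniMaster extends Actor {
      def receive = {
        case Register =>
          println("master: registration received")
          sender ! Registered          // asynchronous reply to the registering actor
      }
    }

    class MiniClient(master: ActorRef) extends Actor {
      // Called automatically right after creation, before any message arrives
      override def preStart(): Unit = master ! Register

      def receive = {
        case Registered => println("client: registration acknowledged")
      }
    }

    object LifecycleDemo extends App {
      val system = ActorSystem("demo")
      val master = system.actorOf(Props[MiniMaster], "master")
      system.actorOf(Props(new MiniClient(master)), "client")
      Thread.sleep(500)                // let the async messages fly
      system.shutdown()                // Akka 2.3 API; use terminate() on newer versions
    }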

Eventually the following method is called to register with all Masters:

    def tryRegisterAllMasters() {
      for (masterAkkaUrl <- masterAkkaUrls) {
        logInfo("Connecting to master " + masterAkkaUrl + "...")
        // Obtain a reference to the Master via actorSelection
        val actor = context.actorSelection(masterAkkaUrl)
        // Send the Master an asynchronous RegisterApplication message
        actor ! RegisterApplication(appDescription)
      }
    }

The RegisterApplication message sent by the ClientActor carries the ApplicationDescription, which contains the requested resources, the Executor class to launch, and its arguments. The Master's receiver handles it:

    case RegisterApplication(description) => {
      if (state == RecoveryState.STANDBY) {
        // ignore, don't send response
      } else {
        logInfo("Registering app " + description.name)
        // Create the app; sender is the ClientActor
        val app = createApplication(description, sender)
        // Register the app
        registerApplication(app)
        logInfo("Registered app " + description.name + " with ID " + app.id)
        // Persist the app
        persistenceEngine.addApplication(app)
        // Reply to the ClientActor that the app registered successfully
        sender ! RegisteredApplication(app.id, masterUrl)
        // Schedule resources
        schedule()
      }
    }

`registerApplication(app)`:

    def registerApplication(app: ApplicationInfo): Unit = {
      val appAddress = app.driver.path.address
      if (addressToApp.contains(appAddress)) {
        logInfo("Attempted to re-register application at same address: " + appAddress)
        return
      }
      // Record the app in the Master's bookkeeping collections
      applicationMetricsSystem.registerSource(app.appSource)
      apps += app
      idToApp(app.id) = app
      actorToApp(app.driver) = app
      addressToApp(appAddress) = app
      waitingApps += app
    }

The Master saves the received information into its collections, persists it, and sends a RegisteredApplication message back to the ClientActor. It then runs the schedule() method, which iterates over the workers collection and calls launchExecutor.
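The core of schedule() is matching waiting apps against free worker resources. The following is a heavily simplified sketch of that idea only -- not Spark's actual code, which also handles spread-out versus consolidated placement:

    // Simplified resource model, invented for illustration
    case class WorkerRes(id: String, var freeCores: Int, var freeMemMb: Int)
    case class AppReq(id: String, coresPerExecutor: Int, memPerExecutorMb: Int, var coresLeft: Int)

    object ScheduleDemo extends App {
      val workers = Seq(WorkerRes("worker-1", 8, 8192), WorkerRes("worker-2", 4, 4096))
      val app = AppReq("app-001", coresPerExecutor = 2, memPerExecutorMb = 1024, coresLeft = 6)

      // Walk the workers and "launch" executors while the app still needs cores,
      // roughly what Master.schedule() plus launchExecutor do together
      for (w <- workers if app.coresLeft > 0) {
        while (app.coresLeft > 0 &&
               w.freeCores >= app.coresPerExecutor &&
               w.freeMemMb >= app.memPerExecutorMb) {
          w.freeCores -= app.coresPerExecutor
          w.freeMemMb -= app.memPerExecutorMb
          app.coresLeft -= app.coresPerExecutor
          println(s"launch executor for ${app.id} on ${w.id}")
        }
      }
    }

Back in the real code, launchExecutor looks like this: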

    def launchExecutor(worker: WorkerInfo, exec: ExecutorDesc) {
      logInfo("Launching executor " + exec.fullId + " on worker " + worker.id)
      // Record how many resources this executor uses on the worker
      worker.addExecutor(exec)
      // The Master sends the Worker a message to launch the Executor
      worker.actor ! LaunchExecutor(masterUrl,
        exec.application.id, exec.id, exec.application.desc, exec.cores, exec.memory)
      // The Master tells the ClientActor that the executor has been added
      exec.application.driver ! ExecutorAdded(
        exec.id, worker.id, worker.hostPort, exec.cores, exec.memory)
    }

Here the Master sends the Worker the message to launch an Executor:

`worker.actor ! LaunchExecutor(masterUrl, exec.application.id, exec.id, exec.application.desc, exec.cores, exec.memory)`

application.desc contains the launch information for the Executor class:

    case LaunchExecutor(masterUrl, appId, execId, appDesc, cores_, memory_) =>
      // ...
      appDirectories(appId) = appLocalDirs
      // Create an ExecutorRunner -- this is important: it holds the Executor's
      // launch configuration and arguments
      val manager = new ExecutorRunner(
        appId,
        execId,
        appDesc.copy(command = Worker.maybeUpdateSSLSettings(appDesc.command, conf)),
        cores_,
        memory_,
        self,
        workerId,
        host,
        webUi.boundPort,
        publicAddress,
        sparkHome,
        executorDir,
        akkaUrl,
        conf,
        appLocalDirs,
        ExecutorState.LOADING)
      executors(appId + "/" + execId) = manager
      // Start the ExecutorRunner
      manager.start()
      // ...

The Worker's receiver gets the LaunchExecutor message; the appDesc object holds the Command, the Executor implementation class, and its arguments.

manager.start() creates a thread:

    def start() {
      // Launch a thread: a child thread helps the Worker start the Executor child process
      workerThread = new Thread("ExecutorRunner for " + fullId) {
        override def run() { fetchAndRunExecutor() }
      }
      workerThread.start()
      // Shutdown hook that kills actors on shutdown.
      shutdownHook = new Thread() {
        override def run() {
          killProcess(Some("Worker shutting down"))
        }
      }
      Runtime.getRuntime.addShutdownHook(shutdownHook)
    }

The thread runs the fetchAndRunExecutor() method; let's look at it:

    def fetchAndRunExecutor() {
      try {
        // Launch the process
        val builder = CommandUtils.buildProcessBuilder(appDesc.command, memory,
          sparkHome.getAbsolutePath, substituteVariables)
        // Build the command line
        val command = builder.command()
        logInfo("Launch command: " + command.mkString("\"", "\" \"", "\""))

        builder.directory(executorDir)
        builder.environment.put("SPARK_LOCAL_DIRS", appLocalDirs.mkString(","))
        // In case we are running this from within the Spark Shell, avoid creating a "scala"
        // parent process for the executor command
        builder.environment.put("SPARK_LAUNCH_WITH_SCALA", "0")

        // Add webUI log urls
        val baseUrl =
          s"http://$publicAddress:$webUiPort/logPage/?appId=$appId&executorId=$execId&logType="
        builder.environment.put("SPARK_LOG_URL_STDERR", s"${baseUrl}stderr")
        builder.environment.put("SPARK_LOG_URL_STDOUT", s"${baseUrl}stdout")

        // Start the child process
        process = builder.start()
        val header = "Spark Executor Command: %s\n%s\n\n".format(
          command.mkString("\"", "\" \"", "\""), "=" * 40)

        // Redirect its stdout and stderr to files
        val stdout = new File(executorDir, "stdout")
        stdoutAppender = FileAppender(process.getInputStream, stdout, conf)

        val stderr = new File(executorDir, "stderr")
        Files.write(header, stderr, UTF_8)
        stderrAppender = FileAppender(process.getErrorStream, stderr, conf)

        // Wait for it to exit; executor may exit with code 0 (when driver instructs it to shutdown)
        // or with nonzero exit code
        val exitCode = process.waitFor()
        // ...
      }
    }

This assembles the class name and arguments into a launch command (the exact assembly details don't matter); in the end builder.start() forks a child process via the system runtime, and that process's main class is CoarseGrainedExecutorBackend. At this point the Executor process is up.
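Stripped of Spark's plumbing, the launch itself is plain java.lang.ProcessBuilder. A minimal sketch of the same pattern fetchAndRunExecutor follows; the command and paths here are placeholders, not what Spark runs:

    import java.io.File

    object ProcessLaunchDemo extends App {
      // Assemble the child command line, as CommandUtils.buildProcessBuilder does
      val builder = new ProcessBuilder("java", "-version")
      builder.directory(new File("/tmp"))                  // hypothetical working dir
      builder.environment.put("SPARK_LOCAL_DIRS", "/tmp")  // extra env for the child
      // Send the child's output to files rather than inheriting ours,
      // mirroring the stdout/stderr FileAppenders above
      builder.redirectOutput(new File("/tmp/stdout"))
      builder.redirectError(new File("/tmp/stderr"))

      val process = builder.start()      // fork the child process
      val exitCode = process.waitFor()   // block until it exits
      println(s"child exited with code $exitCode")
    }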

Executor creation sequence diagram

Starting the Executor's Task-Scheduling Objects

Once the Executor process has started, it first runs its main method, shown below:

    // Entry point of the Executor process
    def main(args: Array[String]) {
      // ...
      // Parse the arguments
      while (!argv.isEmpty) {
        argv match {
          case ("--driver-url") :: value :: tail =>
            driverUrl = value
            argv = tail
          case ("--executor-id") :: value :: tail =>
            executorId = value
            argv = tail
          case ("--hostname") :: value :: tail =>
            hostname = value
            argv = tail
          case ("--cores") :: value :: tail =>
            cores = value.toInt
            argv = tail
          case ("--app-id") :: value :: tail =>
            appId = value
            argv = tail
          case ("--worker-url") :: value :: tail =>
            // Worker url is used in spark standalone mode to enforce fate-sharing with worker
            workerUrl = Some(value)
            argv = tail
          case ("--user-class-path") :: value :: tail =>
            userClassPath += new URL(value)
            argv = tail
          case Nil =>
          case tail =>
            System.err.println(s"Unrecognized options: ${tail.mkString(" ")}")
            printUsageAndExit()
        }
      }

      if (driverUrl == null || executorId == null || hostname == null || cores <= 0 ||
          appId == null) {
        printUsageAndExit()
      }
      // Run the Executor
      run(driverUrl, executorId, hostname, cores, appId, workerUrl, userClassPath)
    }
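The parsing idiom above, matching a List[String] against flag :: value :: tail, is easy to lift out and reuse. A self-contained sketch with just two of the flags (the argument values are made up):

    object ArgParseDemo extends App {
      var driverUrl: String = null
      var cores: Int = 0
      var argv = List("--driver-url", "akka.tcp://sparkDriver@host:7070/user/X", "--cores", "4")

      while (argv.nonEmpty) {
        argv match {
          case "--driver-url" :: value :: tail =>
            driverUrl = value
            argv = tail
          case "--cores" :: value :: tail =>
            cores = value.toInt
            argv = tail
          case tail =>
            sys.error(s"Unrecognized options: ${tail.mkString(" ")}")
        }
      }
      println(s"driverUrl=$driverUrl cores=$cores")
    }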

main then executes the run method:

    private def run(
        driverUrl: String,
        executorId: String,
        hostname: String,
        cores: Int,
        appId: String,
        workerUrl: Option[String],
        userClassPath: Seq[URL]) {
      // ...
      // Create the CoarseGrainedExecutorBackend actor through the actorSystem;
      // this actor talks to the DriverActor
      env.actorSystem.actorOf(
        Props(classOf[CoarseGrainedExecutorBackend],
          driverUrl, executorId, sparkHostPort, cores, userClassPath, env),
        name = "Executor")
      // ...
      env.actorSystem.awaitTermination()
    }

run creates the CoarseGrainedExecutorBackend actor, which prepares to communicate with the DriverActor; its preStart lifecycle method is then invoked:

    override def preStart() {
      logInfo("Connecting to driver: " + driverUrl)
      // The Executor establishes a connection to the DriverActor
      driver = context.actorSelection(driverUrl)
      // The Executor sends the DriverActor a registration message
      driver ! RegisterExecutor(executorId, hostPort, cores, extractLogUrls)
      context.system.eventStream.subscribe(self, classOf[RemotingLifecycleEvent])
    }

The Executor sends the registration message to the DriverActor:
`driver ! RegisterExecutor(executorId, hostPort, cores, extractLogUrls)`

After the DriverActor's receiver gets the message:

    def receiveWithLogging = {
      // Registration message the Executor sends to the DriverActor
      case RegisterExecutor(executorId, hostPort, cores, logUrls) =>
        Utils.checkHostPort(hostPort, "Host port expected " + hostPort)
        if (executorDataMap.contains(executorId)) {
          sender ! RegisterExecutorFailed("Duplicate executor ID: " + executorId)
        } else {
          logInfo("Registered executor: " + sender + " with ID " + executorId)
          // The DriverActor replies to the Executor that registration succeeded
          sender ! RegisteredExecutor

          addressToExecutorId(sender.path.address) = executorId
          totalCoreCount.addAndGet(cores)
          totalRegisteredExecutors.addAndGet(1)
          val (host, _) = Utils.parseHostPort(hostPort)
          // Wrap up the Executor's information
          val data = new ExecutorData(sender, sender.path.address, host, cores, cores, logUrls)
          // This must be synchronized because variables mutated
          // in this block are read when requesting executors
          CoarseGrainedSchedulerBackend.this.synchronized {
            // Add the Executor's info object to the map
            executorDataMap.put(executorId, data)
            if (numPendingExecutors > 0) {
              numPendingExecutors -= 1
              logDebug(s"Decremented number of pending executors ($numPendingExecutors left)")
            }
          }
          listenerBus.post(SparkListenerExecutorAdded(System.currentTimeMillis(), executorId, data))
          // Later used to offer resources and run the actual work
          makeOffers()
        }

The DriverActor's receiver stores the Executor's information in a map and sends the reply `sender ! RegisteredExecutor` back to the CoarseGrainedExecutorBackend:

    override def receiveWithLogging = {
      case RegisteredExecutor =>
        logInfo("Successfully registered with driver")
        val (hostname, _) = Utils.parseHostPort(hostPort)
        executor = new Executor(executorId, hostname, env, userClassPath, isLocal = false)
      // ...

On receiving the message, CoarseGrainedExecutorBackend creates an Executor object to prepare for task execution. With that, Executor creation is complete; the next post covers task scheduling.
