An aar / source-code switching plugin, Plus edition | a scheme that ended up not being adopted


Monorepo simply means putting all of a company's code into a single Git / Mercurial / Subversion repository. For people who have never heard of the concept, it sounds like a fantasy: shouldn't each project have its own Git repository? For many companies using a monorepo, the repository contains not only their own code but also a lot of their dependencies.

Our engineering structure is a monorepo, but some public SDKs have been moved out of the monorepo because we need to manage the dependencies of the base libraries.

Splitting these libraries out is a good thing overall, but it makes debugging inconvenient when bugs appear. At the same time, because base libraries such as the framework have largely been moved out, when a method changes it is hard for the main project to perceive the change and investigate its potential impact.

So at that time I was given a task: find a way to move these repositories back into the main repository.


I then thought about it carefully for a long time and evaluated the feasibility of a scheme. Although the scheme was not adopted in the end, I still think it is quite interesting, so let me briefly walk you through it here.

The code repository is here. Out of laziness there is no remote Maven repository; if you are interested, just read the source yourself. The principle is a relatively simple Gradle Task.

One of the pain points is that since the code was extracted from the monorepo, the versions of its dependencies have drifted apart. If possible, my boss wanted in-repo compilation to use the configuration inside the repo, while what I wanted was convenience outside the repo, so I needed to accommodate both characteristics at the same time.

Completing substitute through group + moduleName

substitute was introduced before in the article Thoughts on Android componentization

demo project RouterAndroid

As introduced in another article before, each module's coordinates are composed of group:moduleName:version.

Gradle natively supports this attribute, but the premise is that the artifact must exist on the remote end. The following is my experiment on the routing component.

We can first print the dependency tree with `./gradlew app:dependencies`:

\--- com.github.leifzhang:compiler:0.5.1 -> project :compiler
     +--- com.google.auto.service:auto-service:1.0-rc7
     |    +--- com.google.auto.service:auto-service-annotations:1.0-rc7
     |    +--- com.google.auto:auto-common:0.10
     |    |    \--- com.google.guava:guava:23.5-jre -> 27.0.1-jre
     |    |         +--- com.google.guava:failureaccess:1.0.1
     |    |         +--- com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
     |    |         +--- com.google.code.findbugs:jsr305:3.0.2
     |    |         +--- org.checkerframework:checker-qual:2.5.2
     |    |         +--- com.google.errorprone:error_prone_annotations:2.2.0
     |    |         +--- com.google.j2objc:j2objc-annotations:1.1
     |    |         \--- org.codehaus.mojo:animal-sniffer-annotations:1.17
     |    \--- com.google.guava:guava:27.0.1-jre (*)
     +--- com.squareup:javapoet:1.13.0
     +--- org.apache.commons:commons-lang3:3.9
     +--- org.apache.commons:commons-collections4:4.1
     +--- project :RouterAnnotation
     |    \--- org.jetbrains.kotlin:kotlin-stdlib-jdk8:1.4.30
     |         +--- org.jetbrains.kotlin:kotlin-stdlib:1.4.30
     |         |    +--- org.jetbrains.kotlin:kotlin-stdlib-common:1.4.30
     |         |    \--- org.jetbrains:annotations:13.0
     |         \--- org.jetbrains.kotlin:kotlin-stdlib-jdk7:1.4.30
     |              \--- org.jetbrains.kotlin:kotlin-stdlib:1.4.30 (*)
     \--- org.jetbrains.kotlin:kotlin-stdlib-jdk7:1.4.30 (*)

Take kapt's configuration as an example. We can see that as long as the remote end can pull the aar, whenever there is a module in the project whose group + moduleName completely matches, the aar will be replaced with its source-code project.

However, out of habit, most people never set a group, and they also give modules generic names such as library, so such name clashes are best avoided if you can.
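For the matching to work at all, each library module needs an explicit group (and ideally version) that mirrors its published coordinates. A minimal sketch in the Gradle Kotlin DSL, using the coordinates from the routing demo above:

```kotlin
// RouterLib/build.gradle.kts
// Give the module explicit coordinates so that group + name can match
// the published artifact com.github.leifzhang:RouterLib:0.5.1
group = "com.github.leifzhang"
version = "0.5.1"
```

With this in place, `p.group == moduleRequested.group && p.name == moduleRequested.module` can actually succeed; a module with the default (empty) group will never match a remote request.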

However, if we are developing purely locally and the module has not been published to the remote end, an error will be reported during the sync phase, so we still need to intervene manually: once the project finishes its Gradle configuration, replace the remote dependency with the local project. As mentioned before, you only need to add the following code to the root build.gradle.

allprojects {
  configurations.all {
    resolutionStrategy.dependencySubstitution.all {
        if (requested is ModuleComponentSelector) {
            val moduleRequested = requested as ModuleComponentSelector
            // Find a local project whose group + name matches the requested module
            val p = rootProject.allprojects.find { p ->
                (p.group == moduleRequested.group && p.name == moduleRequested.module)
            }
            if (p != null) {
                useTarget(project(p.path), "selected local project")
            }
        }
    }
  }
}


Tip: this can also be applied directly to modules that come in via includeBuild.
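Gradle's composite builds also come with their own built-in substitution mechanism, which can express the same aar-to-source mapping declaratively. A hypothetical settings.gradle.kts sketch (the relative path is illustrative; `using` requires Gradle 6.5+, older versions use `with`):

```kotlin
// settings.gradle.kts of the consuming project
includeBuild("../RouterAndroid") {
    dependencySubstitution {
        // Replace the published aar with the source module from the included build
        substitute(module("com.github.leifzhang:RouterLib"))
            .using(project(":RouterLib"))
    }
}
```

The resolutionStrategy approach above is still useful when you want the matching to be automatic rather than listed entry by entry.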

Here is the dependency tree after the complete adjustment:

+--- com.github.leifzhang:RouterLib:0.5.1 -> project :RouterLib
|    +--- project :RouterAnnotation (*)
|    \--- org.jetbrains.kotlin:kotlin-stdlib-jdk7:1.4.30 (*)
+--- com.github.leifzhang:secondmoudle:0.5.1 -> project :secondmoudle
|    +--- androidx.databinding:viewbinding:4.1.1 (*)
|    +--- org.jetbrains.kotlin:kotlin-stdlib:1.4.30 -> 1.4.31 (*)
|    +--- org.jetbrains.kotlin:kotlin-android-extensions-runtime:1.4.30
|    |    \--- org.jetbrains.kotlin:kotlin-stdlib:1.4.30 -> 1.4.31 (*)
|    +--- androidx.appcompat:appcompat:1.3.0 (*)
|    +--- com.github.leifzhang:RouterAnnotation:0.5.1 -> project :RouterAnnotation (*)
|    +--- com.github.leifzhang:RouterLib:0.5.1 -> project :RouterLib (*)
|    \--- com.github.leifzhang:CoroutineSupport:0.5.1 -> project :CoroutineSupport
|         +--- org.jetbrains.kotlinx:kotlinx-coroutines-android:1.4.0 -> 1.4.3 (*)
|         +--- org.jetbrains.kotlin:kotlin-stdlib:1.4.30 -> 1.4.31 (*)
|         +--- androidx.core:core-ktx:1.3.2 -> 1.5.0 (*)
|         +--- androidx.appcompat:appcompat:1.2.0 -> 1.3.0 (*)
|         +--- project :RouterAnnotation (*)
|         \--- project :RouterLib (*)

Gradle repo renovation plan

First let's raise a question and look at the solution with it in mind: what is the difference between include and includeBuild?

include brings a module into the current project's build; the module then uses the current project's properties, such as ext, shared configuration, properties, folder paths, and so on.

includeBuild compiles the project at the given path together with the current project; the two builds remain relatively independent and cannot directly reference each other's submodules.
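The two approaches look like this in a settings.gradle.kts (the paths are illustrative, modeled on the subModules directory used later in this article):

```kotlin
// settings.gradle.kts

// include: the module joins this build and shares its configuration,
// so we can point its projectDir anywhere on disk
include(":RouterLib")
project(":RouterLib").projectDir = File("subModules/RouterAndroid/RouterLib")

// includeBuild: the other project is built as an independent composite build
// with its own settings, plugins, and configuration
includeBuild("subModules/QRScaner")
```

Which one fits depends on how similar the two projects' configurations are, as discussed next.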

So from the description above, when two projects are highly similar, for example they share global ext values and many global configurations, we can bring modules in directly via include. For very large projects, include also has a unique advantage, especially during AGP version upgrades: because the main project's configuration is used, there is basically nothing to worry about.

However, if the module relies on some special plugins that are not defined in the shell project, or some basic configuration does not exist in the shell project, all kinds of strange configuration errors will occur in the included module, and it loses the independence it had in its original repository.

In the other case, if the global ext attributes of the two projects differ significantly, you can use includeBuild. Also, if the project contains customized configuration with low reusability, I suggest includeBuild; it is especially handy for plugin projects.

However, as mentioned earlier, if the Gradle or AGP versions are out of sync, the two projects will not be able to complete includeBuild.

In general, a repo source-switching plugin will pick one of the two technology stacks to complete the project's source switching. In this migration plan, though, only children make choices; adults say "I want both".


This time we define a new yaml file, but this file is only responsible for the modules that are brought in via include.

# Global control switch
src: false
# projects: as the name suggests, the repos to include
projects:
  # a list-based data structure
  - branch: master
    origin: https://github.com/Leifzhang/QRScaner.git
    srcBuild: true
    # the moduleNames participating in mixed compilation
    modules:
      - name: QRScaner
      - name: abb

Compared with other libraries, we need to define less. As for why the modules have to be chosen by hand: a project sometimes contains sample projects and buildSrc. Although these could be filtered out automatically, we still want developers to be actively aware of which modules take part in the mixed build.

What we need to do next is read the file from the project root directory, deserialize the data, and then include the modules into the current settings structure.

// Obtain the data model from the yaml file under the project root
fun inflateInclude(projectDir: File, repo: RepoInfo) {
    val yaml = Yaml()
    val dir = File(projectDir.parentFile, "subModules")
    val f = File(projectDir, "repo-include.yaml")
    if (f.exists()) {
        val repoInfoYaml = yaml.load<LinkedHashMap<String, Any>>(FileInputStream(f))
        if (repoInfoYaml.containsKey("projects")) {
            val modulesList = repoInfoYaml["projects"]
            if (modulesList is MutableList<*>) {
                modulesList.forEach {
                    if (it is LinkedHashMap<*, *>) {
                        val module = parserInclude(it as LinkedHashMap<Any, Any>, dir)
                        module?.let {
                            repo.includeModuleInfo[module.name] = module
                        }
                    }
                }
            }
        }
    }
}

// Generate the data model for a single repo entry
fun parserInclude(map: LinkedHashMap<Any, Any>, project: File): IncludeModuleInfo? {
    val name = map["name"].toString()
    val origin = map["origin"].toString()
    val branch = map["branch"].toString()
    val srcBuild = map["srcBuild"].toString().toBoolean()
    val modules = map["modules"]
    val moduleList = mutableListOf<String>()
    if (modules is MutableList<*>) {
        modules.forEach {
            if (it is String) {
                moduleList.add(it)
            }
        }
    }
    if (moduleList.isEmpty()) {
        return null
    }
    return IncludeModuleInfo(name, origin, srcBuild, project, branch, moduleList)
}

The above is the deserialization code, which is fairly simple; it should be self-explanatory.

override fun apply(settings: Settings) {
    // Listen to project evaluation, and insert the source switching logic
    // before each module's evaluate
    settings.gradle.addProjectEvaluationListener(object : ProjectEvaluationListener {
        override fun beforeEvaluate(project: Project) {
            project.configurations.all {
                it.resolutionStrategy.dependencySubstitution.all { depend ->
                    if (depend.requested is ModuleComponentSelector) {
                        val moduleRequested = depend.requested as ModuleComponentSelector
                        val p = project.rootProject.allprojects.find { p ->
                            (p.group == moduleRequested.group && p.name == moduleRequested.module)
                        }
                        if (p != null) {
                            depend.useTarget(project.project(p.path), "selected local project")
                        }
                    }
                }
            }
        }

        override fun afterEvaluate(project: Project, p1: ProjectState) {
        }
    })

    // Once settings is evaluated, deserialize the data structure
    settings.gradle.addBuildListener(object : BuildAdapter() {

        override fun settingsEvaluated(settings: Settings) {
            var repoInfo = YamlUtils.inflate(settings.rootDir)
            if (repoInfo.moduleInfoMap.isEmpty()) {
                repoInfo = RepoInflater.inflate(settings.rootDir)
            }
            if (repoInfo.moduleInfoMap.isNotEmpty()) {
                RepoLogger.info("RepoSettingPlugin start work")
            } else {
                return
            }
            // includeBuild mode
            repoInfo.moduleInfoMap.forEach { (s, moduleInfo) ->
                if (moduleInfo.srcBuild) {
                    RepoLogger.info("${moduleInfo.name} added to the build via includeBuild")
                    RepoLogger.info("module:${moduleInfo.name} added to project dependencies, branch:" + moduleInfo.curBranch())
                }
            }
            // include mode
            repoInfo.includeModuleInfo.forEach { (s, moduleInfo) ->
                moduleInfo.projectNameList.forEach {
                    RepoLogger.info("$it path is ${moduleInfo.getModulePath(it)}")
                    // the module must be included before its projectDir can be set
                    settings.include(":${it}")
                    settings.project(":${it}").projectDir = File(moduleInfo.getModulePath(it))
                    RepoLogger.info("module:${it} added to project dependencies, branch:" + moduleInfo.curBranch())
                }
            }
        }
    })
}

There are two key points here. One is to set the dependency-substitution strategy on each configuration after the project is pulled, completing the group + moduleName conversion mentioned above.

The other is that after settings finishes evaluating, we first parse the data structure and then, as mentioned before, check whether the folder under the specific path exists. If it does not, we clone the repository; if it does, we pull the specified branch of the project through the command line.
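The clone-or-pull decision might be sketched like this; the function name and command shapes are my assumptions, not the plugin's actual code:

```kotlin
// Hypothetical sketch: compute the git command lines needed to bring a repo
// up to date on the wanted branch. Returning the commands (instead of running
// them inline) keeps the decision logic easy to test.
fun gitCommandsFor(
    dirExists: Boolean,
    path: String,
    origin: String,
    branch: String
): List<List<String>> =
    if (!dirExists) {
        // first time: clone the repo on the requested branch
        listOf(listOf("git", "clone", "-b", branch, origin, path))
    } else {
        // already cloned: switch to the branch and pull the latest commits
        listOf(
            listOf("git", "-C", path, "checkout", branch),
            listOf("git", "-C", path, "pull", "origin", branch)
        )
    }
```

Each command line can then be executed with something like `ProcessBuilder(cmd).inheritIO().start().waitFor()`.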

After completing the above steps, the module can take part in the mixed compilation process in the form of either includeBuild or include.

What are the disadvantages?

This scheme differs from our current monorepo setup: we have done a lot of compilation optimization there, such as caching module aars by commit, and the original structure relies heavily on symbolic links and relative paths.


Although the plan was not adopted in the end, I personally still find it interesting and fun. The data structure it defines is concise and convenient, and it makes full use of some of the capabilities Gradle provides, supplementing and enhancing them.

Added by flhtc on Sun, 06 Mar 2022 05:06:54 +0200