Business practice training of design mode

This article summarizes how commonly used design patterns can be combined with real business and applied across whole projects. Some examples come from real business scenarios, some from large companies, and some from classic cases in open source frameworks. These are just my working notes; if you disagree, go easy on me.

1: Proxy pattern

Pattern definition

Create a proxy object for a target object; clients use the target's functionality through the proxy.

Pattern essence

Control access to an object

Usage scenario

  • Permission control: a user must hold the relevant permission before performing an operation
  • Method proxying: the program adds logic before or after executing a method
  • Function encapsulation: shield the details of some functionality so that callers are unaware of them (e.g. a file upload proxy)
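The "permission control" and "method proxying" scenarios above can be sketched with a minimal static proxy. All class names here (UserService, UserServicePermissionProxy) are illustrative, not from the project:

```java
// Subject interface shared by the real object and its proxy
interface UserService {
    String deleteUser(String id);
}

// Real object that does the actual work
class UserServiceImpl implements UserService {
    public String deleteUser(String id) {
        return "deleted:" + id;
    }
}

// Proxy: controls access and adds logic before/after the real call
public class UserServicePermissionProxy implements UserService {
    private final UserService target;
    private final String role;

    public UserServicePermissionProxy(UserService target, String role) {
        this.target = target;
        this.role = role;
    }

    public String deleteUser(String id) {
        // Before: permission control
        if (!"admin".equals(role)) {
            return "denied";
        }
        // Delegate to the real object
        String result = target.deleteUser(id);
        // After: extra logic, e.g. audit logging
        System.out.println("audit: deleteUser " + id);
        return result;
    }
}
```

The client codes against UserService and cannot tell whether it holds the real object or the proxy; that transparency is what lets the proxy control access.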

Business scenario

Beyond the code level, the proxy pattern can also be applied at the application and architecture level, as shown in the following figure. Small image files can be stored directly in a FastDFS service, while large files, such as product introduction videos, can be stored in a third-party OSS.

Users access OSS and the local FastDFS indirectly through a file upload proxy service. This distributed massive-file management solution applies the proxy pattern not only at the code level but also at the architecture level.

Next, I will use the proxy pattern to implement a file upload feature.

Function Description:

Depending on the file type, upload to a different file server: small files (jpg/png) go to our own FastDFS server, and large files (mp4/avi/vep) go to OSS.

Code implementation:

1. Define file upload interface

public interface FileUpload {
    /****
     * File upload
     */
    String upload(byte[] buffers , String extName);
}

2. File upload via OSS

@Component(value = "aliyunOSSFileUpload")
public class AliyunOSSFileUpload implements FileUpload{

    @Value("${aliyun.oss.endpoint}")
    private String endpoint;
    @Value("${aliyun.oss.accessKey}")
    private String accessKey;
    @Value("${aliyun.oss.accessKeySecret}")
    private String accessKeySecret;
    @Value("${aliyun.oss.key}")
    private String key;
    @Value("${aliyun.oss.bucketName}")
    private String bucketName;
    @Value("${aliyun.oss.backurl}")
    private String backurl;

    /****
     * Upload the file to Aliyun OSS
     */
    @Override
    public String upload(byte[] buffers,String extName) {
        String realName = UUID.randomUUID().toString()+"."+extName ;
        // Create an OSSClient instance.
        OSS ossClient = new OSSClientBuilder().build(endpoint, accessKey, accessKeySecret);
        // <yourObjectName> is the full object path on OSS, including the file suffix, e.g. abc/efg/123.jpg.
        PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, key+realName, new ByteArrayInputStream(buffers));
        // Upload string.
        ObjectMetadata objectMetadata = new ObjectMetadata();
        objectMetadata.setContentType(FileUtil.getContentType("."+extName));
        putObjectRequest.setMetadata(objectMetadata);
        ossClient.putObject(putObjectRequest);

        // Close the OSSClient.
        ossClient.shutdown();
        return backurl+realName;
    }
}

3. File upload via FastDFS

@Component(value = "fastdfsFileUpload")
public class FastdfsFileUpload implements FileUpload{

    @Value("${fastdfs.url}")
    private String url;

    /***
     * File upload
     * @param buffers: File byte array
     * @param extName: Suffix
     * @return
     */
    @Override
    public String upload(byte[] buffers, String extName) {
        /***
         * Return value after file upload
         * uploadResults[0]:The name of the group stored in the file upload, for example: group1
         * uploadResults[1]:File storage path, for example: M00/00/00/wkjthf0dbzaap23maaxz2mmp9om26.jpeg
         */
        String[] uploadResults = null;
        try {
            //Get StorageClient object
            StorageClient storageClient = getStorageClient();
            //Execute file upload
            uploadResults = storageClient.upload_file(buffers, extName, null);
            return url+uploadResults[0]+"/"+uploadResults[1];
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /***
     * Initialize tracker information
     */
    static {
        try {
            //Get the location of the tracker configuration file fdfs_client.conf
            String filePath = new ClassPathResource("fdfs_client.conf").getPath();
            //Load tracker configuration information
            ClientGlobal.init(filePath);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /***
     * Get StorageClient
     * @return
     * @throws Exception
     */
    public static StorageClient getStorageClient() throws Exception{
        //Create TrackerClient object
        TrackerClient trackerClient = new TrackerClient();
        //Get TrackerServer object through TrackerClient
        TrackerServer trackerServer = trackerClient.getConnection();
        //Create StorageClient from TrackerServer
        StorageClient storageClient = new StorageClient(trackerServer,null);
        return storageClient;
    }
}

4. File type configuration

upload:
  filemap:
    aliyunOSSFileUpload: mp4,avi
    fastdfsFileUpload: png,jpg

5. File upload service agent

@Component
@ConfigurationProperties(prefix = "upload")
public class FileUploadProxy implements ApplicationContextAware{

    //File types to handle, injected from the configuration in application.yml
    private Map<String,List<String>> filemap;

    //Inject the Spring container object ApplicationContext
    private ApplicationContext act;

    /***
     * Upload method
     * Receive file object: MultipartFile
     */
    public String upload(MultipartFile file) throws Exception{
        //buffers: File byte array
        byte[] buffers = file.getBytes();
        //extName: suffix, e.g. 1.jpg -> jpg
        String fileName = file.getOriginalFilename();
        String extName = StringUtils.getFilenameExtension(fileName);

        //Circular filemap mapping relation object
        for (Map.Entry<String, List<String>> entry : filemap.entrySet()) {
            //Gets the specified value MP4, avi | PNG, jpg
            List<String> suffixList = entry.getValue();

            //Match whether the file extensions uploaded by the current user match
            for (String suffix : suffixList) {
                if(extName.equalsIgnoreCase(suffix)){
                    //Get the specified key aliyunossfileupload | fastdfsfileupload
                    //Once matched, execute file upload
                    String key = entry.getKey();
                    return act.getBean(key,FileUpload.class).upload(buffers,extName);
                }
            }
        }
        return "";
    }

    //Injection mapping configuration
    public void setFilemap(Map<String, List<String>> filemap) {
        this.filemap = filemap;
    }

    //Injection container
    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        act = applicationContext;
    }
}

6. File upload controller

@RestController
@RequestMapping(value = "/file")
public class FileController {

    @Autowired
    private FileUploadProxy fileUploadProxy;

    /***
     * File upload
     * @param file
     * @return
     * @throws IOException
     */
    @PostMapping(value = "/upload")
    public String upload(MultipartFile file) throws Exception {
        return fileUploadProxy.upload(file);
    }
}

Pattern advantages

Clear, extensible responsibilities

Pattern disadvantages

Increased complexity of the access logic

Open source framework

Most open source frameworks use dynamic proxies. Their uses fall into two parts: one is creating objects, the other is enhancing or controlling functionality. The implementations fall into two categories, as follows:

Creating objects

In frameworks, this is usually done by implementing Spring's FactoryBean interface:

  • Dubbo: ReferenceBean
  • Dubbo: the Adaptive implementation in ExtensionLoader is a typical dynamic proxy
  • MyBatis: MapperFactoryBean
  • Feign: FeignClientFactoryBean

Function enhancement

In frameworks, interceptors are generally used; Spring's AOP mechanism is typical. There are two categories:

  • Advice (notification)
  • Interceptor (interception)
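The dynamic proxies these frameworks rely on are built on the JDK's java.lang.reflect.Proxy. A minimal sketch of the enhancement idea, where the Greeter interface and the enhance helper are illustrative names, not framework code:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

public class DynamicProxyDemo {

    // Wrap any single-interface implementation with before/after logic
    @SuppressWarnings("unchecked")
    static <T> T enhance(T target, Class<T> type) {
        InvocationHandler handler = (proxy, method, args) -> {
            // Before-advice: runs ahead of every call on the proxy
            System.out.println("before " + method.getName());
            Object result = method.invoke(target, args);
            // After-advice: runs after the real call returns
            System.out.println("after " + method.getName());
            return result;
        };
        return (T) Proxy.newProxyInstance(
                type.getClassLoader(), new Class<?>[]{type}, handler);
    }

    public static void main(String[] args) {
        Greeter proxied = enhance(name -> "hello " + name, Greeter.class);
        System.out.println(proxied.greet("dubbo"));
    }
}
```

The InvocationHandler sees every call, which is exactly how framework interceptor chains hook in: each link wraps the call with its own before/after logic and delegates onward.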

2: Flyweight pattern

Pattern definition

Sharing is used to support large numbers of fine-grained objects efficiently. By sharing existing objects, the pattern greatly reduces the number of objects that must be created, avoiding the overhead of many similar instances and improving the utilization of system resources.

Pattern essence

Reduce duplicate object creation through sharing, separating shared from unshared state

Usage scenario

  • A large number of duplicate objects need to be created
  • The created object needs to be passed in the whole context

Business scenario

Session tracking in a traditional project uses the Session or Cookie, which is shared across the whole project, but microservice projects use neither Sessions nor Cookies, so session tracking is difficult to implement there. In current microservice projects, the mainstream approach to identifying users is for the front end to store the user token and carry it in the request header on every request; the back end reads the token from the header each time to identify the user. User identity information is needed in many places while the project runs. For example, when placing an order we must know which user the current order belongs to, and when recording the key log of placing an order we need to record both the operation and the user information. Key logs are generally captured by an AOP interceptor, and we cannot pass the user identity information to the AOP layer directly, so we can use the flyweight pattern to share the user session information.

Function description

User orders, session sharing

code implementation

1. Define abstract shared session information

@Data
@NoArgsConstructor
@AllArgsConstructor
@ToString
public abstract class Session {
    private String username;
    private String name;
    private String sex;
    private String role;
    private Integer level;

    //Additional operations
    public abstract void handler();
}

2. Shared session implementation

public class SessionShare extends Session {

    //Constructor
    public SessionShare(String username, String name, String sex, String role, Integer level) {
        super(username, name, sex, role, level);
    }

    //Implementation of additional functions
    @Override
    public void handler() {
        System.out.println("To share user information!");
    }
}

3. Storage session sharing

@Component
public class SessionThreadLocal {

    //1. Create a ThreadLocal to store the shared objects under the thread
    private static ThreadLocal<Session> sessions = new ThreadLocal<Session>();

    //2. Add shared objects
    public void add(Session session){
        sessions.set(session);
    }

    //3. Get shared objects
    public Session get(){
        return sessions.get();
    }

    //4. Remove shared objects
    public void remove(){
        sessions.remove();
    }
}

Why not define these as static methods? Registering SessionThreadLocal as a Spring bean with instance methods lets it be injected and replaced like any other component, while the ThreadLocal itself remains static and shared.

4. Shared session storage

Generally, user information is obtained in an interceptor and stored in the thread context object; it is then read via get() wherever it is needed.

@Component
public class AuthorizationInterceptor implements HandlerInterceptor {

    @Autowired
    private SessionThreadLocal sessionThreadLocal;

    /****
     * Store user sessions in ThreadLocal
     * @param request
     * @param response
     * @param handler
     * @return
     * @throws Exception
     */
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        try {
            //Get token
            String authorization = request.getHeader("token");
            //Parse token
            if(!StringUtils.isEmpty(authorization)){
                Map<String, Object> tokenMap = JwtTokenUtil.parseToken(authorization);
                //Encapsulate the user identity information and store it in ThreadLocal for sharing by the current thread
                //1. Encapsulate the information to be shared
                //2. Create an object to inherit the encapsulation information and share the object every time (if you don't need to share, you can create another object to inherit it)
                //3. Create a shared management object to realize the functions of adding, acquiring and removing shared information
                SessionShare session = new SessionShare(
                      tokenMap.get("username").toString(),
                      tokenMap.get("name").toString(),
                      tokenMap.get("sex").toString(),
                      tokenMap.get("role").toString(),
                      Integer.valueOf(tokenMap.get("level").toString())
                );
                sessionThreadLocal.add(session);
                return true;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        //Output token verification failed
        response.setContentType("application/json;charset=utf-8");
        response.getWriter().print("Identity verification failed!");
        response.getWriter().close();
        return false;
    }

    /**
     * Remove session information.
     * afterCompletion (rather than postHandle) also runs when the handler throws,
     * so the ThreadLocal is always cleaned up and cannot leak across pooled threads.
     */
    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
        sessionThreadLocal.remove();
    }
}

5. Shared session usage

For example, when a user places an order, we must know the user information for that order. We don't need to query it; we can read it directly from the thread context, as follows:

    public int add(Order order) {
        order.setPaymoney(100); //Settlement price
        order.setMoney(100);    //Order price
        //Get the shared object from the thread via the flyweight pattern
        order.setUsername(sessionThreadLocal.get().getUsername());
        return orderDao.add(order);
    }

Mode advantages

  • Reduce the creation of a large number of duplicate objects
  • Reduce the use of memory and improve the performance of the system

Mode disadvantages

  • To make objects shareable, state that cannot be shared must be externalized, which increases program complexity

Open source framework

The JDK also uses this pattern, for example when creating Integer and Long objects. See the following code:

public final class Integer extends Number implements Comparable<Integer> {
    
    
    public static Integer valueOf(int i) {
        if (i >= IntegerCache.low && i <= IntegerCache.high)
            return IntegerCache.cache[i + (-IntegerCache.low)];
        return new Integer(i);
    }
    
    private static class IntegerCache {
        static final int low = -128;
        static final int high;
        static final Integer cache[];

        static {
            // high value may be configured by property
            int h = 127;
            String integerCacheHighPropValue =
                sun.misc.VM.getSavedProperty("java.lang.Integer.IntegerCache.high");
            if (integerCacheHighPropValue != null) {
                try {
                    int i = parseInt(integerCacheHighPropValue);
                    i = Math.max(i, 127);
                    // Maximum array size is Integer.MAX_VALUE
                    h = Math.min(i, Integer.MAX_VALUE - (-low) -1);
                } catch( NumberFormatException nfe) {
                    // If the property cannot be parsed into an int, ignore it.
                }
            }
            high = h;

            cache = new Integer[(high - low) + 1];
            int j = low;
            for(int k = 0; k < cache.length; k++)
                cache[k] = new Integer(j++);

            // range [-128, 127] must be interned (JLS7 5.1.7)
            assert IntegerCache.high >= 127;
        }

        private IntegerCache() {}
    }
}

If the argument to Integer.valueOf is between -128 and 127 (high defaults to 127), the value is taken from the cache instead of being created anew. Looking at the IntegerCache class, it pre-creates these objects in a static block and stores them in an array.
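This behavior is easy to verify, assuming the default cache range (the java.lang.Integer.IntegerCache.high property unset):

```java
public class IntegerCacheDemo {
    public static void main(String[] args) {
        // Values in [-128, 127] come from the cache: same instance each time
        Integer a = Integer.valueOf(127);
        Integer b = Integer.valueOf(127);
        System.out.println(a == b);      // true: both point at the cached object

        // Values outside the range are created anew: different instances
        Integer c = Integer.valueOf(128);
        Integer d = Integer.valueOf(128);
        System.out.println(c == d);      // false: two distinct objects
        System.out.println(c.equals(d)); // true: still equal by value
    }
}
```

This is also why Integer values should be compared with equals rather than ==: the flyweight cache makes == appear to work, but only for small values.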

3: Decorator pattern

Pattern definition

Dynamically add new functionality to an object without changing its structure

Pattern essence

Dynamic composition

Usage scenario

  • Add responsibilities to objects dynamically and transparently, without affecting other objects
  • Inheritance is not a suitable way to extend the behavior

Business scenario

When submitting an order, the order price and the settlement price are two different things. The order price is the transaction price of the commodity, while the settlement price is the amount the user finally pays. The final payment amount is neither fixed nor simply the transaction price: many factors can change the settlement price, such as 10 yuan off over 100 yuan, or 5 yuan off for VIP users. We can use the decorator pattern to calculate the order settlement amount.

Function description

Settlement price calculation, with the different price rules applied as nested operations

Design ideas

1. Create an interface (MoneySum) defining the order price calculation, since all price adjustments start from the order price.
2. Create the order price calculation class (OrderMoneySum), implementing MoneySum to compute the base order price.
3. Create the decorator (DecoratorMoneySum) as the extension point.
4. Implement the full-reduction calculation as a decorator subclass (FullMoneySum): first compute the order amount, then apply the full-reduction discount.
5. Implement the VIP discount as a decorator subclass (VipMoneySum): first compute the order amount, then apply the VIP discount.

code implementation

1. Create order calculation interface

public interface MoneySum {
    /***
     * Order price [settlement] operation
     */
    void money(Order order);
}

2. Create default order calculation interface implementation

@Component(value = "orderMoneySum")
public class OrderMoneySum implements MoneySum{
    @Autowired
    private ItemDao itemDao;
    /****
     * Base price calculation
     * @param order
     */
    @Override
    public void money(Order order) {
        //Query commodity
        Item item = itemDao.findById(order.getItemId());
        //Commodity price * purchase quantity
        order.setMoney(item.getPrice()*order.getNum());   //Order price
        order.setPaymoney(item.getPrice()*order.getNum());//Settlement price
    }
}

3. Create decorator of order calculation interface

public abstract class DecoratorMoneySum implements MoneySum {
    //Extended object
    private MoneySum moneySum;

    public void setMoneySum(MoneySum moneySum) {
        this.moneySum = moneySum;
    }

    //Price calculation
    @Override
    public void money(Order order) {
        moneySum.money(order);
    }
}

4. Implement the full-reduction calculation as a decorator extension

@Component(value = "fullMoneySum")
public class FullMoneySum extends DecoratorMoneySum{
    //Enhance the original function
    @Override
    public void money(Order order) {
        //Original function
        super.money(order);
        //Enhancement
        moneySum(order);
    }

    //$5 off over 100
    public void moneySum(Order order){
        if(order.getPaymoney() >= 100){
            order.setPaymoney(order.getPaymoney() - 5);
        }
    }
}

5. Implement the VIP price discount calculation

@Component(value = "vipMoneySum")
public class VipMoneySum extends DecoratorMoneySum {
    //Enhance the original method
    @Override
    public void money(Order order) {
        //Original function
        super.money(order);
        //Enhancement
        vipMoneySum(order);
    }

    //Vip price discount: -5
    public void vipMoneySum(Order order){
        order.setPaymoney(order.getPaymoney() - 5);
    }
}

6. Price calculation of order payment

    public int add(Order order) {
        //Get the shared object from the thread via the flyweight pattern
        order.setUsername(sessionThreadLocal.get().getUsername());
        //Nested settlement price calculation
        //Enhance orderMoneySum [base price calculation] with the full reduction
        fullMoneySum.setMoneySum(orderMoneySum);
        //Enhance fullMoneySum [full-reduction calculation] with the VIP price calculation
        vipMoneySum.setMoneySum(fullMoneySum);
        vipMoneySum.money(order);
        //Add order
        int addCount = orderDao.add(order);
        return addCount;
    }

Note: in plain design-pattern demos the decorated objects are usually assembled directly with new; in this business project the decorator beans are wired through the Spring container instead.
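Stripped of Spring, the same nested enhancement can be assembled with new. DemoOrder and the class names below are illustrative stand-ins for the beans above:

```java
// Minimal order holder for the demo
class DemoOrder {
    int paymoney;
    DemoOrder(int paymoney) { this.paymoney = paymoney; }
}

interface DemoMoneySum {
    void money(DemoOrder order);
}

// Base calculation: the settlement price starts at the order price
class BaseMoneySum implements DemoMoneySum {
    public void money(DemoOrder order) { /* price already set */ }
}

// Decorator holding the component it enhances
abstract class DemoDecorator implements DemoMoneySum {
    protected final DemoMoneySum inner;
    DemoDecorator(DemoMoneySum inner) { this.inner = inner; }
    public void money(DemoOrder order) { inner.money(order); }
}

// $5 off over 100
class FullReduction extends DemoDecorator {
    FullReduction(DemoMoneySum inner) { super(inner); }
    public void money(DemoOrder order) {
        super.money(order);
        if (order.paymoney >= 100) order.paymoney -= 5;
    }
}

// VIP: another 5 off
class VipReduction extends DemoDecorator {
    VipReduction(DemoMoneySum inner) { super(inner); }
    public void money(DemoOrder order) {
        super.money(order);
        order.paymoney -= 5;
    }
}

public class DecoratorChainDemo {
    public static int settle(int price) {
        DemoOrder order = new DemoOrder(price);
        // Nest the decorators: base -> full reduction -> vip
        DemoMoneySum chain = new VipReduction(new FullReduction(new BaseMoneySum()));
        chain.money(order);
        return order.paymoney;
    }
}
```

Each decorator first calls through to the component it wraps and then adds its own rule, so settle(100) yields 100 -> 95 (full reduction) -> 90 (VIP).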

Mode advantages

  • Simplify high-level definitions
  • Easier to reuse functions
  • More flexible than inheritance

Mode disadvantages

  • Multilayer decoration is more complex

Open source framework

  • JDK: the various I/O stream wrappers
  • Dubbo: ProtocolFilterWrapper. Taking the filter chain on the Provider side as an example: EchoFilter -> ClassLoaderFilter -> GenericFilter -> ContextFilter -> ExecuteLimitFilter -> TraceFilter -> TimeoutFilter -> MonitorFilter -> ExceptionFilter
  • Dubbo: ProtocolListenerWrapper, ListenerInvokerWrapper, InvokerWrapper

4: Strategy pattern

Pattern definition

The strategy pattern encapsulates algorithms, separating the responsibility for using an algorithm from the algorithm itself and delegating the algorithms to different objects to manage. It usually wraps a family of algorithms into a set of strategy classes, as subclasses of an abstract strategy class.

Pattern essence

Separate algorithm selection from algorithm implementation

Usage scenario

  • Multiple classes differ only slightly in algorithm or behavior
  • Algorithms need to be switched freely

Business scenario

When users buy goods, different discounts are often given according to VIP level, especially in online malls. Based on a real e-commerce case, we implement a VIP-level price system:

  • Vip0 -> regular price
  • Vip1 -> 5 yuan off
  • Vip2 -> 30% off
  • Vip3 -> 50% off

Function description

The settlement price is calculated according to different levels of Vip

code implementation

1. Create VIP price calculation interface

public interface VipMoney {
    /***
     * Amount calculation
     */
    Integer money(Integer money);
}

2. Create different vip discount strategies

@Component(value = "vipOne")
public class VipOne implements VipMoney {

    //Vip1 price calculation
    @Override
    public Integer money(Integer money) {
        return money;
    }
}


@Component(value = "vipTwo")
public class VipTwo implements VipMoney {

    //Vip2 price calculation
    @Override
    public Integer money(Integer money) {
        return money-5;
    }
}

@Component(value = "vipThree")
public class VipThree implements VipMoney {

    //Vip3 price calculation
    @Override
    public Integer money(Integer money) {
        return (int)(money*0.5);
    }
}

@Component(value = "vipFour")
public class VipFour implements VipMoney {

    //Vip4 price calculation
    @Override
    public Integer money(Integer money) {
        return (int)(money*0.1);
    }
}
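Before wiring this into Spring, the core idea — a lookup table of interchangeable algorithms instead of an if/else chain — can be sketched in plain Java. The discount values mirror the classes above; the class name is illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntUnaryOperator;

// Plain-Java strategy factory: level -> pricing strategy, no if/else chain
public class VipPriceFactory {
    private static final Map<Integer, IntUnaryOperator> STRATEGIES = new HashMap<>();

    static {
        STRATEGIES.put(1, money -> money);               // regular price
        STRATEGIES.put(2, money -> money - 5);           // 5 off
        STRATEGIES.put(3, money -> (int) (money * 0.5)); // half price
        STRATEGIES.put(4, money -> (int) (money * 0.1)); // pay 10%
    }

    public static int pay(int level, int money) {
        // Fall back to the regular price for unknown levels
        return STRATEGIES.getOrDefault(level, m -> m).applyAsInt(money);
    }
}
```

Adding a new tier is a one-line map entry; the StrategyFactory below does the same job but resolves the strategy beans from the Spring container via configuration.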

3. Create a policy factory

@Component
@ConfigurationProperties(prefix = "strategy")
public class StrategyFactory implements ApplicationContextAware {

    //① Inject ApplicationContext
    //vipOne:VipOneInstance
    //vipTwo:VipTwoInstance
    //vipThree:VipThreeInstance
    //vipFour:VipFourInstance
    private ApplicationContext act;

    //1:vipOne
    //2:vipTwo
    //3:vipThree
    //4:vipFour
    private Map<Integer,String> strategyMap;

    /***
     * Obtain the policy instance corresponding to the user through the level
     */
    public VipMoney get(Integer level){
        //Get the policy instance ID corresponding to the level
        String id = strategyMap.get(level);
        //Get the policy instance from the container according to the policy instance ID
        return act.getBean(id,VipMoney.class);
    }


    //Mapping information in injection configuration
    public void setStrategyMap(Map<Integer, String> strategyMap) {
        this.strategyMap = strategyMap;
    }

    //Injection container
    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        act = applicationContext;
    }
}

4. Strategy configuration

strategy:
  strategyMap:
    1: vipOne
    2: vipTwo
    3: vipThree
    4: vipFour

5. Strategy use

    public void vipMoney(Order order){
        //Previously hardcoded: order.setPaymoney(order.getPaymoney()-5);
        //Get the user's level
        Integer level = sessionThreadLocal.get().getLevel();
        //Get the price discount strategy
        VipMoney vipMoney = strategyFactory.get(level);
        Integer payMoney = vipMoney.money(order.getPaymoney());
        order.setPaymoney(payMoney);
    }

Mode advantages

  • The algorithm can be switched freely
  • Easy to expand
  • Eliminate multiple logical judgments

Mode disadvantages

  • The number of strategy classes grows as strategies are added
  • All strategy classes are exposed to callers

Open source framework

  • Dubbo: a typical use of the strategy pattern is the LoadBalance implementation

5: Template method pattern

Pattern definition

Define the skeleton of an algorithm and defer some steps to subclasses, so that subclasses can redefine specific steps without changing the algorithm's structure.

Pattern essence

Fix the algorithm skeleton

Usage scenario

  • The algorithm skeleton needs to be fixed to realize an algorithm invariant part
  • Each subclass has a common behavior
  • You need to control the extension of subclasses

Business scenario

When data is updated in real time, business staff want to verify that the modified values are correct. In our scenario there are both single-table and multi-table data updates, but the server receiving the data does not know which kind it has, so the data must be split by table before the update can be performed. The template method fits here.
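The shape of the solution can be sketched before the real code: a fixed skeleton that validates the input, with the table-cutting step deferred to subclasses. Names here are illustrative, simplified from the classes below:

```java
// Template-method sketch of the split flow: the skeleton is fixed,
// only the cutting step varies per subclass.
abstract class SplitTemplate {
    // Fixed algorithm skeleton: validate, then delegate the variable step
    public final String split(String payload) {
        if (payload == null || payload.isEmpty()) {
            throw new IllegalArgumentException("external data not valid");
        }
        return doSplit(payload);
    }

    // Step redefined by subclasses
    protected abstract String doSplit(String payload);
}

class SingleTableSplit extends SplitTemplate {
    protected String doSplit(String payload) {
        return "single-table update: " + payload;
    }
}

class MultiTableSplit extends SplitTemplate {
    protected String doSplit(String payload) {
        return "multi-table update: " + payload;
    }
}

public class SplitTemplateDemo {
    public static String run(SplitTemplate t, String payload) {
        return t.split(payload);
    }
}
```

Because split is final, subclasses can change only the cutting step, never the validation-then-cut order — the same contract AbstractTableSplit enforces below.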

Function description

Real time update for single table or multi table data

code implementation

1. Create table cutting interface

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 */
public interface TableSplit {
    /**
     * cutting data is converted to the corresponding update request object
     * @param splitParam splitParam
     * @return ModifyRequest
     */
    ModifyRequest split(SplitParam splitParam);
}

2. Implementation of table cutting and updating template

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description table split common base class
 */
@Slf4j
public abstract class AbstractTableSplit implements TableSplit {
    /**
     * cutting data is converted to the corresponding update request object
     * @param splitParam splitParam
     * @return ModifyRequest
     */
    @Override
    public ModifyRequest split(SplitParam splitParam) {
        DataCacheFacade dataCacheFacade = SpringContextUtils.createBean(DataCacheFacade.class);
        if (null!=dataCacheFacade){
            String orderId = splitParam.getOrderId();
            String fixedKey = splitParam.getFixedKey();
            DrvOutput drvOutput = DrvOpUtils.getDpuOutPut(dataCacheFacade, orderId, fixedKey,splitParam.getRuleId()+"");
            if (drvOutput!=null){
                ModifyRequest modifyRequest = doSplitTable(drvOutput);
                log.info("modify request:{}", JSON.toJSONString(modifyRequest));
                return modifyRequest;
            }
            else {
                throw new ExternalDataNotValidException("external data not valid");
            }
        }

        return null;
    }

    /**
     * split table data
     * @param drvOutput drvOutput
     * @return ModifyRequest
     */
    protected abstract ModifyRequest doSplitTable(DrvOutput drvOutput);
}

3. Single table data update

package com.abcft.phoenix.paas.verify.execution.modify.split;

import com.abcft.phoenix.common.bean.DrvOutput;
import com.abcft.phoenix.common.bean.DrvTableColumn;
import com.abcft.phoenix.common.bean.DrvTableRow;
import com.abcft.phoenix.paas.verify.execution.modify.ModifyRequest;
import com.abcft.phoenix.paas.verify.execution.modify.domain.ModifyRow;
import com.abcft.phoenix.paas.verify.execution.modify.domain.ModifyTable;
import com.google.common.collect.Lists;
import lombok.extern.slf4j.Slf4j;

import java.util.List;

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 * for example sql: select a.id,a.name,a.like from user
 */
@Slf4j
public class OneTableSplit extends AbstractTableSplit{
    /**
     * split table data
     * @param drvOutput drvOutput
     * @return ModifyRequest
     */
    @Override
    protected ModifyRequest doSplitTable(DrvOutput drvOutput) {
        log.info("come into one table split....");
        ModifyRequest modifyRequest = new ModifyRequest();
        ModifyTable modifyTable = new ModifyTable();
        // split row list
        List<ModifyRow> modifyRows = splitRow(drvOutput);
        Long dbId = Long.parseLong(drvOutput.getDbId()+"");
        List<ModifyTable> modifyTables = Lists.newArrayList();
        modifyTable.setTableName(getTableName(drvOutput));
        modifyTable.setModifyRows(modifyRows);
        modifyTables.add(modifyTable);
        modifyRequest.setDbId(dbId);
        modifyRequest.setModifyTables(modifyTables);

        return modifyRequest;
    }


    /**
     * in this case, do not judge the related set, because it has been judged when getting row
     * @param drvOutput drvOutput
     * @return String
     */
    private String getTableName(DrvOutput drvOutput){
        String tableName = drvOutput.getDrvTablesList().get(0).getTableName();
        log.info("OneTableSplit table name is:{}",tableName);
        return tableName;
    }


    /**
     * traverse all rows to form an updated modify row collection
     * @param drvOutput drvOutput
     * @return List<ModifyRow>
     */
    private List<ModifyRow> splitRow(DrvOutput drvOutput){
        List<ModifyRow> modifyRows = Lists.newLinkedList();
        checkTableData(drvOutput);
        for (DrvTableRow drvTableRow : drvOutput.getDrvTablesList().get(0).getDrvTableRowsList()){
            List<DrvTableColumn> drvTableColumnsList = drvTableRow.getDrvTableColumnsList();
            ModifyRow modifyRow = setModifyRow(drvTableRow, drvTableColumnsList);
            modifyRows.add(modifyRow);
        }
        return modifyRows;
    }
}

4. Multi table data update

package com.abcft.phoenix.paas.verify.execution.modify.split;

import com.abcft.phoenix.common.bean.DrvOutput;
import com.abcft.phoenix.common.bean.DrvTableColumn;
import com.abcft.phoenix.common.bean.DrvTableList;
import com.abcft.phoenix.common.bean.DrvTableRow;
import com.abcft.phoenix.paas.verify.execution.modify.ModifyRequest;
import com.abcft.phoenix.paas.verify.execution.modify.domain.ModifyRow;
import com.abcft.phoenix.paas.verify.execution.modify.domain.ModifyTable;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import lombok.extern.slf4j.Slf4j;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 * for example sql: select a.id as a1,b.id as b1,a.name as a2,b.emp as b2 from a,b where a.id = b.id
 */
@Slf4j
public class MultiTableFieldSplit extends AbstractTableSplit{

    /**
     * split table data
     *
     * @param drvOutput drvOutput
     * @return ModifyRequest
     */
    @Override
    protected ModifyRequest doSplitTable(DrvOutput drvOutput) {
        log.info("come into multi table field split");
        checkTableData(drvOutput);
        ModifyRequest modifyRequest = new ModifyRequest();
        List<DrvTableList> drvTablesList = drvOutput.getDrvTablesList();
        List<DrvTableRow> drvTableRowsList = drvTablesList.get(0).getDrvTableRowsList();
        List<Map<String,ModifyRow>> mapRowList = Lists.newLinkedList();
        for (DrvTableRow tableRow : drvTableRowsList){
            Map<String, ModifyRow> modifyRowMap = splitTableRow(tableRow);
            mapRowList.add(modifyRowMap);
        }
        List<ModifyTable> modifyTables = splitModifyTables(mapRowList);
        return modifyRequest.setModifyTables(modifyTables).setDbId(Long.parseLong(drvOutput.getDbId()+""));
    }


    /**
     * group the rows by table name, so that each table maps to its own list of modify rows
     * @param mapList mapList
     * @return List<ModifyTable>
     */
    private List<ModifyTable> splitModifyTables(List<Map<String,ModifyRow>> mapList){
        // key: table_name, value: List<ModifyRow>
        Map<String, List<ModifyRow>> tableModifyMap = Maps.newHashMap();
        // e.g. entries (t1,r1),(t1,r2),(t2,r3) group into t1 -> [r1,r2], t2 -> [r3]
        for (Map<String, ModifyRow> map : mapList) {
            for (Map.Entry<String, ModifyRow> entry : map.entrySet()) {
                tableModifyMap.computeIfAbsent(entry.getKey(), k -> new ArrayList<>()).add(entry.getValue());
            }
        }
        return splitTables(tableModifyMap);
    }

    /**
     * according to the name of the table, get all modify rows and organize them into corresponding modify tables
     * @param tableModifyMap tableModifyMap
     * @return List<ModifyTable>
     */
    private List<ModifyTable> splitTables(Map<String ,List<ModifyRow>> tableModifyMap){
        List<ModifyTable> modifyTables = Lists.newLinkedList();
        for (Map.Entry<String, List<ModifyRow>> entry : tableModifyMap.entrySet()) {
            // create a fresh ModifyTable per entry; reusing a single instance (as the
            // original did) would leave every list element pointing at the last table
            ModifyTable modifyTable = new ModifyTable();
            modifyTable.setTableName(entry.getKey());
            modifyTable.setModifyRows(entry.getValue());
            modifyTables.add(modifyTable);
        }
        return modifyTables;
    }


    /**
     * assemble each column's new value, column name and owning table into a map structure
     * @param tableRow tableRow
     * @return Map<String,ModifyRow>
     */
    private Map<String, ModifyRow> splitTableRow(DrvTableRow tableRow) {
        // key: table_name, value: modify row
        Map<String, ModifyRow> tableRowMap = Maps.newHashMap();
        Integer rowId = ((Long) tableRow.getRowId()).intValue();
        // a row from a multi-table query can carry columns of several tables,
        // so collect the columns per owning table instead of keeping only the last one
        for (DrvTableColumn drvTableColumn : tableRow.getDrvTableColumnsList()) {
            ModifyRow modifyRow = tableRowMap.computeIfAbsent(drvTableColumn.getTableName(), k -> {
                ModifyRow row = new ModifyRow();
                row.setRowId(rowId);
                return row;
            });
            // record the column's new value on the row (the original built this map but
            // never attached it to the row; the setter name on ModifyRow is assumed)
            modifyRow.putModifyValue(drvTableColumn.getColumnName(), drvTableColumn.getUpdateValue());
        }
        return tableRowMap;
    }
}
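For the example SQL in the class comment (`select a.id as a1, b.id as b1, a.name as a2, b.emp as b2 from a,b where a.id = b.id`), the per-table splitting can be illustrated with a reduced sketch in which plain maps stand in for the domain types:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// toy column: (tableName, columnName, updateValue)
record Column(String table, String name, String value) {}

public class SplitDemo {
    // group one result row's columns by the table they belong to,
    // mirroring what splitTableRow produces for a multi-table query
    static Map<String, Map<String, String>> splitRow(List<Column> row) {
        Map<String, Map<String, String>> byTable = new LinkedHashMap<>();
        for (Column c : row) {
            byTable.computeIfAbsent(c.table(), k -> new LinkedHashMap<>()).put(c.name(), c.value());
        }
        return byTable;
    }

    public static void main(String[] args) {
        // one result row of: select a.id as a1, b.id as b1, a.name as a2, b.emp as b2 ...
        List<Column> row = List.of(
                new Column("a", "id", "1"), new Column("b", "id", "1"),
                new Column("a", "name", "tom"), new Column("b", "emp", "dev"));
        System.out.println(splitRow(row)); // {a={id=1, name=tom}, b={id=1, emp=dev}}
    }
}
```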

5. Data update instance encapsulation

public class TableSplitFactory {

    private TableSplitFactory(){}

    private static final Map<String,TableSplit> TABLE_SPLIT_MAP = Maps.newHashMap();

    static {
        TABLE_SPLIT_MAP.put(SplitType.ONE_TABLE_SPLIT.name(),new OneTableSplit());
        TABLE_SPLIT_MAP.put(SplitType.MULTI_TABLE_FIELD_SPLIT.name(),new MultiTableFieldSplit());
        TABLE_SPLIT_MAP.put(SplitType.MULTI_TABLE_DETAIL_SPLIT.name(),new MultiTableDetailSplit());
    }


    private static TableSplit instance(String type){
        if (type==null){
            throw new NullPointerException("table data split type is null");
        }
        else if (!TABLE_SPLIT_MAP.containsKey(type)){
            throw new IllegalArgumentException("table data split type is not valid");
        }
        else {
            return TABLE_SPLIT_MAP.get(type);
        }
    }


    /**
     * call a specific instance of table data cutting to cut the data and assemble it into a modified object
     * @param type type
     * @param splitParam splitParam
     * @return ModifyRequest
     */
    public static ModifyRequest split(String type, SplitParam splitParam){
        return instance(type).split(splitParam);
    }
}

Mode advantages

  • Encapsulate invariant part
  • Extract common code, easy to maintain
  • The behavior is controlled by the parent class and implemented by the child class
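The bullets above describe the classic template-method shape. A minimal, self-contained sketch (toy class names, not the project's real splitters) might look like:

```java
// the parent class fixes the skeleton; subclasses fill in one step
abstract class AbstractSplitTemplate {
    // template method: the invariant ordering of validate -> split
    public final String split(String input) {
        if (input == null || input.isEmpty()) {
            throw new IllegalArgumentException("split input is empty");
        }
        return doSplit(input);
    }

    // the variable step, implemented by subclasses
    protected abstract String doSplit(String input);
}

class SimpleSplit extends AbstractSplitTemplate {
    @Override
    protected String doSplit(String input) {
        return "one:" + input;
    }
}

public class TemplateDemo {
    public static void main(String[] args) {
        System.out.println(new SimpleSplit().split("t1")); // one:t1
    }
}
```

Declaring `split` as `final` is what makes the skeleton hard to alter from subclasses, which is also why the skeleton is not easy to upgrade later.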

Mode disadvantages

  • The algorithm skeleton is not easy to upgrade

Open source framework

  • Spring: JdbcTemplate, TransactionTemplate, JmsTemplate, JpaTemplate
  • Dubbo:

6: State mode

Schema definition

For stateful objects, extract the complex "judgment logic" into separate state objects, allowing an object to alter its behavior when its internal state changes.

Mode essence

The state changes its behavior

Usage scenario

  • An object's behavior needs to change at runtime depending on its state
  • An operation contains large multi-branch conditionals that depend on the object's state

Business scenario

In an e-commerce system, a different operation must be performed every time the order status changes, which makes it a natural fit for the State pattern. When an order is paid, we must immediately notify the merchant to ship. When an order is cancelled, we must roll back inventory, and if it has already been paid we must also issue a refund. Whether to notify the merchant or roll back inventory is determined entirely by the order status, so the State pattern applies here.

Function description

Perform a different operation depending on the order status (payment notice, shipment notice, cancellation notice, inventory rollback)

Code implementation

1. Define status interface

public interface State {
    /**
     * Change status
     * @param order order
     */
    void doAction(Order order);

    /**
     * Execute behavior
     */
    void execute();
}

2. Processing order payment status

@Component("sendMsgBehavior")
public class SendMsgBehavior implements State {

    @Override
    public void doAction(Order order) {
        System.out.println("Order payment");
        order.setState(this);
    }

    @Override
    public void execute() {
        System.out.println("The order is now paid, so the merchant must be notified to ship!");
    }
}

3. Processing order cancellation status

@Component("resetStoreBehavior")
public class ResetStoreBehavior implements State {

    @Override
    public void doAction(Order order) {
        System.out.println("Order cancellation");
        order.setState(this);
    }

    @Override
    public void execute() {
        System.out.println("Cancel the order and roll back the inventory!");
        System.out.println("Order cancelled, executing refund!");
    }
}

4. Order status usage

(The original image illustrating order status usage failed to transfer and is unavailable.)
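Since the original usage image is unavailable, here is a minimal, self-contained sketch of how the state objects might drive an order (the `Order` class below is a hypothetical stand-in with just a `state` field):

```java
// minimal stand-ins for the article's State and Order types
interface State {
    void doAction(Order order);
    void execute();
}

class Order {
    private State state;
    public void setState(State state) { this.state = state; }
    public State getState() { return state; }
}

class SendMsgBehavior implements State {
    @Override public void doAction(Order order) { order.setState(this); }
    @Override public void execute() { System.out.println("paid: notify the merchant to ship"); }
}

class ResetStoreBehavior implements State {
    @Override public void doAction(Order order) { order.setState(this); }
    @Override public void execute() { System.out.println("cancelled: roll back inventory and refund"); }
}

public class StateDemo {
    public static void main(String[] args) {
        Order order = new Order();
        new SendMsgBehavior().doAction(order);    // order becomes "paid"
        order.getState().execute();               // the paid state decides the action
        new ResetStoreBehavior().doAction(order); // order becomes "cancelled"
        order.getState().execute();               // the cancelled state decides the action
    }
}
```

The caller never branches on the status itself; it simply asks the current state object to act.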

Mode advantages

  • Avoids long if/else chains
  • Encapsulates state-specific behavior well

Mode disadvantages

  • The number of classes grows as states multiply
  • It is not entirely friendly to the open/closed principle: adding a state means modifying the classes that reference it

Open source framework

Open source frameworks provide a more powerful extension of this idea, the state machine, e.g. Spring Statemachine

7: Adapter mode

Schema definition

Convert the interface of a class into another interface the client expects

Mode essence

Conversion matching, multiplexing function

Usage scenario

  • You want to use an existing class, but its interface doesn't meet your requirements
  • You want to create a reusable class that can cooperate with otherwise incompatible classes

Business scenario

When integrating third-party payments, a system usually supports multiple payment channels, such as Alipay, WeChat Pay and Baidu Pay. Since their interfaces and parameters may be incompatible, we can use adapters to convert each channel and expose a uniform interface, shielding callers from the specific payment channel.

Function description

Pay according to different payment channels

Code implementation

1. Define payment channel interface

/**
 * @Class PayChannel
 * @Author hpjia.abcft & to be a low profile architect
 * @CreateDate 2021-08-01 20:46
 * @describe
 * @Version 1.0
 */
public interface PayChannel {

    void pay(Order order);
}

2. Realization of various payment channels

/**
 * @author hpjia.abcft & to be a low profile architect
 * @version 1.0
 * @date 2021-08-01 20:49
 * @describe
 */
public class BaiDuPayChannel implements PayChannel {
    @Override
    public void pay(Order order) {
        System.out.println("use bai du to pay");
    }
}


/**
 * @author hpjia.abcft & to be a low profile architect
 * @version 1.0
 * @date 2021-08-01 20:49
 * @describe
 */
public class WeiXiPayChannel implements PayChannel {
    @Override
    public void pay(Order order) {
        System.out.println("use wei xin to pay");
    }
}

/**
 * @author hpjia.abcft & to be a low profile architect
 * @version 1.0
 * @date 2021-08-01 20:48
 * @describe
 */
public class ZhiFuBaoPayChannel implements PayChannel {
    @Override
    public void pay(Order order) {
        System.out.println("use zhi fu bao to pay");
    }
}

3. Define payment adaptation interface

/**
 * @Class PayChannelAdapter
 * @Author hpjia.abcft & to be a low profile architect
 * @CreateDate 2021-08-01 20:52
 * @describe
 * @Version 1.0
 */
public interface PayChannelAdapter {
    PayChannel getPayChannel();
}

4. Payment adaptation interface and its implementation

public class BaiDuPayChannelAdapter implements PayChannelAdapter {
    @Override
    public PayChannel getPayChannel() {
        return new BaiDuPayChannel();
    }
}


public class WeiXiPayChannelAdapter implements PayChannelAdapter {
    @Override
    public PayChannel getPayChannel() {
        return new WeiXiPayChannel();
    }
}

public class ZhiFuBaoPayChannelAdapter implements PayChannelAdapter {
    @Override
    public PayChannel getPayChannel() {
        return new ZhiFuBaoPayChannel();
    }
}

5. Payment channel factory

/**
 * @author hpjia.abcft & to be a low profile architect
 * @version 1.0
 * @date 2021-08-01 20:50
 * @describe
 */
public class PayChannelFactory {

    public static PayChannel getPayChannel(String channelType){
        return newInstance(channelType).getPayChannel();
    }


    private static PayChannelAdapter newInstance(String channelType){
        PayChannelAdapter payChannelAdapter = null;
        switch (channelType){
            case "zhifubao":
                payChannelAdapter = new ZhiFuBaoPayChannelAdapter();
                break;

            case "weixin":
                payChannelAdapter = new WeiXiPayChannelAdapter();
                break;
            case "baidu":
                payChannelAdapter = new BaiDuPayChannelAdapter();
                break;
            default:
                payChannelAdapter = new ZhiFuBaoPayChannelAdapter();
                break;
        }
        return payChannelAdapter;
    }
}
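As a design note, the switch in the factory can also be replaced by a registration map, which keeps the factory closed against modification as channels are added. A minimal self-contained sketch (types simplified; `pay` returns a string here only to make the result visible):

```java
import java.util.Map;

class Order { String id; Order(String id) { this.id = id; } }

interface PayChannel { String pay(Order order); }
interface PayChannelAdapter { PayChannel getPayChannel(); }

class ZhiFuBaoPayChannel implements PayChannel {
    public String pay(Order order) { return "zhifubao:" + order.id; }
}
class WeiXinPayChannel implements PayChannel {
    public String pay(Order order) { return "weixin:" + order.id; }
}

class ZhiFuBaoAdapter implements PayChannelAdapter {
    public PayChannel getPayChannel() { return new ZhiFuBaoPayChannel(); }
}
class WeiXinAdapter implements PayChannelAdapter {
    public PayChannel getPayChannel() { return new WeiXinPayChannel(); }
}

public class PayDemo {
    // a map keeps the factory free of switch statements as channels grow
    static final Map<String, PayChannelAdapter> ADAPTERS = Map.of(
            "zhifubao", new ZhiFuBaoAdapter(),
            "weixin", new WeiXinAdapter());

    static String pay(String channelType, Order order) {
        // fall back to zhifubao, mirroring the factory's default branch
        return ADAPTERS.getOrDefault(channelType, new ZhiFuBaoAdapter())
                .getPayChannel().pay(order);
    }

    public static void main(String[] args) {
        System.out.println(pay("weixin", new Order("o-1")));   // weixin:o-1
        System.out.println(pay("unknown", new Order("o-2")));  // zhifubao:o-2
    }
}
```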

Mode advantages

  • Better reusability
  • Better scalability

Mode disadvantages

  • Excessive use of adapters will make the whole system messy

Open source framework

  • Dubbo: the implementation of logger uses a typical adapter, LoggerAdapter

8: Chain of responsibility mode

Schema definition

Every handler on the chain gets an opportunity to process the request

Mode essence

Decouple the processing logic from the concrete handlers

Usage scenario

  • Approval process: leave approval & Financial approval

Business scenario

I have seen many examples that use an approval workflow to illustrate the chain of responsibility, but in practice we rarely implement it that way, because most enterprises and systems that need approval use an open source workflow engine instead. Let me share how we used this pattern to solve a real problem.

Business background

At the time, our main business requirement was to verify data against a given set of rules (the real situation is very complex). In the early stage the code was hard for developers to work with (more than 3000 lines in a single class). Later, to respond quickly to changing business needs, I went through the code, untangled it and refactored it. In doing so I found that the verification flow passes through: loading rules -> rule classification -> error rule processing -> waiting rule processing -> rule data volume detection -> executing the correct rules.

More processing steps will be added in the future, and different data verification requirements need different subsets of these steps.

Function description

Assemble different data processing steps to realize different data verification processing

code implementation

1. Interceptor interface for abstract data processing

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 */
public interface Interceptor {
    /**
     * do interceptor something,about may classify,or other
     * @param context context
     * @param entry entry
     */
    void interceptor(InterceptorContext context,InterceptorEntry entry);

    /**
     * next interceptor something,about may classify,or other
     * @param context context
     * @param entry entry
     */
    void nextInterceptor(InterceptorContext context,InterceptorEntry entry);
}

2. Define the parameters required for storing data processing

package com.abcft.phoenix.paas.verify.execution.interceptor;

import com.abcft.pass.phoneix.cache.DataCacheFacade;
import com.abcft.phoenix.paas.verify.common.dict.DataDictClient;
import com.abcft.phoenix.paas.verify.domain.vo.ToolKey;
import com.abcft.phoenix.paas.verify.execution.executor.impl.ExecutorFactory;
import com.abcft.phoenix.paas.verify.execution.send.SendMessageCmp;
import com.abcft.phoenix.paas.verify.execution.trigger.cmp.PreTriggerCheck;
import com.abcft.phoenix.paas.verify.loader.loader.VerificationRuleLoader;
import lombok.Data;


/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 */
@Data
public class InterceptorEntry {
    /**
     * current table id
     */
    private String currentTableId;

    /**
     * current table name
     */
    private String currentTableName;

    private ToolKey toolKey;
    /**
     * dataCacheFacade
     */
    private DataCacheFacade dataCacheFacade;

    /**
     * preTriggerCheck
     */
    private PreTriggerCheck preTriggerCheck;

    /**
     * sendMessageCmp
     */
    private SendMessageCmp sendMessageCmp;

    /**
     * verificationRuleLoader
     */
    private VerificationRuleLoader verificationRuleLoader;

    /**
     * executorFactory
     */
    private ExecutorFactory executorFactory;
    /**
     * data dict
     */
    private DataDictClient dataDictClient;
    /**
     * rule search max count
     */
    private Integer searchMaxCount;


}

3. Define the results of each interception process

package com.abcft.phoenix.paas.verify.execution.interceptor.context;

import com.abcft.phoenix.paas.verify.domain.dto.SqlRule;
import lombok.Data;

import java.util.List;

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 */
@Data
public class InterceptorContext {
    private List<SqlRule> executableRules;
    private List<SqlRule> errorRules;
    private List<SqlRule> waitingRules;
    private Long startTime;
    private List<SqlRule> sqlRules;
}

4. Define the abstract implementation of data processing interceptor

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 */
public abstract class AbstractInterceptor implements Interceptor {

    private AbstractInterceptor next = null;


    /**
     * next interceptor something,about may classify,or other
     *
     * @param context context
     * @param entry   entry
     */
    @Override
    public void nextInterceptor(InterceptorContext context, InterceptorEntry entry) {
        if (null!=next){
            next.interceptor(context,entry);
        }
    }

    public AbstractInterceptor getNext() {
        return next;
    }

    public void setNext(AbstractInterceptor next) {
        this.next = next;
    }
}

5. Define the interceptor implementation corresponding to various data processing

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 */
@Slf4j
public class LoadAllRuleInterceptor extends AbstractInterceptor {
    /**
     * do interceptor something,about may classify,or other
     *
     * @param context context
     * @param entry   entry
     */
    @Override
    public void interceptor(InterceptorContext context, InterceptorEntry entry) {
        log.info("do all rule interceptor...");
        //send success message to queue
        ToolKey toolKey = entry.getToolKey();
        String tableId = toolKey.getTableId();
        String ruleSetId = toolKey.getRuleSetId();
        VerificationRuleLoader verificationRuleLoader = entry.getVerificationRuleLoader();
        List<SqlRule> allRules = getAllRules(tableId, ruleSetId, verificationRuleLoader);
        if (allRules==null || allRules.size()==0){
            this.sendMessage(entry.getSendMessageCmp(), StatusCode.SUCCESS_NO_RULE.getCode(),"Current table ID by["+tableId+"],No rules configured",context.getStartTime(),entry.getToolKey().getMessageContext());
            return;
        }
        else {
            log.info("load rule size:{}",allRules.size());
            context.setSqlRules(allRules);
            nextInterceptor(context,entry);
        }

    }
}


/**
 * @author hpjia.abcft
 * @version 1.0
 * @description rule classify interceptor (executable rule, error rule, waiting rule)
 */
@Slf4j
public class RuleClassifyInterceptor extends AbstractInterceptor {

    private static final String ERROR_FLAG = "error";



    /**
     * do interceptor something,about may classify,or other
     *
     * @param context context
     * @param entry   entry
     */
    @Override
    public void interceptor(InterceptorContext context, InterceptorEntry entry) {
        log.info("do rule classify interceptor...");
        Objects.requireNonNull(entry, "verification executable rule interceptor param is null");
        Long startTime = System.currentTimeMillis();
        context.setStartTime(startTime);
        // executable rules
        List<SqlRule> executableRules = Lists.newArrayList();
        String currentTableName = entry.getCurrentTableName();
        if (StringUtils.isBlank(currentTableName)) {
            throw new NullPointerException("current table name is null");
        }
        List<SqlRule> sqlRules = context.getSqlRules();
        // error rules
        List<SqlRule> errorRules = Lists.newArrayList();
        // wait rules
        List<SqlRule> waitRules = Lists.newArrayList();

        for (SqlRule sqlRule : sqlRules) {
            String ruleSql = sqlRule.getRuleSql();
            int tableSize = ParserSqlUtils.getTableSize(ruleSql);
            if (tableSize == 0) {
                errorRules.add(sqlRule);
            }
            // representative current table no relation table,can immediately verification
            else if (tableSize == 1) {
                executableRules.add(sqlRule);
            }
            // representative contain relation table
            else {
                validationMultiExecutableRule(entry, executableRules, waitRules, sqlRule, ruleSql);
            }
        }

        if (CollectionUtils.isNotEmpty(errorRules)) {
            log.info("get error rules size:{},table name:{}", errorRules.size(), currentTableName);
            context.setErrorRules(errorRules);
        }

        if (CollectionUtils.isNotEmpty(executableRules)) {
            log.info("get executable rules size:{},table name:{}", executableRules.size(), currentTableName);
            context.setExecutableRules(executableRules);
        }

        if (CollectionUtils.isNotEmpty(waitRules)) {
            log.info("get waiting rules size:{},table name:{}", waitRules.size(), currentTableName);
            context.setWaitingRules(waitRules);
        }

        nextInterceptor(context, entry);
    }
}


@Slf4j
public class ErrorRuleInterceptor extends AbstractInterceptor{


    @Override
    public void interceptor(InterceptorContext context, InterceptorEntry entry) {
        log.info("do error rule interceptor");
        List<SqlRule> errorRules = context.getErrorRules();
        if (CollectionUtils.isNotEmpty(errorRules)){
            List<DrvOutput> drvOutputs = Lists.newLinkedList();
            SqlVerificationResult.Builder builder = SqlVerificationResult.newBuilder();
            for (SqlRule sqlRule:errorRules){
                DrvOutput.Builder drvOutputBuilder = DrvOutput.newBuilder();
                drvOutputBuilder.setRuleFlag(true);
                drvOutputBuilder.setRuleName(sqlRule.getRuleName());
                drvOutputBuilder.setRuleId(sqlRule.getRuleId());
                drvOutputs.add(drvOutputBuilder.build());
            }
            builder.addAllDrvOutputs(drvOutputs);
            SqlVerificationResult build = builder.build();
            String orderId = entry.getToolKey().getOrderId();
            String dataKey = entry.getToolKey().getOutputKey();
            boolean b = entry.getDataCacheFacade().setDataContentToCache(orderId, dataKey,build);
            log.info("generate rule configuration error,order id:{},data key:{}",orderId,dataKey);
            if (b){
                sendMessage(entry.getSendMessageCmp(),VerificationConstant.HDPU_CODE_RULE,"The current rule is incorrect,Business personnel verification rules are required",context.getStartTime(),entry.getToolKey().getMessageContext());
            }
            else{
                throw new RuntimeException("Failed to write error rule to cache system");
            }
        }
        else {
            log.info("no error rule interceptor...");
            nextInterceptor(context,entry);
        }
    }

}


public class WaitingRuleInterceptor extends AbstractInterceptor {
    /**
     * do interceptor something,about may classify,or other
     *
     * @param context context
     * @param entry   entry
     */
    @Override
    public void interceptor(InterceptorContext context, InterceptorEntry entry) {
        log.info("do waiting rule interceptor...");
        List<SqlRule> waitingRules = context.getWaitingRules();
        List<SqlRule> executableRules = context.getExecutableRules();
        ToolKey toolKey = entry.getToolKey();
        if (null== waitingRules || waitingRules.size()==0){
            toolKey.setTableExecStatus(VerificationConstant.IMMEDIATE_EXECUTION);
            log.warn("waiting rule is null,need not interceptor");
            entry.setToolKey(toolKey);
            nextInterceptor(context,entry);
        }
        else {
            //save waiting rule to db
            saveWaitingRules(waitingRules, entry);
            //no executable rules,save waiting status to message queue
            if (null==executableRules || executableRules.size()==0){
                sendMessage(entry.getSendMessageCmp(), VerificationConstant.WAITING_RUNNING_CODE,"Current data table["+waitingRules.get(0).getTableName()+"],The rules do not satisfy the execution mechanism,Pending",context.getStartTime(),entry.getToolKey().getMessageContext());
                return;
            }
            else {
                //Contains partially executable rules,next interceptor
                toolKey.setTableExecStatus(VerificationConstant.PARTIAL_RULE_EXECUTED);
                entry.setToolKey(toolKey);
                nextInterceptor(context, entry);
            }
        }

    }
}


public class ExecutableRuleInterceptor extends AbstractInterceptor {
    /**
     * do interceptor something,about may classify,or other
     *
     * @param context context
     * @param entry   entry
     */
    @Override
    public void interceptor(InterceptorContext context, InterceptorEntry entry) {
        log.info("do executable rule interceptor...");
        List<SqlRule> executableRules = context.getExecutableRules();
        if (CollectionUtils.isNotEmpty(executableRules)){
            log.info("executable rule size :{}",executableRules.size());
            //modify or verification + ignore,for a rule
            Type type = getExecutorType(entry.getToolKey(),entry.getDataCacheFacade());
            log.info("execution all type is:{}", JSON.toJSONString(type));
            // get execution instance
            Executor executorInstance = entry.getExecutorFactory().getExecutorInstance(type.getExecutorType());
            log.info("executor instance is :{}",executorInstance.getClass().getSimpleName());
            // set execution param
            ExecutionParam executionParam = setExecutionParam(entry.getToolKey().getRuleSetId(), entry.getToolKey(), entry.getCurrentTableId(), type);
            executionParam.setSqlRules(context.getExecutableRules());
            // call method execution
            executorInstance.execution(executionParam);
        }
        else {
            log.info("no any executable rule exec...");
        }
    }
}


public class RuleIndexInterceptor extends AbstractInterceptor {
    /**
     * do interceptor something,about may classify,or other
     *
     * @param context context
     * @param entry   entry
     */
    @Override
    public void interceptor(InterceptorContext context, InterceptorEntry entry) {
        log.info("do rule index interceptor");
    }
}


public class RuleSearchCountInterceptor extends AbstractInterceptor {

    /**
     * do interceptor something,about may classify,or other
     * <pre>
     *     1. rule search count
     * </pre>
     * @param context context
     * @param entry   entry
     */
    @Override
    public void interceptor(InterceptorContext context, InterceptorEntry entry) {
        log.info("do rule search count interceptor...");
        List<SqlRule> executableRules = context.getExecutableRules();
        if (CollectionUtils.isNotEmpty(executableRules)){
            Long databaseId = executableRules.get(0).getDatabaseId();
            DataDictClient dataDictClient = entry.getDataDictClient();
            DictDbInfo dataBaseInfo = dataDictClient.getDataBaseInfo(databaseId);
            // check each rule for search count
            Map<Integer,SearchRuleResult> searchRuleResultMap = Maps.newHashMap();
            Integer searchMaxCount = entry.getSearchMaxCount();
            for (SqlRule sqlRule : executableRules){
                String ruleSql = sqlRule.getRuleSql();
                try {
                    List<Map<String, Object>> mapList = DbOperationTemplate.queryForList(dataBaseInfo, ruleSql);
                    if (CollectionUtils.isNotEmpty(mapList)){
                        if (mapList.size()>searchMaxCount){
                            log.info("current rule id:{},search count:{},super max count:{}",sqlRule.getRuleId(),mapList.size(),searchMaxCount);
                            SearchRuleResult.SearchRuleResultBuilder searchRuleResultBuilder = SearchRuleResult.builder().ruleId(sqlRule.getRuleId()).ruleName(sqlRule.getRuleName()).tableName(sqlRule.getTableName()).searchCount(mapList.size());
                            searchRuleResultMap.put(sqlRule.getRuleId(),searchRuleResultBuilder.build());
                        }
                    }

                } catch (Exception e) {
                    log.error("search rule count occur exception",e);
                }
            }
            // ... (further handling of searchRuleResultMap is truncated in the source)
        }
        // hand control to the next interceptor, consistent with the other steps
        nextInterceptor(context, entry);
    }
}
6. Define interceptor chain and default implementation

public abstract class InterceptorChain extends AbstractInterceptor {
    /**
     * use linked list add interceptor
     * @param abstractInterceptor  abstractInterceptor
     */
    public abstract void addLast(AbstractInterceptor abstractInterceptor);
}


public class DefaultInterceptorChain extends InterceptorChain {

    AbstractInterceptor first = new AbstractInterceptor() {
        @Override
        public void interceptor(InterceptorContext context, InterceptorEntry entry) {
            super.nextInterceptor(context,entry);
        }
    };

    AbstractInterceptor end = first;

    @Override
    public void addLast(AbstractInterceptor abstractInterceptor) {
        end.setNext(abstractInterceptor);
        end = abstractInterceptor;
    }

    /**
     * do interceptor something,about may classify,or other
     *
     * @param context context
     * @param entry   entry
     */
    @Override
    public void interceptor(InterceptorContext context, InterceptorEntry entry) {
        first.interceptor(context,entry);
    }

    @Override
    public void setNext(AbstractInterceptor next) {
        addLast(next);
    }

    @Override
    public AbstractInterceptor getNext() {
        return first.getNext();
    }

}

7. Implement the builder of specific interceptor chain

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 */
public interface InterceptorChainBuilder {
    /**
     * builder interceptor interceptor chain
     * @return InterceptorChain
     */
    InterceptorChain builder();
}



/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 */
public final class DefaultInterceptorChainBuilder implements InterceptorChainBuilder {

    /**
     * builder interceptor interceptor chain
     *
     * @return InterceptorChain
     */
    @Override
    public InterceptorChain builder() {
        InterceptorChain interceptorChain = new DefaultInterceptorChain();
        interceptorChain.addLast(new LoadAllRuleInterceptor());
        interceptorChain.addLast(new RuleClassifyInterceptor());
        interceptorChain.addLast(new ErrorRuleInterceptor());
        interceptorChain.addLast(new WaitingRuleInterceptor());
        //interceptorChain.addLast(new RuleSearchCountInterceptor());
        interceptorChain.addLast(new ExecutableRuleInterceptor());
        return interceptorChain;
    }
}


/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 */
public class WaitExecutableInterceptorChainBuilder implements InterceptorChainBuilder {
    /**
     * builder interceptor interceptor chain
     *
     * @return InterceptorChain
     */
    @Override
    public InterceptorChain builder() {
        InterceptorChain interceptorChain = new DefaultInterceptorChain();
        interceptorChain.addLast(new ExecutableRuleInterceptor());
        return interceptorChain;
    }
}

8. Encapsulate a provider for the interceptor chain

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date
 * @description
 */
@Slf4j
public class InterceptorChainProvider {

    private InterceptorChainProvider(){}

    private static volatile InterceptorChain chain = null;

    private static final Object LOCK = new Object();

    /**
     * Lazily create the singleton interceptor chain (double-checked locking)
     * @return InterceptorChain
     */
    public static InterceptorChain newInterceptorChain(InterceptorChainBuilder builder){
        if (chain==null){
            synchronized (LOCK){
                if (chain==null){
                    if (builder==null){
                        log.warn("current builder is null,so use default builder");
                        builder = new DefaultInterceptorChainBuilder();
                    }
                    chain = builder.builder();
                }
            }
        }
        log.debug("using interceptor chain:{}",chain);
        return chain;
    }
}

9. Chain use

            // 1. This case may be a data update operation
            // 2. Or a data ignore operation
            // 3. Or pure data validation
            // 4. The execution order is update -> check -> ignore (after the update,
            //    check immediately; if any data fails verification, ignore it again)
            // 4.1 Extra work can be done before and after verification
            // 5. The cases are distinguished by execution type
            // Start processing: build the chain and execute it

            InterceptorChain interceptorChain = InterceptorChainProvider.newInterceptorChain(null);
            // builder entry
            InterceptorEntry entry = setInterceptorEntry(toolKey, tableId);
            entry.setSearchMaxCount(searchMaxCount);
            entry.setCurrentTableName(tableName);
            InterceptorContext context = new InterceptorContext();
            context.setStartTime(System.currentTimeMillis());
            // execute interceptor chain
            interceptorChain.interceptor(context,entry);
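Boiled down to its data structure, the chain above is a linked list walked from a no-op head node. The following is a self-contained sketch of that mechanism (class and interceptor names are shortened, illustrative versions of the ones above, with a `StringBuilder` standing in for the context):

```java
public class ChainDemo {

    // Mirrors AbstractInterceptor above: each node does its work, then delegates to next
    static abstract class AbstractInterceptor {
        private AbstractInterceptor next;

        void setNext(AbstractInterceptor next) { this.next = next; }

        void nextInterceptor(StringBuilder ctx) {
            if (next != null) {
                next.interceptor(ctx);
            }
        }

        abstract void interceptor(StringBuilder ctx);
    }

    // Mirrors DefaultInterceptorChain: a no-op head node plus a tail pointer for addLast
    static class Chain extends AbstractInterceptor {
        private final AbstractInterceptor first = new AbstractInterceptor() {
            @Override
            void interceptor(StringBuilder ctx) { nextInterceptor(ctx); }
        };
        private AbstractInterceptor end = first;

        void addLast(AbstractInterceptor interceptor) {
            end.setNext(interceptor);
            end = interceptor;
        }

        @Override
        void interceptor(StringBuilder ctx) { first.interceptor(ctx); }
    }

    static AbstractInterceptor named(String name) {
        return new AbstractInterceptor() {
            @Override
            void interceptor(StringBuilder ctx) {
                ctx.append(name).append(';'); // this node's own work
                nextInterceptor(ctx);         // then hand over to the next node
            }
        };
    }

    static String run() {
        Chain chain = new Chain();
        chain.addLast(named("loadRules"));
        chain.addLast(named("classify"));
        chain.addLast(named("execute"));
        StringBuilder ctx = new StringBuilder();
        chain.interceptor(ctx);
        return ctx.toString();
    }

    public static void main(String[] args) {
        System.out.println(run()); // loadRules;classify;execute;
    }
}
```

Note how `addLast` only moves the tail pointer, so building the chain is O(1) per interceptor, exactly as in `DefaultInterceptorChain`.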

Mode advantages

  • The chain of responsibility pattern gives each handler a single responsibility, reduces coupling, and makes object relationships clearer. External callers do not need to know how the chain is assembled or processed *(in the program above, the assembled chain can be packaged and handed to external users)*

Mode disadvantages

  • A long chain can hurt performance, the dispatch order can be hard to follow, and debugging and testing are easy to get wrong

Open source framework

  • Sentinel: Alibaba's open-source circuit-breaking and rate-limiting framework. Its slot chain is a good use of this pattern: it handles the core processing link and also provides a clean extension point

9: Chain mode - filter chain

Schema definition

Before executing the core business logic, execute a series of related filters, which are chained together to form a filter chain.

Mode essence

Separation of filtering function and core business function

Usage scenario

  • Before executing the core logic, other filtering functions (permissions, blacklists/whitelists, rate limiting, etc.) must run first

Business scenario

Here we share several common cases, such as filtering illegal characters and adding other interception functions. There are two styles of filter interface: one with a return value and one without:

code implementation

With a return value

public interface Filter {                                                           
    /**                                                                             
     * Make sure call invoker.invoke() in your implementation.                      
     */                                                                             
    Result invoke(Invoker<?> invoker, Invocation invocation) throws RpcException;   
}

Without a return value

public interface Filter {
	void doFilter(Request request, Response response, FilterChain chain);
}

Specific cases

In this case, illegal characters are filtered through filter

1. Define filter interface

package filter;

import filter.chain.FilterChain;

public interface Filter {
	void doFilter(Request request, Response response, FilterChain chain);
}

2. Define various filter implementations

package filter.impl;

import filter.Filter;
import filter.Request;
import filter.Response;
import filter.chain.FilterChain;

public class HTMLFilter implements Filter {
	
	@Override
	public void doFilter(Request request, Response response, FilterChain chain) {
		//process the html tag <>
		request.requestStr = request.requestStr.replace('<', '[')
				   .replace('>', ']') + "---HTMLFilter()";
		chain.doFilter(request, response, chain);
		response.responseStr += "---HTMLFilter()";
	}
}


package filter.impl;

import filter.Filter;
import filter.Request;
import filter.Response;
import filter.chain.FilterChain;

public class SensitiveFilter implements Filter {
	@Override
	public void doFilter(Request request, Response response, FilterChain chain) {
		// The concrete sensitive words were lost in extraction; substitute real
		// terms here, e.g. request.requestStr.replace("badword", "***")
		request.requestStr = request.requestStr + "---SensitiveFilter()";
		chain.doFilter(request, response, chain);
		response.responseStr += "---SensitiveFilter()";
	}
}

3. Define the request of the filter

package filter;

public class Request {
	public String requestStr;
	public String getRequestStr() {
		return requestStr;
	}
	public void setRequestStr(String requestStr) {
		this.requestStr = requestStr;
	}
}

4. Define the response of the filter

package filter;

public class Response {
	public String responseStr;
	public String getResponseStr() {
		return responseStr;
	}
	public void setResponseStr(String responseStr) {
		this.responseStr = responseStr;
	}
	
}

5. Define filter chain

package filter.chain;

import filter.Filter;
import filter.Request;
import filter.Response;

import java.util.ArrayList;
import java.util.List;

public class FilterChain implements Filter {
	List<Filter> filters = new ArrayList<Filter>();
	int index = 0;
	
	public FilterChain addFilter(Filter f) {
		this.filters.add(f);
		return this;
	}
	
	@Override
	public void doFilter(Request request, Response response, FilterChain chain) {
		if(index == filters.size()) return ;
		
		Filter f = filters.get(index);
		index ++;
		f.doFilter(request, response, chain);
	}
}
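A short self-contained driver (a sketch that condenses the classes above into one file; the lambdas stand in for `HTMLFilter` and `SensitiveFilter`) shows how the chain is assembled and that the response post-processing runs in reverse order:

```java
import java.util.ArrayList;
import java.util.List;

// Single-file condensation of the filter-chain case above.
public class FilterChainDemo {

    interface Filter {
        void doFilter(StringBuilder request, StringBuilder response, FilterChain chain);
    }

    static class FilterChain implements Filter {
        private final List<Filter> filters = new ArrayList<>();
        private int index = 0;

        FilterChain addFilter(Filter f) {
            filters.add(f);
            return this;
        }

        @Override
        public void doFilter(StringBuilder request, StringBuilder response, FilterChain chain) {
            if (index == filters.size()) {
                return;
            }
            Filter f = filters.get(index++);
            f.doFilter(request, response, chain);
        }
    }

    static String run(String input) {
        StringBuilder request = new StringBuilder(input);
        StringBuilder response = new StringBuilder();
        FilterChain chain = new FilterChain()
                // HTML filter: escape angle brackets on the way in, tag the response on the way back
                .addFilter((req, resp, c) -> {
                    String escaped = req.toString().replace('<', '[').replace('>', ']');
                    req.setLength(0);
                    req.append(escaped).append("---HTMLFilter()");
                    c.doFilter(req, resp, c);
                    resp.append("---HTMLFilter()");
                })
                // Sensitive-word filter: same shape; response tags come out in reverse order
                .addFilter((req, resp, c) -> {
                    req.append("---SensitiveFilter()");
                    c.doFilter(req, resp, c);
                    resp.append("---SensitiveFilter()");
                });
        chain.doFilter(request, response, chain);
        return request + " | " + response;
    }

    public static void main(String[] args) {
        System.out.println(run("<a>"));
    }
}
```

The request is transformed on the way down the chain, while the response is decorated on the way back up, which is exactly the servlet `FilterChain` behavior.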

Mode advantages

  • Cleanly separates the filtering functions from the core business function, so that non-core logic can be extended easily

Mode disadvantages

  • Too many filtering functions will lead to class expansion, which indirectly increases the complexity of the system

Open source framework

  • Servlet: provides the implementation of Filter
  • Dubbo: a series of non-core functions are implemented through Filter

10: Plug in mode

Schema definition

Provide good extensibility through a pluggable or extensible mechanism

Mode essence

Flexible extension mechanism

Implementation form

For this pattern, the core idea is being pluggable and extensible. There are two common implementation forms: one based on the SPI mechanism and one based on a Plugin. Which one to choose depends on the business scenario; don't be dogmatic about the form

Plugin

1. Define general interceptors

public interface Interceptor {

  Object intercept(Invocation invocation) throws Throwable;

  default Object plugin(Object target) {
    return Plugin.wrap(target, this);
  }

  default void setProperties(Properties properties) {
    // NOP
  }

}

2. Implement log interceptor

@Intercepts({@Signature(type = Executor.class, method = "query", args = {String.class})})
public class LoggingInterceptor implements Interceptor {
    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        Method method = invocation.getMethod();
        Object[] args = invocation.getArgs();
        Object target = invocation.getTarget();
        System.out.println("method:" + method.getName() + ",args:" + java.util.Arrays.toString(args));
        return invocation.proceed();
    }

    @Override
    public void setProperties(Properties properties) {

    }

    @Override
    public Object plugin(Object target) {
        if (target instanceof Executor) {
            return Plugin.wrap(target, this);
        }
        return target;
    }
}

3. Define plug-ins

package interceptor;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;


/**
 * @author Clinton Begin
 */
public class Plugin implements InvocationHandler {

  private final Object target;
  private final Interceptor interceptor;
  private final Map<Class<?>, Set<Method>> signatureMap;

  private Plugin(Object target, Interceptor interceptor, Map<Class<?>, Set<Method>> signatureMap) {
    this.target = target;
    this.interceptor = interceptor;
    this.signatureMap = signatureMap;
  }

  public static Object wrap(Object target, Interceptor interceptor) {
    Map<Class<?>, Set<Method>> signatureMap = getSignatureMap(interceptor);
    Class<?> type = target.getClass();
    Class<?>[] interfaces = getAllInterfaces(type, signatureMap);
    if (interfaces.length > 0) {
      return Proxy.newProxyInstance(
          type.getClassLoader(),
          interfaces,
          new Plugin(target, interceptor, signatureMap));
    }
    return target;
  }

  @Override
  public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    try {
      Set<Method> methods = signatureMap.get(method.getDeclaringClass());
      if (methods != null && methods.contains(method)) {
        return interceptor.intercept(new Invocation(target, method, args));
      }
      return method.invoke(target, args);
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  private static Map<Class<?>, Set<Method>> getSignatureMap(Interceptor interceptor) {
    Intercepts interceptsAnnotation = interceptor.getClass().getAnnotation(Intercepts.class);
    // issue #251
    if (interceptsAnnotation == null) {
      throw new PluginException("No @Intercepts annotation was found in interceptor " + interceptor.getClass().getName());
    }
    Signature[] sigs = interceptsAnnotation.value();
    Map<Class<?>, Set<Method>> signatureMap = new HashMap<>();
    for (Signature sig : sigs) {
      Set<Method> methods = signatureMap.computeIfAbsent(sig.type(), k -> new HashSet<>());
      try {
        Method method = sig.type().getMethod(sig.method(), sig.args());
        methods.add(method);
      } catch (NoSuchMethodException e) {
        throw new PluginException("Could not find method on " + sig.type() + " named " + sig.method() + ". Cause: " + e, e);
      }
    }
    return signatureMap;
  }

  private static Class<?>[] getAllInterfaces(Class<?> type, Map<Class<?>, Set<Method>> signatureMap) {
    Set<Class<?>> interfaces = new HashSet<>();
    while (type != null) {
      for (Class<?> c : type.getInterfaces()) {
        if (signatureMap.containsKey(c)) {
          interfaces.add(c);
        }
      }
      type = type.getSuperclass();
    }
    return interfaces.toArray(new Class<?>[interfaces.size()]);
  }

}

4. Define the invocation wrapper

public class Invocation {

  private final Object target;
  private final Method method;
  private final Object[] args;

  public Invocation(Object target, Method method, Object[] args) {
    this.target = target;
    this.method = method;
    this.args = args;
  }

  public Object getTarget() {
    return target;
  }

  public Method getMethod() {
    return method;
  }

  public Object[] getArgs() {
    return args;
  }

  public Object proceed() throws InvocationTargetException, IllegalAccessException {
    return method.invoke(target, args);
  }

}

5. Define the interceptor chain

package interceptor;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class InterceptorChain {

  private final List<Interceptor> interceptors = new ArrayList<>();

  public Object pluginAll(Object target) {
    for (Interceptor interceptor : interceptors) {
      target = interceptor.plugin(target);
    }
    return target;
  }

  public void addInterceptor(Interceptor interceptor) {
    interceptors.add(interceptor);
  }

  public List<Interceptor> getInterceptors() {
    return Collections.unmodifiableList(interceptors);
  }

}

6. Define the related annotations

@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Intercepts {
  Signature[] value();
}

package interceptor;

import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({})
public @interface Signature {
  Class<?> type();

  String method();

  Class<?>[] args();
}

7. Use of plug-ins

Apply the plug-ins through the interceptor chain

        InterceptorChain interceptorChain = new InterceptorChain();
        interceptorChain.addInterceptor(new LoggingInterceptor());
        interceptorChain.addInterceptor(new TimeInterceptor());
        interceptorChain.addInterceptor(new SqlExplainInterceptor());

        Executor simpleExecutor = ExecutorFactory.getExecutor("simple", false);
        Executor  simpleInterceptorExecutor= (Executor)interceptorChain.pluginAll(simpleExecutor);
        simpleInterceptorExecutor.query("");
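Underneath, `Plugin.wrap` is just a JDK dynamic proxy. The following self-contained reduction shows the mechanism (the `Executor` interface here is a stand-in for illustration, not the MyBatis one):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Reduction of Plugin.wrap: hide the target behind a dynamic proxy and divert
// matching calls through an interception point.
public class PluginDemo {

    interface Executor {
        String query(String sql);
    }

    static class SimpleExecutor implements Executor {
        public String query(String sql) { return "rows for " + sql; }
    }

    static Executor wrap(Executor target) {
        InvocationHandler handler = (proxy, method, args) -> {
            if ("query".equals(method.getName())) {
                // interception point: log, rewrite SQL, time the call, ...
                return "[intercepted] " + method.invoke(target, args);
            }
            // non-matching methods go straight to the target
            return method.invoke(target, args);
        };
        return (Executor) Proxy.newProxyInstance(
                Executor.class.getClassLoader(),
                new Class<?>[]{Executor.class},
                handler);
    }

    static String run() {
        return wrap(new SimpleExecutor()).query("select 1");
    }

    public static void main(String[] args) {
        System.out.println(run()); // [intercepted] rows for select 1
    }
}
```

`Plugin.wrap` does the same thing, except the "matching" check is driven by the `@Intercepts`/`@Signature` annotations instead of a hard-coded method name.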

SPI

Advantages

Interface-oriented programming: decoupled and pluggable, well suited to component development and commercial frameworks

Disadvantages

All implementations are traversed and instantiated, which can waste resources; there is no flexible way to obtain one specific implementation by name; and ServiceLoader instances are not thread-safe.
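A minimal sketch of the JDK SPI lookup itself (the `GreetingProvider` extension point is illustrative; real providers are registered in a `META-INF/services/<interface FQCN>` file, and with none registered the loader simply yields nothing):

```java
import java.util.ServiceLoader;

// Minimal JDK SPI sketch: discover implementations of an interface at runtime.
public class SpiDemo {

    // Illustrative extension point
    public interface GreetingProvider {
        String greet();
    }

    public static String resolveGreeting() {
        ServiceLoader<GreetingProvider> loader = ServiceLoader.load(GreetingProvider.class);
        for (GreetingProvider provider : loader) {
            // First registered provider wins. This is the inflexibility noted above:
            // you cannot pick a provider by name, only iterate them all.
            return provider.greet();
        }
        // Fall back to a default when no provider is on the classpath
        return "default greeting";
    }

    public static void main(String[] args) {
        System.out.println(resolveGreeting());
    }
}
```

Dubbo's extension mechanism addresses exactly these weaknesses: its SPI variant can look up one named extension lazily instead of instantiating everything.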

Usage scenario

Expandable or pluggable features are required

Business scenario

  • A good extension point needs to be reserved: there is a default implementation, but also an extension interface
  • While executing the core logic, other functions can run alongside it and be plugged in or removed at any time

Mode advantages

  • Good compliance with design principles
  • Flexible scalability
  • Reduce the cost of system modification

Mode disadvantages

  • Indirectly, it increases the complexity of the system

Open source framework

  • Dubbo: one of the core mechanisms for extending the whole framework is SPI
  • ShardingJdbc: Registry & configuration center & encryption and decryption policies are implemented through SPI mechanism
  • JDBC: database drivers for MySQL, SQL Server, DB2 and Oracle are loaded through the SPI mechanism
  • Mybatis: paging, SQL rewriting and performance statistics are all provided through interceptors, and custom interceptors can add further functions; under the hood this is the plugin mechanism
  • Spring: makes heavy use of SPI, for example the ServletContainerInitializer from the Servlet 3.0 specification and the automatic type-conversion SPIs (Converter SPI, Formatter SPI)

11: Factory method mode

Schema definition

Define a factory interface for creating product objects, and defer the actual creation of product objects to concrete sub-factory classes. This satisfies the "separate creation from use" principle of creational patterns.

Mode essence

Defer to subclasses to select implementations

Usage scenario

  • The creation of objects is deferred to subclasses

Business scenario

After placing an order, the user needs to pay for it. The user selects a payment method according to their own situation, and the backend creates an instance of the corresponding payment channel based on that selection. The factory method pattern fits the creation of payment channel instances here.

  • Function description

For cashier cases, different payment channels are selected according to different payment methods

  • code implementation

1. Create payment channels and define payment behavior

public interface PayChannel {
    /***
     * payment
     */
    void pay();
}

2. Define the implementation of various payment channels

@Component("weixinPay")
public class WeixinPay implements PayChannel {
    @Override
    public void pay() {
        System.out.println("Wechat payment succeeded!");
    }
}


@Component("aliPay")
public class AliPay implements PayChannel {
    @Override
    public void pay() {
        System.out.println("Alipay paid successfully!");
    }
}

3. Realize payment factory

@Component
@ConfigurationProperties(prefix = "pay")
public class PayFactory implements ApplicationContextAware {

    // Spring container
    private static ApplicationContext applicationContext;

    // Payment key-value mapping, bound from configuration (requires the setter below)
    private Map<String, String> paymap;

    public void setPaymap(Map<String, String> paymap) {
        this.paymap = paymap;
    }

    /***
     * Create a payment channel: look up the bean name for the given key in paymap,
     * then obtain the channel instance from the applicationContext
     */
    public PayChannel createChannel(String key) {
        return applicationContext.getBean(paymap.get(key), PayChannel.class);
    }

    /***
     * Capture the container
     * @param applicationContext
     * @throws BeansException
     */
    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        PayFactory.applicationContext = applicationContext;
    }
}

4. Configure payment channels

pay:
  paymap: {"1":"weixinPay","2":"aliPay"}

5. Use the factory in the payment service

@Service
public class PayServiceImpl implements PayService {

    @Autowired
    private PayFactory payFactory;

    /***
     * payment
     * @param type payment type key ("1" = weixinPay, "2" = aliPay)
     * @param id   order id
     */
    @Override
    public void pay(String type, String id) {
        PayChannel payChannel = payFactory.createChannel(type);
        payChannel.pay();
    }
}
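Stripped of the Spring wiring, the same factory-method idea can be sketched in plain Java (the class names mirror the case above, but this code is illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Plain-Java sketch of the cashier case: the factory defers channel creation
// to registered suppliers, so the caller never touches a concrete class.
public class PayFactoryDemo {

    interface PayChannel {
        String pay();
    }

    static class WeixinPay implements PayChannel {
        public String pay() { return "Wechat payment succeeded!"; }
    }

    static class AliPay implements PayChannel {
        public String pay() { return "Alipay paid successfully!"; }
    }

    // Mirrors the pay.paymap configuration: key -> channel factory
    private static final Map<String, Supplier<PayChannel>> CHANNELS = new HashMap<>();
    static {
        CHANNELS.put("1", WeixinPay::new);
        CHANNELS.put("2", AliPay::new);
    }

    static PayChannel createChannel(String key) {
        Supplier<PayChannel> supplier = CHANNELS.get(key);
        if (supplier == null) {
            throw new IllegalArgumentException("Unknown pay type: " + key);
        }
        return supplier.get();
    }

    public static void main(String[] args) {
        System.out.println(createChannel("1").pay());
        System.out.println(createChannel("2").pay());
    }
}
```

In the Spring version, the `CHANNELS` map is replaced by the `paymap` configuration plus the application context, which plays the role of the sub-factory.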

Mode advantages

  • It can be programmed without knowing the specific implementation
  • Easier to extend new versions of objects
  • Connecting parallel class hierarchies

Mode disadvantages

  • Specific product objects and factory methods are coupled

Open source framework

  • Spring:BeanFactory is the most typical representative
  • Dubbo:ExtensionFactory

12: Pipeline mode

Schema definition

Pipeline Pattern is one of the common variants of the **Chain of Responsibility Pattern**. In the pipeline pattern, the pipeline carries data through a sequence of processing steps: after each step processes the data, the result is passed to the next step, until all steps have run.

In a pure chain of responsibility, only one handler in the chain ends up processing the data; in the pipeline pattern, every handler processes it

Mode essence

The data processing steps are reasonably separated, and then can be dynamically assembled

Usage scenario

When the task code is complex and needs to be split into multiple sub steps, especially when new sub steps may be added at any position, old sub steps may be deleted, and the sequence of sub steps may be exchanged, the pipeline mode can be considered.

Business scenario

At the beginning of the model platform, creating a model instance involved: **input data verification -> create the model instance from the input -> save the model instance to the related DB tables**, three steps in total, which was not complicated, so the code at that time looked roughly like this:

public class ModelServiceImpl implements ModelService {

    /**
     * Submit model (build model instance)
     */
    public CommonReponse<Long> buildModelInstance(InstanceBuildRequest request) {
        // Input data verification
        validateInput(request);
        // Create the model instance from the input
        ModelInstance instance = createModelInstance(request);
        // Save the instance to the related DB tables
        saveInstance(instance);
        // Wrap the new instance id (response construction was omitted in the original)
        return CommonReponse.success(instance.getId());
    }
}

However, before long, we found that the format of form input data did not fully meet the input requirements of the model, so we wanted to add "form data preprocessing". This function has not been started yet, and some business parties have proposed that they also need to process the data (for example, generate some other business data as model input according to the merchant's form input).

Therefore, after "input data verification", you also need to add "form input and output preprocessing" and "business party custom data processing (optional)"

In fact, you can use the pipeline mode here. Take a look at the code framework template

1. Create the pipeline context objects

package pipeline.one;

import java.time.LocalDateTime;

/**
 * Context passed to pipeline
 */

public class PipelineContext {

    /**
     * Processing start time
     */
    private LocalDateTime startTime;

    /**
     * Processing end time
     */
    private LocalDateTime endTime;

    public LocalDateTime getStartTime() {
        return startTime;
    }

    public void setStartTime(LocalDateTime startTime) {
        this.startTime = startTime;
    }

    public LocalDateTime getEndTime() {
        return endTime;
    }

    public void setEndTime(LocalDateTime endTime) {
        this.endTime = endTime;
    }

    /**
     * Get the context name
     */
    public String getName() {
        return this.getClass().getSimpleName();
    }
}



package pipeline.one;

import java.util.Map;

public class InstanceBuildContext extends PipelineContext {

    /**
     * Model id
     */
    private Long modelId;

    /**
     * User id
     */
    private long userId;

    /**
     * Form input
     */
    private Map<String, Object> formInput;

    public Map<String, Object> getFormInput() {
        return formInput;
    }

    /**
     * After saving the model instance, record the id
     */
    private Long instanceId;

    /**
     * Error message when creating model
     */
    private String errorMsg;

    // Other parameters

    @Override
    public String getName() {
        return "Model instance construction context";
    }


    public String getErrorMsg() {
        return errorMsg;
    }

    public void setErrorMsg(String errorMsg) {
        this.errorMsg = errorMsg;
    }
}

2. Create the context handler interface

package pipeline.one;

/**
 * @Class ContextHandler
 * @Author hpjia.abcft & to be a low profile architect
 * @CreateDate 2021-08-03 10:37
 * @describe Context handler
 * @Version 1.0
 */
public interface ContextHandler<T extends PipelineContext> {
    /**
     * Process input context data
     * @param context Context data when processing
     * @return If true is returned, the next ContextHandler will continue processing. If false is returned, the processing will end
     */
    boolean handle(T context);
}

3. Create the implementation of each context object processor

public class FormInputPreprocessor implements ContextHandler<InstanceBuildContext> {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public boolean handle(InstanceBuildContext context) {
        logger.info("--Form input data preprocessing--");

        // Pretending to preprocess form input data

        return true;
    }
}



public class InputDataPreChecker implements ContextHandler<InstanceBuildContext> {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public boolean handle(InstanceBuildContext context) {
        logger.info("--Input data verification--");

        Map<String, Object> formInput = context.getFormInput();
        // Demo only: record a message; a real checker would validate formInput
        context.setErrorMsg("error");
        return true;
    }
}


public class ModelInstanceCreator implements ContextHandler<InstanceBuildContext> {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public boolean handle(InstanceBuildContext context) {
        logger.info("--Create model instances based on input data--");

        // Pretend to create a model instance

        return true;
    }
}


public class ModelInstanceSaver implements ContextHandler<InstanceBuildContext> {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public boolean handle(InstanceBuildContext context) {
        logger.info("--Save model instances to related DB tables--");

        // Pretend to save model instances

        return true;
    }
}

public class BizSideCustomProcessor implements ContextHandler<InstanceBuildContext> {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public boolean handle(InstanceBuildContext context) {
        logger.info("--Business party custom data processing--");

        // First judge whether there is user-defined data processing. If not, directly return true

        // Call the customized form data processing of the business party

        return true;
    }
}


public class CommonHeadHandler implements ContextHandler<PipelineContext> {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public boolean handle(PipelineContext context) {
        logger.info("Pipeline start execution: context={}", JSON.toJSONString(context));

        // Set start time
        context.setStartTime(LocalDateTime.now());

        return true;
    }
}


public class CommonTailHandler implements ContextHandler<PipelineContext> {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public boolean handle(PipelineContext context) {
        // Set processing end time
        context.setEndTime(LocalDateTime.now());

        logger.info("Pipeline execution completed: context={}", JSON.toJSONString(context));

        return true;
    }
}

4. Define context processor routing table

package pipeline.one.impl;

@Configuration
public class PipelineRouteConfig implements ApplicationContextAware {

    /**
     * Routing table: context (data) type -> list of processor types in its pipeline
     */
    private static final
    Map<Class<? extends PipelineContext>,
        List<Class<? extends ContextHandler<? extends PipelineContext>>>> PIPELINE_ROUTE_MAP = new HashMap<>(4);

    /*
     * Configure the processing pipeline corresponding to various context types here: the key is the context type and the value is the list of processor types
     */
    static {
        PIPELINE_ROUTE_MAP.put(InstanceBuildContext.class,
                               Arrays.asList(
                                       InputDataPreChecker.class,
                                       ModelInstanceCreator.class,
                                       ModelInstanceSaver.class
                               ));

        // Pipeline configuration of other Context in the future
    }

    /**
     * When Spring starts, the corresponding pipeline mapping relationship is generated according to the routing table
     */
    @Bean("pipelineRouteMap")
    public Map<Class<? extends PipelineContext>, List<? extends ContextHandler<? extends PipelineContext>>> getHandlerPipelineMap() {
        return PIPELINE_ROUTE_MAP.entrySet()
                                 .stream()
                                 .collect(Collectors.toMap(Map.Entry::getKey, this::toPipeline));
    }

    /**
     * Build a pipeline based on a list of ContextHandler types in a given pipeline
     */
    private List<? extends ContextHandler<? extends PipelineContext>> toPipeline(
            Map.Entry<Class<? extends PipelineContext>, List<Class<? extends ContextHandler<? extends PipelineContext>>>> entry) {
        return entry.getValue()
                    .stream()
                    .map(appContext::getBean)
                    .collect(Collectors.toList());
    }

    private ApplicationContext appContext;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        appContext = applicationContext;
    }
}

5. Define the pipeline executor

package pipeline.one;

public class PipelineExecutor {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    /**
     * General head/tail processors that always run
     */
    @Autowired
    private CommonHeadHandler commonHeadHandler;

    @Autowired
    private CommonTailHandler commonTailHandler;

    /**
     * References the pipelineRouteMap from PipelineRouteConfig
     */
    @Resource
    private Map<Class<? extends PipelineContext>,
                List<? extends ContextHandler<? super PipelineContext>>> pipelineRouteMap;

    /**
     * Process the input context data synchronously <br/>
     * Returns true only if the context flows through to the last processor and that
     * processor returns true; otherwise returns false
     *
     * @param context Input context data
     * @return Whether the pipeline ran through unblocked
     */
    public boolean acceptSync(PipelineContext context) {
        // [general head processor] processing
        commonHeadHandler.handle(context);

        // Look up the pipeline for this context type
        List<? extends ContextHandler<? super PipelineContext>> pipeline =
                pipelineRouteMap.get(context.getClass());

        // Whether the pipeline is unblocked
        boolean lastSuccess = true;

        for (ContextHandler<? super PipelineContext> handler : pipeline) {
            try {
                // The current processor handles the data and returns whether to continue
                lastSuccess = handler.handle(context);
            } catch (Throwable ex) {
                lastSuccess = false;
                logger.error("[{}] handler threw an exception, handler={}", context.getName(), handler.getClass().getSimpleName(), ex);
            }

            // Stop processing
            if (!lastSuccess) { break; }
        }

        // [general tail processor] processing
        commonTailHandler.handle(context);

        return lastSuccess;
    }
}
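Ignoring the Spring wiring, the control flow of the executor reduces to the following self-contained sketch (the handler names are illustrative; a `StringBuilder` trace stands in for the context):

```java
import java.util.Arrays;
import java.util.List;

// Condensed pipeline: each handler returns true to pass the context on,
// false to stop the flow; head/tail bookkeeping always runs.
public class PipelineDemo {

    static class Context {
        StringBuilder trace = new StringBuilder();
    }

    interface Handler {
        boolean handle(Context ctx);
    }

    static boolean acceptSync(Context ctx, List<Handler> pipeline) {
        ctx.trace.append("head;"); // common head handler
        boolean lastSuccess = true;
        for (Handler h : pipeline) {
            try {
                lastSuccess = h.handle(ctx);
            } catch (RuntimeException ex) {
                lastSuccess = false; // a throwing handler blocks the pipeline
            }
            if (!lastSuccess) break; // stop flowing downstream
        }
        ctx.trace.append("tail;"); // common tail handler always runs
        return lastSuccess;
    }

    static String run() {
        Context ctx = new Context();
        boolean ok = acceptSync(ctx, Arrays.asList(
                c -> { c.trace.append("check;"); return true; },
                c -> { c.trace.append("create;"); return true; },
                c -> { c.trace.append("save;"); return true; }
        ));
        return ok + " " + ctx.trace;
    }

    public static void main(String[] args) {
        System.out.println(run()); // true head;check;create;save;tail;
    }
}
```

Adding "form data preprocessing" or "business party custom processing" is then just inserting another lambda into the list, which is exactly what the routing table in `PipelineRouteConfig` makes configurable.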

Mode advantages

  • Each processing step is cleanly separated into a dedicated processor
  • By assembling concrete processors, many different business flows can be composed dynamically
  • The system is flexible and easy to extend

Mode disadvantages

  • If there are too many steps, there will be too many corresponding processors, which will indirectly lead to class expansion
  • The code call hierarchy becomes more complex

Open source framework

  • Sentinel: a typical circuit-breaking and rate-limiting framework; its slot chain is a typical application
  • Netty: there is a classic ChannelPipeline mode, interspersed with the whole input and output processing

13: Master-worker mode

Schema definition

The master allocates tasks and collects the processing results of each worker. One master can coordinate multiple workers running in parallel (in the spirit of MPP, massively parallel processing), making full use of the CPU resources of each machine

Mode essence

The unified allocation of resources is separated from the execution of tasks

Usage scenario

  • A large task can be cut into multiple small tasks, which can be calculated in parallel
  • Data batch fast parallel processing

code implementation

1. Create master

package master.work;

import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date 06-28
 * @description Encapsulates the task queue, the workers, and the result set written back by each worker
 */
public class Master {

    /**
     * Encapsulates the queue of tasks
     */
    private ConcurrentLinkedQueue<Product> productConcurrentLinkedQueue = new ConcurrentLinkedQueue<Product>();

    /**
     * The worker threads, keyed by worker index
     */
    private Map<String,Thread> workerThread = new HashMap<String, Thread>();

    /**
     * Save the results of each worker execution
     */
    private ConcurrentHashMap<String,Object> workerRunResult = new ConcurrentHashMap<String, Object>();


    public Master(Worker worker,int workCount){
        // All worker threads share one Worker instance, which in turn
        // shares the task queue and the result map with the master
        worker.setProductConcurrentLinkedQueue(productConcurrentLinkedQueue);
        worker.setWorkerRunResult(workerRunResult);
        for (int i=0;i<workCount;i++){
            this.workerThread.put(Integer.toString(i),new Thread(worker));
        }
    }


    /**
     * submit Task submission
     */
    public void submit(Product product){
        this.productConcurrentLinkedQueue.add(product);
    }

    /**
     * execute Task execution
     */
    public void execute(){
        for (Map.Entry<String, Thread> entry:workerThread.entrySet()){
            entry.getValue().start();
        }
    }

    /**
     * isComplete Whether all worker threads have finished
     */
    public boolean isComplete(){
        for (Map.Entry<String, Thread> entry:workerThread.entrySet()){
            if (entry.getValue().getState()!=Thread.State.TERMINATED){
                return false;
            }
        }
        return true;
    }

    /**
     * getResult Aggregate the results produced by all workers
     */
    public int getResult() {
        int priceResult = 0;
        for(Map.Entry<String, Object> me : this.workerRunResult.entrySet()){
            priceResult += (Integer)me.getValue();
        }
        return priceResult;
    }
}

2. Define the worker

package master.work;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date 06-28
 * @description Simulate a single thread for commodity pricing
 */
public abstract class Worker implements Runnable {

    /**
     * Define a concurrent queue to store the commodity information to be processed
     */

    private ConcurrentLinkedQueue<Product> productConcurrentLinkedQueue;


    public void setProductConcurrentLinkedQueue(ConcurrentLinkedQueue<Product> productConcurrentLinkedQueue) {
        this.productConcurrentLinkedQueue = productConcurrentLinkedQueue;
    }

    /**
     * Stores the result produced by each worker's execution
     */
    private ConcurrentHashMap<String,Object> workerRunResult;

    public void setWorkerRunResult(ConcurrentHashMap<String, Object> workerRunResult) {
        this.workerRunResult = workerRunResult;
    }

    /**
     * When an object implementing interface <code>Runnable</code> is used
     * to create a thread, starting the thread causes the object's
     * <code>run</code> method to be called in that separately executing
     * thread.
     * <p>
     * The general contract of the method <code>run</code> is that it may
     * take any action whatsoever.
     *
     * @see Thread#run()
     */
    @Override
    public void run() {
        while (true){
            Product product = this.productConcurrentLinkedQueue.poll();
            if (product==null){
                break;
            }
            Object handle = handle(product);
            System.out.println("Thread " + Thread.currentThread().getName() + " finished calculating " + product);
            this.workerRunResult.put(Integer.toString(product.getPrdId()),handle);
        }
    }

    /**
     * Implemented by a concrete business component that actually prices the product
     * @param input
     * @return
     */
    protected abstract Object handle(Product input) ;
}

3. Define the task to be processed

package master.work;

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date 06-29
 * @description Product information
 */
public class Product {
    private int prdId;
    private int prdPrice;
    public int getPrdId() {
        return prdId;
    }
    public void setPrdId(int prdId) {
        this.prdId = prdId;
    }
    public int getPrdPrice() {
        return prdPrice;
    }
    public void setPrdPrice(int prdPrice) {
        this.prdPrice = prdPrice;
    }

    @Override
    public String toString() {
        return "Product info: id " + prdId + ", price " + prdPrice + " yuan";
    }
}

4. Define the concrete task-handling implementation

package master.work;

/**
 * @author hpjia.abcft
 * @version 1.0
 * @date 06-28
 * @description Business logic to truly realize the function of commodity pricing
 */
public class TaskWork extends Worker {


    @Override
    protected Object handle(Product input) {
        Object output = null;
        try {
            //Simulate time-consuming operations
            Thread.sleep(100);
            output = input.getPrdPrice();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return output;
    }
}
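With the four classes above in place, the overall flow can be summarized in a minimal, self-contained sketch (class and method names here are illustrative, not the article's): the master fills a shared queue, workers drain it in parallel, and the master aggregates the results, using join() instead of polling thread states as isComplete() does.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Condensed Master-Worker sketch: submit prices 1..n to a shared queue,
// fan out to workerCount threads, join them, and return the total.
public class MasterWorkerDemo {

    public static int parallelSum(int n, int workerCount) throws InterruptedException {
        ConcurrentLinkedQueue<Integer> tasks = new ConcurrentLinkedQueue<>();
        for (int price = 1; price <= n; price++) {
            tasks.add(price);                       // the submit() step
        }
        AtomicInteger total = new AtomicInteger(0); // shared result store
        List<Thread> workers = new ArrayList<>();
        for (int i = 0; i < workerCount; i++) {
            Thread t = new Thread(() -> {           // the Worker.run() loop
                Integer price;
                while ((price = tasks.poll()) != null) {
                    total.addAndGet(price);         // handle() + write back
                }
            });
            workers.add(t);
            t.start();                              // the execute() step
        }
        for (Thread t : workers) {
            t.join();                               // wait instead of polling isComplete()
        }
        return total.get();                         // the getResult() step
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parallelSum(100, 4));    // prints 5050
    }
}
```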

Mode advantages

  • It can fully utilize the CPU of each machine for the best performance
  • Workers can be scaled out horizontally

Mode disadvantages

  • Inter-system communication becomes more complex and less stable
  • The master may become a bottleneck

Open source framework

  • JDK: the built-in Fork/Join framework
  • Hadoop: the MapReduce (MR) mechanism
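As a point of comparison, the JDK's Fork/Join framework bakes the same split-and-aggregate idea into RecursiveTask. The sketch below (illustrative; it sums 1..10000) shows fork() fanning work out to worker threads and join() gathering it back, much like Master.execute() and getResult() above.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Fork/Join version of split-and-aggregate: a big range is recursively
// halved until each piece is small enough to sum directly.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000;
    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {           // small enough: compute directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;              // otherwise: split in half
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                            // run the left half asynchronously
        return right.compute() + left.join();   // combine both halves
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // prints 50005000
    }
}
```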


Added by mojodojo on Sat, 01 Jan 2022 01:45:39 +0200