Brief Analysis of Hammer.js Source Code

Start

Last weekend I was bored out of my mind and suddenly wanted to know how gestures are handled on the front end, to clear up a blind spot of mine, so I started chewing through the source code... and here is the record.

Gestures

Complex gestures should be rare in our front-end pages; usually it's just dragging, double-tapping, and pinching to zoom in and out. But using gestures well can clearly enhance the interactive experience of a page. So the question is: how do we recognize a gesture?

Hammer.js

Hammer.js can be considered a widely used gesture framework on the front end (I also know of AlloyTouch, which is less abstract than Hammer.js), so let's start with this framework today.

Configuration parameters

Let's first look at the configuration parameters of Hammer.js:

{
      // Whether a native DOM event is also dispatched when a gesture event fires;
      // of course, this is less efficient than binding the callback directly
      domEvents: false, 
      // Affects the value of the element's CSS touch-action property, more on this below
      touchAction: TOUCH_ACTION_COMPUTE, 
      enable: true, // Whether to enable gesture recognition
      // You can specify another element on which touch-related events are detected and
      // used as the input source; if not set, it is the element currently being detected
      inputTarget: null, 
      inputClass: null, // Input source type: mouse, touch, or a mix
      recognizers: [], // The gesture recognizers we configure
      // Preset gesture recognizers, format: [RecognizerClass, options, [recognizeWith, ...], [requireFailure, ...]]
      preset: [ 
          [RotateRecognizer, { enable: false }],
          [PinchRecognizer, { enable: false }, ['rotate']],
          [SwipeRecognizer, { direction: DIRECTION_HORIZONTAL }],
          [PanRecognizer, { direction: DIRECTION_HORIZONTAL }, ['swipe']],
          [TapRecognizer],
          [TapRecognizer, { event: 'doubletap', taps: 2 }, ['tap']],
          [PressRecognizer]
      ],
      cssProps: { // Some additional css properties
        userSelect: 'none',
        touchSelect: 'none',
        touchCallout: 'none',
        contentZooming: 'none',
        userDrag: 'none',
        tapHighlightColor: 'rgba(0,0,0,0)'
     }
}

Generally speaking, the configuration parameters are neither numerous nor complex; the framework basically works out of the box. Okay, let's dig deeper.
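
For context, here's a minimal usage sketch showing how these options are passed in (the #stage selector and the option values are just placeholders for illustration):

    // A minimal usage sketch; '#stage' is a hypothetical element.
    const el = document.querySelector('#stage');
    const mc = new Hammer(el, {
      touchAction: 'auto', // skip the compute step; let the browser keep its defaults
      domEvents: true      // also dispatch native DOM events (costs a little performance)
    });
    mc.on('pan', (ev) => console.log(ev.deltaX, ev.deltaY));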

Initialization

Next, go to manager.js in the source, where you can see the following code:

export default class Manager {
    constructor(element, options) {
        ...
        this.element = element;
        this.input = createInputInstance(this); // 1
        this.touchAction = new TouchAction(this, this.options.touchAction); // 2

        toggleCssProps(this, true);

        each(this.options.recognizers, (item) => { // 3
            let recognizer = this.add(new (item[0])(item[1]));
            item[2] && recognizer.recognizeWith(item[2]);
            item[3] && recognizer.requireFailure(item[3]);
        }, this);
    }
    ...
} 

1. Create a new input source
Depending on the device, a gesture may come from a mouse or from a phone's touch screen, and the properties of mouse events differ slightly from those of touch events (and pointer events). So for the convenience of subsequent processing, Hammer.js defines different types of input sources: MouseInput, PointerEventInput, SingleTouchInput, TouchInput and TouchMouseInput. For each kind of event, a simple handler method processes the parameters and finally produces data in a uniform format, like this:

    {
       pointers: touches[0],
       changedPointers: touches[1],
       pointerType: INPUT_TYPE_TOUCH,
       srcEvent: ev
    }
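
To make that concrete, here's a rough sketch, simplified from the touch input source (not the exact code), of how a handler normalizes a native TouchEvent into that shape; TOUCH_INPUT_MAP maps native event types to the internal input types:

    // Rough sketch of a TouchInput-style handler (simplified from the source):
    function handler(ev) {
      const touches = [Array.from(ev.touches), Array.from(ev.changedTouches)];
      this.callback(this.manager, TOUCH_INPUT_MAP[ev.type], {
        pointers: touches[0],        // all active touch points
        changedPointers: touches[1], // the points that changed in this event
        pointerType: INPUT_TYPE_TOUCH,
        srcEvent: ev
      });
    }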

After the uniformly formatted input data is obtained, it is handed over to inputHandler for further processing, which judges whether the input is the beginning or the end of a gesture. If it is the beginning, it creates a new session for gesture recognition and computes some gesture-related data (angle, offset distance, movement direction, etc.); the details can be seen in compute-input-data.js.
After this round of calculation, we have enough data to support gesture recognition.
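
Two of those helpers, in slightly simplified form, give a feel for the computations (the DIRECTION_* constants are flags defined in the source):

    // Simplified versions of the helpers used by compute-input-data.js
    function getDistance(p1, p2) {
      const x = p2.x - p1.x;
      const y = p2.y - p1.y;
      return Math.sqrt((x * x) + (y * y));
    }

    function getDirection(x, y) {
      if (x === y) { return DIRECTION_NONE; }
      if (Math.abs(x) >= Math.abs(y)) {
        return x < 0 ? DIRECTION_LEFT : DIRECTION_RIGHT;
      }
      return y < 0 ? DIRECTION_UP : DIRECTION_DOWN;
    }
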
In addition, these five input sources all inherit from Input, where events are bound as follows:


    this.evEl && addEventListeners(this.element, this.evEl, this.domHandler);
    this.evTarget && addEventListeners(this.target, this.evTarget, this.domHandler);
    this.evWin && addEventListeners(getWindowForElement(this.element), this.evWin, this.domHandler);

There are three binding targets: the current element, the inputTarget, and the window the element belongs to. Binding event handlers on the window is still necessary (for example, when dragging an element past its bounds); going through the code, what the inputTarget binds are the touch-related events, and it is not clear to me what the intention and scenario are for splitting out a separate target just to handle touch events.
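
For completeness, addEventListeners is a small utility that splits a space-separated event string and binds each type; paraphrased, it looks roughly like this:

    // Paraphrase of the addEventListeners utility: evEl / evTarget / evWin are
    // space-separated strings such as 'touchstart touchmove touchend touchcancel'
    function addEventListeners(target, types, handler) {
      types.trim().split(/\s+/).forEach((type) => {
        target.addEventListener(type, handler, false); // bound in the bubble phase
      });
    }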

2. Set the value of touch-action in the element style
Mobile browsers also do some gesture handling of their own, such as swiping right or left for forward and back navigation, so besides defining our own gestures, we also need to restrict or disable the browser's gestures.
Here's an example. Hammer.js ships a default drag gesture recognizer (pan.js) that detects horizontal dragging; this recognizer sets the value of touch-action to pan-y (allowing the browser to handle vertical dragging, which may be a vertical scroll or the like). If I then also define a vertical drag recognizer, what will the value of touch-action be? (The answer is none: the browser won't handle anything for us anymore, and vertical scrolling is on its own.) How is that worked out?
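
A quick sketch of that scenario (el stands for whatever element you attach to):

    // Sketch: with both a horizontal and a vertical pan recognizer enabled,
    // the computed touch-action collapses to 'none'.
    const mc = new Hammer.Manager(el, { touchAction: 'compute' });
    mc.add(new Hammer.Pan({ event: 'panh', direction: Hammer.DIRECTION_HORIZONTAL })); // wants pan-y
    mc.add(new Hammer.Pan({ event: 'panv', direction: Hammer.DIRECTION_VERTICAL }));   // wants pan-x
    // pan-x + pan-y together mean the browser may handle nothing => 'none'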

When the TouchAction object is created, if the value of touchAction in the configuration is TOUCH_ACTION_COMPUTE, the compute method is called; it traverses the recognizers and collects the touch-action values each of them wants to set:

    compute() {
        let actions = [];
        each(this.manager.recognizers, (recognizer) => {
          if (boolOrFn(recognizer.options.enable, [recognizer])) {
            actions = actions.concat(recognizer.getTouchAction());
          }
        });
        return cleanTouchActions(actions.join(' '));
    }

Finally, the cleanTouchActions method works out the final value in one place:

     ...
     let hasPanX = inStr(actions, TOUCH_ACTION_PAN_X);
     let hasPanY = inStr(actions, TOUCH_ACTION_PAN_Y);
     if (hasPanX && hasPanY) {
       return TOUCH_ACTION_NONE;
     }
     ...
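
A few sample inputs illustrate the priority; the return values below are my reading of the method (the constants boil down to these strings):

    cleanTouchActions('pan-x pan-y'); // -> 'none': both axes claimed, the browser gets nothing
    cleanTouchActions('pan-y');       // -> 'pan-y': the browser keeps vertical scrolling
    cleanTouchActions('none pan-y');  // -> 'none': 'none' always wins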

3. Configuring the gesture recognizers
This mainly configures the relationships between the gesture recognizers: whether they cooperate with each other or are mutually exclusive. Here's an example from the official website:


    var hammer = new Hammer(el, {});
    
    var singleTap = new Hammer.Tap({ event: 'singletap' });
    var doubleTap = new Hammer.Tap({event: 'doubletap', taps: 2 });
    var tripleTap = new Hammer.Tap({event: 'tripletap', taps: 3 });
    
    hammer.add([tripleTap, doubleTap, singleTap]);
    
    tripleTap.recognizeWith([doubleTap, singleTap]);
    doubleTap.recognizeWith(singleTap);
    
    doubleTap.requireFailure(tripleTap);
    singleTap.requireFailure([tripleTap, doubleTap]);

Three gesture recognizers are defined: singleTap, doubleTap and tripleTap. Obviously these three recognizers are mutually exclusive; it would be embarrassing if tapping the screen three times fired all three events.
Note the order of addition here, because Hammer.js traverses the recognizers in order when calling their recognition methods. Since we have set up mutually exclusive gestures, in order to know whether a gesture is a single tap or a double tap, the singleTap, doubleTap and tripleTap recognizers each wait 300 ms before judging whether any further tap events will arrive. Thanks to the recognition order, singleTap can always consult the recognition results of tripleTap and doubleTap to decide whether to fire its event. If we did not set the mutually exclusive relationships between them, Hammer.js would fire each one by default as soon as its conditions were met, and the embarrassing scenario just mentioned would appear.
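
The waiting itself is just a timer. Conceptually (this is a sketch of the idea, not the actual TapRecognizer code; emit stands in for the real event dispatch) it looks like this:

    // Conceptual sketch of the 300ms wait: singletap only fires if no further
    // tap arrives within the interval; a second tap cancels it and restarts.
    let timer = null;
    function onTapDetected() {
      clearTimeout(timer);
      timer = setTimeout(() => {
        emit('singletap'); // no doubletap/tripletap materialized
      }, 300);
    }
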
What's the use of recognizeWith? Look at the following code:


    if (!curRecognizer || (curRecognizer && curRecognizer.state & STATE_RECOGNIZED)) {
      curRecognizer = session.curRecognizer = null;
    }

    let i = 0;
    while (i < recognizers.length) {
      recognizer = recognizers[i];
      if (session.stopped !== FORCED_STOP && (
              !curRecognizer || recognizer === curRecognizer ||
              recognizer.canRecognizeWith(curRecognizer))) {
        recognizer.recognize(inputData);
      } else {
        recognizer.reset();
      }
      if (!curRecognizer && recognizer.state & (STATE_BEGAN | STATE_CHANGED | STATE_ENDED)) {
        curRecognizer = session.curRecognizer = recognizer;
      }
      i++;
    }

Although singleTap, doubleTap and tripleTap are mutually exclusive in terms of the final result, the same input data may be recognized by several gesture recognizers at the same time. For example, when the user taps the screen, the singleTap recognizer's state may become STATE_RECOGNIZED or STATE_BEGAN (waiting for the results of doubleTap and tripleTap). As a result, the session records the singleTap recognizer as the current gesture recognizer, but doubleTap and tripleTap still need to record some state (such as the current number of taps), because the next tap may well turn the gesture into a double tap. When the user taps again, the doubleTap recognizer, because recognizeWith(singleTap) was set, gets to process the input data alongside singleTap; the doubleTap recognizer then enters STATE_RECOGNIZED or STATE_BEGAN (waiting for the result of the tripleTap recognizer). At this point the session's current gesture recognizer is doubleTap, while the singleTap recognizer, which does not have recognizeWith(doubleTap), is reset.
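
The bitwise tests above work because recognizer states are bit flags, with the values defined in the source:

    // Recognizer states are powers of two, so a single & can test several at once
    const STATE_POSSIBLE = 1;
    const STATE_BEGAN = 2;
    const STATE_CHANGED = 4;
    const STATE_ENDED = 8;
    const STATE_RECOGNIZED = STATE_ENDED;
    const STATE_CANCELLED = 16;
    const STATE_FAILED = 32;

    // So this one test matches any "active" recognizer:
    // recognizer.state & (STATE_BEGAN | STATE_CHANGED | STATE_ENDED)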

A little detail

When we rotate a picture, how does the rotation happen and how do we know the angle of rotation?
Back to the computeInputData method, there's a line of code to get the deflection angle:

    ...
    let center = input.center = getCenter(pointers);
    ...
    input.angle = getAngle(offsetCenter, center);
    ...

Tracing into the getCenter method:


     let x = 0;
     let y = 0;
     let i = 0;
     while (i < pointersLength) {
       x += pointers[i].clientX;
       y += pointers[i].clientY;
       i++;
     }

     return {
       x: round(x / pointersLength),
       y: round(y / pointersLength)
     };

Computing the center position of the gesture is very simple. When our fingers rotate, the center position moves along with them, and the deflection angle between the previous and current centers is easy to compute.
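
getAngle itself is essentially one call to Math.atan2 (slightly simplified here, dropping the configurable property names):

    // getAngle is atan2 on the two centers (simplified from the source):
    function getAngle(p1, p2) {
      const x = p2.x - p1.x;
      const y = p2.y - p1.y;
      return Math.atan2(y, x) * 180 / Math.PI; // in degrees
    }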

Last Point of Consideration

Hammer.js binds event handlers in the bubble phase. Why not intercept events in the capture phase? Once, say, a rightward gesture has been recognized, subsequent events (such as touchmove) no longer need to be passed down to the child nodes and can be handled on the intercepting element, so performance should improve a little as well. I'll dig this hole for myself and implement it later.
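
If you want to experiment with that idea, capture-phase interception is just the third argument to addEventListener. A sketch of the idea follows (gestureRecognized and handleGesture are placeholders, and this is not something Hammer.js currently does):

    // Sketch: intercept touchmove in the capture phase on a container, so a
    // recognized gesture can stop the event before it reaches child nodes.
    container.addEventListener('touchmove', (ev) => {
      if (gestureRecognized) {
        ev.stopPropagation(); // child nodes will not receive the event
        handleGesture(ev);
      }
    }, true); // true = listen in the capture phase
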
Last but not least...
Since I have no experience writing source-code analyses, errors and omissions are unavoidable; I look forward to corrections.
