Film recommendation system (Xiamen University Database Lab version)

Resource address: http://dblab.xmu.edu.cn/post/movierecommend/

Project introduction

1. Recommendation system

A recommendation system discovers users' potential needs from their historical data.

2. Long-tail items

Unlike popular items, which reflect users' general needs, long-tail items reflect users' personalized needs.

3. Recommendation methods

1) Expert recommendation (manual recommendation): senior experts recommend items based on experience.
2) Statistical recommendation (popularity-based recommendation): aggregate historical records to produce recommendations; simple and effective.
3) Content-based recommendation: use machine learning to find similar items based on item features.
4) Collaborative filtering recommendation: find the users closest to the target user from historical data, and estimate the target user's preference for an item from those similar users' preferences for it.
5) Hybrid recommendation: a combination of several of the above methods.

4. Recommendation system architecture

Three elements:
1) User features (user behavior data and attribute data)
2) Item features (item-user interaction data and item attribute data)
3) Recommendation algorithm
Two key metrics:
1) Recommendation accuracy
2) Recommendation computation time

5. Collaborative filtering algorithm

Three types:
1) User-based collaborative filtering (userCF)
2) Item-based collaborative filtering (itemCF)
3) Model-based collaborative filtering (modelCF)

userCF algorithm - steps:
1) Find a set of users with the same interests as the target user.
2) Find items that the users in this set like but the target user has not yet encountered, and recommend them to the target user.

For example, if user 1 has watched movies a and b, and user 2 has watched movies a, b and c, then users 1 and 2 are considered similar, and movie c, which user 2 has watched but user 1 has not, is recommended to user 1 (see the sketch below).
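A minimal, self-contained sketch of this idea in plain Scala (no Spark). The Jaccard overlap used here is just one common similarity choice, and the data is invented for illustration; the project itself uses ALS, described later:

object UserCFSketch {
  // Jaccard similarity of two watched-movie sets: |A intersect B| / |A union B|
  def jaccard(a: Set[String], b: Set[String]): Double =
    if (a.isEmpty && b.isEmpty) 0.0
    else (a intersect b).size.toDouble / (a union b).size.toDouble

  def main(args: Array[String]): Unit = {
    val user1 = Set("a", "b")        // user 1 watched movies a and b
    val user2 = Set("a", "b", "c")   // user 2 watched movies a, b and c
    println(jaccard(user1, user2))   // 0.666..., so the users are similar
    println(user2 diff user1)        // Set(c): recommend movie c to user 1
  }
}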

itemCF algorithm - steps:

1) Calculate the similarity between items.

2) Make recommendations based on item similarity and the user's history.
For example, if user 1 has watched movies a, d and g, and user 2 has watched movies c, d and g, then d and g are considered similar movies. If user 3 has watched movie d but not movie g, movie g is recommended to user 3.
itemCF (item-based collaborative filtering) and the content-based recommendation described above both need to find similar items. What is the difference between the two?
Content-based recommendation computes similarity from item attribute data (for example, if movies a and c both have the genre attribute "comedy", or the region attribute "mainland", the two movies are similar).

itemCF computes similarity from the behavior records of a large number of users (for example, if most users have watched both movie a and movie c, the two movies are similar); a sketch follows.
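As a sketch of the itemCF similarity idea (plain Scala; the counts are invented purely for illustration): treating each movie as a binary "watched" vector over users, the cosine similarity of two movies reduces to the number of common viewers divided by the geometric mean of their viewer counts.

object ItemCFSketch {
  // co-viewers / sqrt(viewers(A) * viewers(B))
  def cooccurrenceSimilarity(both: Long, countA: Long, countB: Long): Double =
    both / math.sqrt(countA.toDouble * countB.toDouble)

  def main(args: Array[String]): Unit = {
    // Suppose 900 users watched both d and g, 1000 watched d, 1200 watched g:
    println(cooccurrenceSimilarity(900, 1000, 1200)) // about 0.82: d and g are similar
  }
}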

modelCF algorithm:
Users and items are considered simultaneously: a user's preferences are judged from the ratings users give to items (see below for details).
Each algorithm has application scenarios it suits best.

6. Film recommendation system
The movie recommendation system uses the collaborative filtering algorithm based on ALS matrix factorization in Spark MLlib, which belongs to the model-based collaborative filtering algorithms (modelCF).

Spark MLlib:
MLlib is Spark's machine learning library. It aims to simplify the engineering practice of machine learning and to make it easy to scale to larger problems. MLlib consists of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering and dimensionality reduction, and it also includes low-level optimization primitives and a high-level pipeline API.

7. Movie recommendation algorithm steps

1) Get user id

2) Delete the user's previous recommendation results

3) Load the sample rating data (different users' ratings of different movies: userid, movieid, rating, timestamp)

4) Load the movie information data (take movieid, moviename and typelist from the movieinfo table)

Note: the sample rating data and the movie information data are .dat files uploaded into HDFS

5) Split the sample rating data into three parts: 60% for training (training set), 20% for validation (validation set) and 20% for testing (test set)

6) Train models under different parameters and evaluate them on the validation set to find the best model
How to train?
Set the parameters (the number of latent factors, the ALS regularization parameter and the number of iterations), and pass them together with the training set into the ALS train function of the Spark MLlib library to obtain a recommendation model. Adjusting the parameters yields multiple different models, as the sketch below shows.
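In Spark MLlib terms, the training call looks roughly like this (a sketch only; the parameter values are illustrative, and the full program appears later):

import org.apache.spark.mllib.recommendation.{ALS, Rating}
import org.apache.spark.rdd.RDD

// training: RDD[Rating(user, product, rating)] built from the 60% split
def trainOneModel(training: RDD[Rating]) = {
  val rank = 10    // number of latent factors
  val numIter = 10 // number of iterations
  val lambda = 0.1 // ALS regularization parameter
  ALS.train(training, rank, numIter, lambda) // returns a MatrixFactorizationModel
}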

How to validate?
Feed the validation set into each model to obtain the predicted ratings, compute the root mean square error (RMSE) between the predicted and actual ratings, and among the candidate models pick the one with the smallest RMSE as the best model.
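Concretely, for $n$ validation ratings the RMSE is

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{j=1}^{n}\left(\hat{r}_j - r_j\right)^2}$$

where $\hat{r}_j$ is the predicted rating and $r_j$ the actual rating; this is exactly what the computeRmse function in the code below evaluates.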
7) Use the best model to predict ratings on the test set, and compute the root mean square error between the predicted and actual ratings to assess and refine the best model.
8) Use the best model to predict the given user's ratings for all movies in the movie information data set, and select the ten movies with the highest predicted ratings

9) Store the recommendation results in the recommended result table of the database

8. ALS() function

ALS() function:
Users' rating behavior for movies can be represented as an m x n rating matrix A, with U (users) indexing the rows and I (movie items) the columns. Since users do not rate every movie, the matrix necessarily has missing values, i.e. it is sparse.
In practical applications the values of m and n are very large, and the matrix can easily exceed hundreds of millions of entries. It then has to be factorized, and the ALS algorithm achieves good results on the factorization of such large-scale matrices.
The ALS algorithm seeks two low-dimensional matrices X (m x k) and Y (k x n) such that their product XY approximates A. Here k is called the number of latent factors.
The original rating matrix A holds m users' ratings of n movies. After factorization:
the X matrix represents the m users' preferences for k movie features (genre, director, era, etc.), and the Y matrix represents how strongly each of the n movies expresses the k features. The inner product of the vectors Xu and Yi is the approximate rating of user u for movie i.
The ALS algorithm, i.e. Alternating Least Squares, first randomly initializes X (call it X0), then fixes X0 and solves for Y0, then fixes Y0 and solves for X1, and so on alternately, until the root mean square error between the inner products (i.e. the predicted ratings) and the actual ratings falls below a predefined threshold; the result is a recommendation model.
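For reference, the regularized objective that ALS minimizes (this formulation comes from the standard ALS literature rather than from the write-up above) is

$$\min_{X,Y} \sum_{(u,i)\in\Omega} \left(A_{ui} - x_u^{\top} y_i\right)^2 + \lambda\left(\sum_u \lVert x_u\rVert^2 + \sum_i \lVert y_i\rVert^2\right)$$

where $\Omega$ is the set of (user, movie) pairs with observed ratings, $x_u$ is user $u$'s row of $X$, $y_i$ is movie $i$'s column of $Y$, and $\lambda$ is the regularization parameter passed to ALS. Fixing $X$ turns the problem into an ordinary least squares solve for $Y$ and vice versa, which is why alternating between the two converges.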

requirement analysis

A movie recommendation system provides information services based on users' interests, characteristics and needs. Unlike a general search engine, a recommendation system makes personalized recommendations by studying users' own interests and preferences. A good recommendation system can automatically mine users' points of interest, guide users to their real information needs, and build a relationship with users by providing personalized recommendation services, so that users come to rely on it.
Movie recommendation studies users' personalized data together with currently popular movies to provide personalized video recommendation services, increasing user stickiness and improving the traffic of the video website. For online movie providers, the effectiveness of the movie recommendation system directly affects the company's economic returns, and can even affect the company's development.

Outline design

1. Overall function module design
The movie recommendation system designed in this paper includes the following functional modules: user registration, movie rating and movie recommendation. The movie recommendation module is the core module of the system.
2. Database design
The system uses a MySQL database. Based on the Django model [9], four data tables are designed: movieinfo, personalratings, recommendresult and user.
The structures of the four tables are as follows:
movieinfo

recommendresult

user

personalratings

Detailed design and implementation

Implementation process of movie recommendation based on ALS collaborative filtering algorithm

1. ETL the data set into HDFS

1. Prepare the data set
The data set used in this case is movie_recommend.zip; these data sets were prepared for us by the teacher, and you can also download them from the Xiamen University Database Lab website.
movie_recommend.zip contains three data sets:
User rating data set: ratings.dat
Sample rating data set: personalRatings.txt
Movie data set: movies.dat
Start Hadoop with the following command:
$ cd /usr/local/hadoop
$ ./sbin/start-dfs.sh
Next, create an HDFS directory named input_spark for storing the data set of this case. If the directory has not been created before, use the following command to create it:
$ cd /usr/local/hadoop
$ ./bin/hdfs dfs -mkdir /input_spark

2. Use the Kettle tool to ETL the data into HDFS
Use Kettle to load the data into HDFS: ratings.dat, personalRatings.txt, movies.dat and user.dat are uploaded to the "input_spark" directory of HDFS.
Through this small experiment, we get a rough feel for how Kettle workflows are used.

After the transfer, you can use HDFS shell commands in the Linux terminal to view the files just uploaded to HDFS. For example, you can view the first five rows of ratings.dat:
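A command along the following lines (assuming the files were uploaded to /input_spark as above) prints the first five rows:
$ cd /usr/local/hadoop
$ ./bin/hdfs dfs -cat /input_spark/ratings.dat | head -5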

2. Write a Spark program to implement movie recommendation

Code:
package recommend

import java.io.File
import scala.io.Source
import org.apache.log4j.{ Level, Logger }
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.rdd._

import org.apache.spark.mllib.recommendation.ALS
import org.apache.spark.mllib.recommendation.Rating
import org.apache.spark.mllib.recommendation.MatrixFactorizationModel

object MovieLensALS {

def main(args: Array[String]) {
//Mask unnecessary logs displayed on the terminal
Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)

if (args.length != 5) {
  println("Usage: /usr/local/spark/bin/spark-submit --class recommend.MovieLensALS " +
    "Spark_Recommend.jar movieLensHomeDir personalRatingsFile bestRank bestLambda bestNumiter")
  sys.exit(1)
}

// Set up the running environment
val conf = new SparkConf().setAppName("MovieLensALS").setMaster("local[1]")
val sc = new SparkContext(conf)

// Load parameter 2, i.e. the user's own ratings, generated by the rating page
val myRatings = loadRatings(args(1))
val myRatingsRDD = sc.parallelize(myRatings, 1)

// Sample data directory
val movieLensHomeDir = args(0)

// Load the sample rating data. The Timestamp in the last column, modulo 10, is the key, and the Rating is the value, i.e. (Int,Rating)
//ratings.dat raw data: user number, movie number, rating, rating timestamp
val ratings = sc.textFile(new File(movieLensHomeDir, "ratings.dat").toString).map { line =>
  val fields = line.split("::")
  (fields(3).toLong % 10, Rating(fields(0).toInt, fields(1).toInt, fields(2).toDouble))
}

//Load the movie lookup table (movie ID -> movie title)
//movies.dat raw data: movie number, movie name, movie category
val movies = sc.textFile(new File(movieLensHomeDir, "movies.dat").toString).map { line =>
  val fields = line.split("::")
  (fields(0).toInt, fields(1).toString())
}.collect().toMap

val numRatings = ratings.count()

val numUsers = ratings.map(_._2.user).distinct().count()

val numMovies = ratings.map(_._2.product).distinct().count()

// The sample rating table is divided into three parts by key value: training (60%, with the user's own ratings added), validation (20%) and test (20%)
// The data is used multiple times during the computation, so it is cached in memory
val numPartitions = 4

// Training sample data
val training = ratings.filter(x => x._1 < 6) //Records whose rating timestamp modulo 10 is less than 6 form the training sample
  .values
  .union(myRatingsRDD) //Note that ratings is (Int,Rating), just take value
  .repartition(numPartitions)
  .cache()

// Validation validation sample data
val validation = ratings.filter(x => x._1 >= 6 && x._1 < 8) //Records whose rating timestamp modulo 10 is at least 6 and less than 8 form the validation sample
  .values
  .repartition(numPartitions)
  .cache()

// test sample data
val test = ratings.filter(x => x._1 >= 8).values.cache() //Records whose rating timestamp modulo 10 is at least 8 form the test sample

val numTraining = training.count()

val numValidation = validation.count()

val numTest = test.count()

// The models with different parameters are trained and verified in the verification set to obtain the model with the best parameters
val ranks = List(8, 12) //Number of latent factors in the model
val lambdas = List(0.1, 10.0) //ALS regularization parameter
val numIters = List(10, 20) //Number of iterations

var bestModel: Option[MatrixFactorizationModel] = None //Best model
var bestValidationRmse = Double.MaxValue //Best validation root mean square error
var bestRank = args(2).toInt  //Best number of latent factors
var bestLambda = args(3).toDouble //Best ALS regularization parameter
var bestNumIter = args(4).toInt //Best number of iterations
//val model = ALS.train(training, bestRank, bestNumIter, bestLambda) // if the parameters are passed in from outside, use this statement to train the model
//If you use the list values of ranks, lambdas and numIters defined above for model training instead of the externally passed-in parameters, use the following for loop to train the models
for (rank <- ranks; lambda <- lambdas; numIter <- numIters) {
  val model = ALS.train(training, rank, numIter, lambda) //training set, number of latent factors, number of iterations, ALS regularization parameter
  //Evaluate the trained model: pass the model, the validation sample and the sample count into computeRmse
  val validationRmse = computeRmse(model, validation, numValidation) // RMSE of the model on the validation set

  if (validationRmse < bestValidationRmse) {
    bestModel = Some(model)
    bestValidationRmse = validationRmse
    bestRank = rank
    bestLambda = lambda
    bestNumIter = numIter
  }
}

// Use the best model to predict ratings on the test set, and compute the root mean square error between the predicted and actual ratings
val testRmse = computeRmse(bestModel.get, test, numTest)

//Create a naïve baseline and compare it with the best model
val meanRating = training.union(validation).map(_.rating).mean
val baselineRmse =
  math.sqrt(test.map(x => (meanRating - x.rating) * (meanRating - x.rating)).mean)
//Improvement of the best model over the baseline
val improvement = (baselineRmse - testRmse) / baselineRmse * 100
println("The best model improves the baseline by " + "%1.2f".format(improvement) + "%.")

// Recommend the top 5 movies the user is most likely to be interested in, excluding the movies the user has already rated
val myRatedMovieIds = myRatings.map(_.product).toSet

val candidates = sc.parallelize(movies.keys.filter(!myRatedMovieIds.contains(_)).toSeq)

val recommendations = bestModel.get
  .predict(candidates.map((1, _)))
  .collect()
  .sortBy(-_.rating)
  .take(5)

var i = 1
println("Movies recommended for you(user ID: Recommended movies ID: Recommended score: recommended movie title):")
recommendations.foreach { r =>
  println( r.user + ":"+ r.product + ":"+ r.rating+":" + movies(r.product))
  i += 1
}

val recommendations2 = bestModel.get
  .predict(candidates.map((2, _)))
  .collect()
  .sortBy(-_.rating)
  .take(5)
var i2 = 1
recommendations2.foreach { r =>
  println( r.user + ":"+ r.product + ":"+ r.rating+":" + movies(r.product))
  i2 += 1
}

val recommendations3 = bestModel.get
  .predict(candidates.map((3, _)))
  .collect()
  .sortBy(-_.rating)
  .take(5)
var i3 = 1
recommendations3.foreach { r =>
  println( r.user + ":"+ r.product + ":"+ r.rating+":" + movies(r.product))
  i3 += 1
}

val recommendations4 = bestModel.get
  .predict(candidates.map((4, _)))
  .collect()
  .sortBy(-_.rating)
  .take(5)
var i4 = 1
recommendations4.foreach { r =>
  println( r.user + ":"+ r.product + ":"+ r.rating+":" + movies(r.product))
  i4 += 1
}

sc.stop()

}

/**Compute the root mean square error between a data set's predicted ratings and actual ratings**/
//Inputs: the trained model, the evaluation sample, and the sample count
def computeRmse(model: MatrixFactorizationModel, data: RDD[Rating], n: Long): Double = {
val predictions = model.predict(data.map(x => (x.user, x.product))) // call the predict function
val mapuser = data.map(x => (x.user))
val mapproduct = data.map(x => (x.product))
val maprating = data.map(x => (x.rating))

// Output predictions and ratings
val predictionsAndRatings = predictions.map(x => ((x.user, x.product), x.rating))
  .join(data.map(x => ((x.user, x.product), x.rating)))
  .values
math.sqrt(predictionsAndRatings.map(x => (x._1 - x._2) * (x._1 - x._2)).reduce(_ + _) / n)

}

/**Load user rating file**/
def loadRatings(path: String): Seq[Rating] = {
val lines = Source.fromFile(path).getLines()
val ratings = lines.map { line =>
val fields = line.split("::")
Rating(fields(0).toInt, fields(1).toInt, fields(2).toDouble)
}.filter(_.rating > 0.0)
if (ratings.isEmpty) {
sys.error("No ratings provided.")
} else {
ratings.toSeq
}
}
}
Operation results:

3. Package the application as a JAR for the Spark environment

To deploy the application to the Spark environment, you need to use the IDEA tool to package the application and generate the application JAR package.
In the IDEA project interface, open the menu "File -> Project Structure".
Then, in the dialog that pops up (shown in the figure below), click "Artifacts", the green plus sign, "JAR", and "From modules with dependencies...".
Then, in the pop-up dialog (shown in the figure below), click the ellipsis button to the right of "Main Class", enter "MovieLensALS" in the search text box, and click OK. Then, back in the dialog shown below, set "Directory for META-INF/MANIFEST.MF" to the directory
"/home/linziyu/IdeaProjects/Spark_Recommend" and click "OK". In the "Output Layout" tab, delete all JAR packages starting with "Extracted", keep only "Spark_Recommend.jar" and the "'Spark_Recommend' compile output", and then click "OK". Finally, click the "Build" menu in the top menu bar, click "Build Artifacts..." in the submenu, and then click "Build" to start packaging. The final JAR package path is "~/IdeaProjects/Spark_Recommend/out/artifacts/Spark_Recommend_jar/Spark_Recommend.jar".

4. Submit the JAR package to Spark to run

The command is as follows:
cd /usr/local/spark
bin/spark-submit --class recommend.MovieLensALS ~/IdeaProjects/Spark_Recommend/out/artifacts/Spark_Recommend_jar/Spark_Recommend.jar /input_spark ~/Downloads/personalRatings.dat 10 5 10
In the command, five parameters are provided to the Spark_Recommend program. The first parameter "/input_spark" is a directory in the HDFS file system that contains the two files movies.dat and ratings.dat (if the directory or files do not exist, use HDFS commands to create the directory and upload the data files). The second parameter is the path of the personalRatings.dat file (here a file in the local Linux file system, not stored in HDFS). The third, fourth and fifth parameters are the number of latent factors, the ALS regularization parameter, and the number of iterations, respectively.

5. Use Node.js to display the results in a web page

1. Create project directory
In the Linux terminal, use the following command to create the project directory and complete the initialization:
$ cd ~                # enter the home directory of the current Linux user
$ mkdir mysparkapp    # create a directory
$ cd mysparkapp
$ npm init
After running "npm init" to initialize the project, the terminal prompts for the project's information and records it automatically in package.json. If you want quick development and do not want to enter the project information manually, just keep pressing "Enter" to accept the default configuration.
2. Install relevant modules
In the Linux terminal, continue with the following commands to install the express, jade and body-parser modules (note that --save starts with two dashes):
$ npm install express --save
$ npm install jade --save
$ npm install body-parser --save
The modules installed by the above commands are placed in the node_modules folder under the current project directory and recorded in the package.json file. When Node.js requires a module, it looks it up automatically in the node_modules folder.
3. Create a server
In the mysparkapp project directory, create a file named index.js; it is the entry point of the whole web application. The contents of the file are as follows:
const express = require('express')
const bodyParser = require('body-parser')
const spawnSync = require('child_process').spawnSync

const app = express();

//Set template engine
app.set('views','./views')
app.set('view engine', 'jade')
//Add body parser to parse the data from the post
app.use(bodyParser.urlencoded({extended: false}))
app.use(bodyParser.json())

app.get('/', function (req, res) {
res.render('index', {title: 'Film recommendation system', message: 'Database Laboratory of Xiamen University!'})
})

app.post('/',function(req, res){
const path = req.body.path.trim() || '/input_spark'
const myRatings = req.body.myRatings.trim() || '~/Downloads/personalRatings.dat'
const bestRank = req.body.bestRank.trim() || 10
const bestLambda = req.body.bestLambda.trim() || 5
const bestNumIter = req.body.bestNumIter.trim() || 10
let spark_submit = spawnSync('/usr/local/spark/bin/spark-submit', ['--class', 'recommend.MovieLensALS', '~/IdeaProjects/Spark_Recommend/out/artifacts/Spark_Recommend_jar/Spark_Recommend.jar', path, myRatings, bestRank, bestLambda, bestNumIter], { shell: true, encoding: 'utf8' })
res.render('index', {title: 'Film recommendation system', message: 'Database Laboratory of Xiamen University!', result: spark_submit.stdout})
})

const server = app.listen(3000, function () {
const host = server.address().address;
const port = server.address().port;

console.log('Example app listening at http://%s:%s', host, port);
});

The above code starts an HTTP server and listens for all connection requests on port 3000.
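For a quick check without the browser form, the same fields can be posted with curl (the values here are illustrative; the field names match the req.body accesses above):
$ curl -X POST http://localhost:3000/ -d 'path=/input_spark' -d 'myRatings=~/Downloads/personalRatings.dat' -d 'bestRank=10' -d 'bestLambda=5' -d 'bestNumIter=10'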

4. Add template file
Add a subdirectory named "views" in the current project directory, and add a Jade template file index.jade in the views directory. Specifically, enter the following commands in the Linux terminal:
$ cd ~/mysparkapp    # set the current directory
$ mkdir views        # create a views directory
$ cd views
$ vim index.jade     # use the vim editor to create a new index.jade file
Enter the following content in the index.jade file:
html
  head
    title!= title
  body
    h1!= message
    form(action='/', method='post')
      p Please enter the modeling parameters
      table(border=0)
        tr
          td Path of the sample data (default is /input_spark)
          td
            input(style='width:350px',placeholder='/input_spark',name='path')
        tr
          td Path of the user rating data (default is ~/Downloads/personalRatings.dat)
          td
            input(style='width:350px',placeholder='~/Downloads/personalRatings.dat',name='myRatings')
        tr
          td Number of latent factors:
          td
            input(placeholder='10',type='number',min='8',max='12',name='bestRank')
        tr
          td Regularization parameter:
          td
            input(placeholder='5',type='number',min='0',max='10',step='0.1',name='bestLambda')
        tr
          td Number of iterations:
          td
            input(placeholder='10',type='number',min='10',max='20',name='bestNumIter')
      input(type='submit')
      br
      textarea(rows='20', cols='40')!=result
Save the file and exit the vim editor.

5. Call the program and display the results in the web page
Enter the following command in the Linux terminal to start the HTTP server:
$ node index.js
Open a browser in the Linux system and visit "localhost:3000". The following page will appear:

Using Node.js to complete the movie recommendation system
Code:
Node.js project code
The myapp.js code is as follows:
/**
 * express receives the parameters passed by the HTML pages
 */
var express=require('express');
var bodyParser = require('body-parser')
const exec = require('child_process').exec
var app=express();
var mysql=require('mysql');
app.set('view engine', 'jade');
app.set('views', './views');
app.use(bodyParser.urlencoded({extended: false}))
app.use(bodyParser.json())
var userid;
var name;
var movieid = new Array(10);
/*
*Configure MySQL
*/
var connection = mysql.createConnection({
host : '127.0.0.1',
user : 'root',
password : '123456',
database : 'tuijian',
port:'3306'
});
connection.connect();

/**
 * Jump to the home page of the website
 */
app.get('/',function (req,res) {
    res.render('index',{title:'Film recommendation system'});
})
/**
 * Jump to login screen
 */
app.get('/login',function (req,res) {
  res.render('login',{title:'Sign in'});
})
/**
 * Realize the login verification function, and randomly read the movies in the database
 */
app.post('/login',function (req,res) {
    name=req.body.username.trim();
    var pwd=req.body.pwd.trim();
console.log('username:'+name+'password:'+pwd);

    var selectSQL = "select * from userinfo where username = '"+name+"' and password = '"+pwd+"'";
    connection.query(selectSQL,function (err,rs) {
	if(rs.length==0){
	    res.render('faile',{title:'Login failed'});
	}
	else{
            userid=rs[0].userid;
          	console.log(rs);
          	console.log('ok');
            var selectm = "SELECT * FROM movieinfo where movieid < 7000 ORDER BY rand() LIMIT 10";
            connection.query(selectm,function (err,rs) {
            for(var i=0;i<10;i++){
              movieid[i]=rs[i].movieid;
            }
          	console.log(movieid);
          	res.render('recommendtest',{title:'Recommended test',rs:rs,message:name});
            })
	}
    })
})
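Note that the SQL above is assembled by string concatenation, which is open to SQL injection. A safer variant, sketched here with the mysql module's placeholder support (not part of the original project code), would be:

var selectSQL = 'select * from userinfo where username = ? and password = ?';
connection.query(selectSQL, [name, pwd], function (err, rs) {
    if (err) throw err;
    // ...same handling of rs as above...
});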

/**
 * Jump to the registration page
 */     
app.get('/registerpage',function (req,res) {
  res.render('registerpage',{title:'register'});
})

/**
 * Realize the registration function
 */
app.post('/register',function (req,res) {
    name=req.body.username.trim();
    var  pwd=req.body.pwd.trim();
    var  user={username:name,password:pwd};
    connection.query('insert into userinfo set ?',user,function (err,rs) {
        if (err) throw  err;
        console.log('ok');
       res.render('ok',{title:'Welcome User',message:name});
    })
})

var  server=app.listen(3000,function () {
    console.log("userloginjade server start......");
})

/**
 * Jump to main page
 */ 
app.get('/index',function (req,res) {
  res.render('index',{title:'homepage'});
})


/**
 * Select movie
 */
app.post('/tijiao',function (req,res) {
    var rating = new Array(10);
    rating[0] = req.body.a;
    rating[1] = req.body.b;
    rating[2] = req.body.c;
    rating[3] = req.body.d;
    rating[4] = req.body.e;
    rating[5] = req.body.f;
    rating[6] = req.body.g;
    rating[7] = req.body.h;
    rating[8] = req.body.i;
    rating[9] = req.body.j;
    for(var i=0;i<10;i++)
    {
      if(rating[i]!=0)
      {
         var  user={userid:userid,movieid:movieid[i],rating:rating[i],ratingtime:1};
         connection.query('insert into ratings set ?',user,function (err,rs) {
         if (err) throw  err;
         console.log('ok');
         })
      }
    }
    const jarStr = '/usr/local/spark/bin/spark-submit --class "MoviesRecommond" /home/hadoop/MoviesRecommond.jar '+userid
    exec(jarStr, function(err, stdout, stderr){
      if(stderr){
        console.log('ok1111');
         var selectm = "select * from recommend";
         var movid = new Array(10);
         connection.query(selectm,function (err,rs) {
           for(var i=0;i<10;i++){
           movid[i]=rs[i].movieid;
           }
           console.log(movid[0]);
           var selectm = "select * from movieinfo where movieid in("+movid[0]+","+movid[1]+","+movid[2]+","+movid[3]+","+movid[4]+","+movid[5]+","+movid[6]+","+movid[7]+","+movid[8]+","+movid[9]+")";
           connection.query(selectm,function (err,rs) {
           res.render('recommend',{title:'Recommended results',rs:rs,message:name});
           })
         })
        
      }
    })
  
})

1. Login interface code
The login interface code is as follows:
html
  head
    title!= title
    style.
      body{
        background-image:url(https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1537261291133&di=c04553d39f158272a36be6e3ec0c8071&imgtype=0&src=http%3A%2F%2Fh.hiphotos.baidu.com%2Fzhidao%2Fpic%2Fitem%2Fc2fdfc039245d6885bc3be94a2c27d1ed21b2438.jpg);
      }
      #log{
        padding-top: 2px;
        margin-top: 10%;
        margin-left: 37%;
        background: white;
        width: 25%;
        height: 40%;
        text-align: center;
      }
  body
    div#log
      form(action='/login', method='post')
        h1 User login
        br
        span Account:
        input(type='text',name='username')
        br
        span Password:
        input(type='password',name='pwd')
        br
        br
        input(type='submit', value='login')
        br
        a(href='/registerpage', title='registration') Register
        br
        a(href='/index', title='home') Return to the home page

2. Registration interface code
The registration interface code is as follows:
html
  head
    title!= title
    style.
      body{
        background-image:url(https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1537261291133&di=c04553d39f158272a36be6e3ec0c8071&imgtype=0&src=http%3A%2F%2Fh.hiphotos.baidu.com%2Fzhidao%2Fpic%2Fitem%2Fc2fdfc039245d6885bc3be94a2c27d1ed21b2438.jpg);
      }
      #reg{
        padding-top: 2px;
        margin-top: 10%;
        margin-left: 37%;
        background: white;
        width: 25%;
        height: 40%;
        text-align: center;
      }
  body
    div#reg
      form(action='/register', method='post')
        h1 User registration
        br
        span Account:
        input(type='text',name='username')
        br
        span Password:
        input(type='password',name='pwd')
        br
        br
        input(type='submit', value='register')

3. Login failure code
The login failure interface code is as follows:
html
  head
    title!=title
    style.
      body{
        background-image:url(https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1537261291133&di=c04553d39f158272a36be6e3ec0c8071&imgtype=0&src=http%3A%2F%2Fh.hiphotos.baidu.com%2Fzhidao%2Fpic%2Fitem%2Fc2fdfc039245d6885bc3be94a2c27d1ed21b2438.jpg);
        text-align:center;
      }
  body
    h1 Login failed. Please try again
    a(href='/login', title='login') Return to login

4. Home page interface code
The home page interface code is as follows:
html
  head
    title!=title
    meta(charset='utf-8')
    meta(name='description')
    meta(name='keywords')
    meta(name='author')
    link(rel='shortcut icon', href='http://eduppp.cn/images/logo4.gif')
    link(rel='apple-touch-icon', href='http://eduppp.cn/images/logo.gif')
    style
      include css/index.css
    style(type='text/css').
      #frame { /* container (photo frame) for the picture carousel */
        position: absolute; /* absolute positioning makes it easy to position child elements */
        width: 1500px;
        height: 75%;
        overflow: hidden; /* photo-frame effect: only one picture is shown at a time */
        border-radius: 5px;
      }
      #dis { /* absolute positioning lets the li picture captions distribute automatically */
        position: absolute;
        left: -50px;
        top: -10px;
        opacity: 0.5;
      }
      #dis li {
        display: inline-block;
        width: 200px;
        height: 20px;
        margin: 0 650px;
        float: left;
        text-align: center;
        color: #fff;
        border-radius: 10px;
        background: #000;
      }
      #photos img {
        float: left;
        width: 1500px;
        height: 75%;
      }
      #photos { /* total width of all pictures; the carousel effect is achieved by shifting */
        position: absolute;
        z-index: 9;
        width: calc(1500px * 5); /* to change the number of pictures, also adjust the animation parameters below */
      }
      .play {
        animation: ma 20s ease-out infinite alternate;
      }
      @keyframes ma { /* each picture switches in two phases, shift and hold; the in-between effect can be customized */
        0%,20% { margin-left: 0px; }
        25%,40% { margin-left: -1500px; }
        45%,60% { margin-left: -3000px; }
        65%,80% { margin-left: -4500px; }
        85%,100% { margin-left: -6000px; }
      }
      .num {
        position: absolute;
        z-index: 10;
        display: inline-block;
        right: 10px;
        top: 550px;
        border-radius: 100%;
        background: #778899;
        width: 50px;
        height: 50px;
        line-height: 50px;
        cursor: pointer;
        color: #fff;
        background-color: rgba(0,0,0,0.5);
        text-align: center;
        opacity: 0.8;
      }
      .num:hover { background: #000; }
      .num:hover, #photos:hover { animation-play-state: paused; }
      .num:nth-child(2) { margin-right: 60px }
      .num:nth-child(3) { margin-right: 120px }
      .num:nth-child(4) { margin-right: 180px }
      .num:nth-child(5) { margin-right: 240px }
      #a1:hover ~ #photos { animation: ma1 .5s ease-out forwards; }
      #a2:hover ~ #photos { animation: ma2 .5s ease-out forwards; }
      #a3:hover ~ #photos { animation: ma3 .5s ease-out forwards; }
      #a4:hover ~ #photos { animation: ma4 .5s ease-out forwards; }
      #a5:hover ~ #photos { animation: ma5 .5s ease-out forwards; }
      @keyframes ma1 { 0%{margin-left:-1200px;} 100%{margin-left:-0px;} }
      @keyframes ma2 { 0%{margin-left:-1200px;} 100%{margin-left:-1500px;} }
      @keyframes ma3 { 100%{margin-left:-3000px;} }
      @keyframes ma4 { 100%{margin-left:-4500px;} }
      @keyframes ma5 { 100%{margin-left:-6000px;} }
  body
    div#navigation Team 9 film recommendation system
    div#logreg
      input(type='submit', value='login', onclick="window.location='/login'")
      input(type='submit', value='register', onclick="window.location='/registerpage'")
    div#mid
      #frame
        a#a5.num 5
        a#a4.num 4
        a#a3.num 3
        a#a2.num 2
        a#a1.num 1
        #photos.play
          img(src='http://img05.tooopen.com/products/20150130/44128217.jpg')
          img(src='http://image.17173.com/bbs/v1/2012/11/14/1352873759491.jpg')
          img(src='http://t1.27270.com/uploads/tu/201502/103/5.jpg')
          img(src='http://img.doooor.com/img/forum/201507/15/171203xowepc3ju9n9br9z.jpg')
          img(src='http://4493bz.1985t.com/uploads/allimg/170503/5-1F503140J0.jpg')
        ul#dis
          li Ring: Hobbit
          li Fairyland
          li Avatar
          li The Return of the Great Sage
          li Bomb Disposal Expert

5. Rating interface code
The rating interface code is as follows:
html
  head
    title!= title
    style
      include css/recommendtest.css
    style.
      span{
        width:295px;
        height:15px;
        line-height:15px;
        overflow:hidden;
        margin: 0 auto;
      }
      .movie select{
        margin: 0 auto;
        text-align:center;
        position:absolute;
      }
  body
    div#top Please select the movies you like below and rate them
    div#user
      | Welcome: #{message}
      input(type='submit', value='exit', onclick="window.location='/index'")
    form(action='/tijiao', method='post')
      div#mid
        div#movie
          img(src=rs[0].picture+"")
          span Movie name: #{rs[0].moviename}
          br
          select(name='a')
            option(value='0') Not selected
            option(value='1') 1
            option(value='2') 2
            option(value='3') 3
            option(value='4') 4
            option(value='5') 5
        div#movie
          img(src=rs[1].picture+"")
          span Movie name: #{rs[1].moviename}
          br
          select(name='b')
            option(value='0') Not selected
            option(value='1') 1
            option(value='2') 2
            option(value='3') 3
            option(value='4') 4
            option(value='5') 5
        div#movie
          img(src=rs[2].picture+"")
          span Movie name: #{rs[2].moviename}
          br
          select(name='c')
            option(value='0') Not selected
            option(value='1') 1
            option(value='2') 2
            option(value='3') 3
            option(value='4') 4
            option(value='5') 5
        div#movie
          img(src=rs[3].picture+"")
          span Movie name: #{rs[3].moviename}
          br
          select(name='d')
            option(value='0') Not selected
            option(value='1') 1
            option(value='2') 2
            option(value='3') 3
            option(value='4') 4
            option(value='5') 5
        div#movie
          img(src=rs[4].picture+"")
          span Movie name: #{rs[4].moviename}
          br
          select(name='e')
            option(value='0') Not selected
            option(value='1') 1
            option(value='2') 2
            option(value='3') 3
            option(value='4') 4
            option(value='5') 5
        div#movie
          img(src=rs[5].picture+"")
          span Movie name: #{rs[5].moviename}
          br
          select(name='f')
            option(value='0') Not selected
            option(value='1') 1
            option(value='2') 2
            option(value='3') 3
            option(value='4') 4
            option(value='5') 5
        div#movie
          img(src=rs[6].picture+"")
          span Movie name: #{rs[6].moviename}
          br
          select(name='g')
            option(value='0') Not selected
            option(value='1') 1
            option(value='2') 2
            option(value='3') 3
            option(value='4') 4
            option(value='5') 5
        div#movie
          img(src=rs[7].picture+"")
          span Movie name: #{rs[7].moviename}
          br
          select(name='h')
            option(value='0') Not selected
            option(value='1') 1
            option(value='2') 2
            option(value='3') 3
            option(value='4') 4
            option(value='5') 5
        div#movie
          img(src=rs[8].picture+"")
          span Movie name: #{rs[8].moviename}
          br
          select(name='i')
            option(value='0') Not selected
            option(value='1') 1
            option(value='2') 2
            option(value='3') 3
            option(value='4') 4
            option(value='5') 5
        div#movie
          img(src=rs[9].picture+"")
          span Movie name: #{rs[9].moviename}
          br
          select(name='j')
            option(value='0') Not selected
            option(value='1') 1
            option(value='2') 2
            option(value='3') 3
            option(value='4') 4
            option(value='5') 5
      div#buttom
        input(type='submit', value='submit')

6. Recommendation interface code

The recommendation interface code is as follows:
html
  head
    title!= title
    style
      include css/recommend.css
    style.
      img{
        border:0
      }
      body{
        behavior:url("csshover.htc");
        text-align:center;
      }
      #movie span{
        display:none;
        text-decoration:none;
        height:330px;
        /* line-height:2px; */
        overflow:hidden;
        text-align:left;
      }
      #movie:hover{
        cursor:pointer;
      }
      #movie:hover span {
        display:block;
        position:absolute;
        bottom:0;
        left:0;
        color:#FFF;
        width:295px;
        z-index:10;
        background:#000;
        filter:alpha(opacity=60);
        -moz-opacity:0.5;
        opacity:0.5;
      }
  body
    div#top Here are 10 films recommended for you
    div#user
      | Welcome: #{message}
      input(type='submit', value='exit', onclick="window.location='/index'")
    form(action='/tijiao', method='post')
      div#mid
        div#movie
          img(src=rs[0].picture+"")
          span
            | Movie name: #{rs[0].moviename}
            br
            | Movie rating: #{rs[0].averaging}
            br
            | Movie introduction: #{rs[0].description}
        div#movie
          img(src=rs[1].picture+"")
          span
            | Movie name: #{rs[1].moviename}
            br
            | Movie rating: #{rs[1].averaging}
            br
            | Movie introduction: #{rs[1].description}
        div#movie
          img(src=rs[2].picture+"")
          span
            | Movie name: #{rs[2].moviename}
            br
            | Movie rating: #{rs[2].averaging}
            br
            | Movie introduction: #{rs[2].description}
        div#movie
          img(src=rs[3].picture+"")
          span
            | Movie name: #{rs[3].moviename}
            br
            | Movie rating: #{rs[3].averaging}
            br
            | Movie introduction: #{rs[3].description}
        div#movie
          img(src=rs[4].picture+"")
          span
            | Movie name: #{rs[4].moviename}
            br
            | Movie rating: #{rs[4].averaging}
            br
            | Movie introduction: #{rs[4].description}
        div#movie
          img(src=rs[5].picture+"")
          span
            | Movie name: #{rs[5].moviename}
            br
            | Movie rating: #{rs[5].averaging}
            br
            | Movie introduction: #{rs[5].description}
        div#movie
          img(src=rs[6].picture+"")
          span
            | Movie name: #{rs[6].moviename}
            br
            | Movie rating: #{rs[6].averaging}
            br
            | Movie introduction: #{rs[6].description}
        div#movie
          img(src=rs[7].picture+"")
          span
            | Movie name: #{rs[7].moviename}
            br
            | Movie rating: #{rs[7].averaging}
            br
            | Movie introduction: #{rs[7].description}
        div#movie
          img(src=rs[8].picture+"")
          span
            | Movie name: #{rs[8].moviename}
            br
            | Movie rating: #{rs[8].averaging}
            br
            | Movie introduction: #{rs[8].description}
        div#movie
          img(src=rs[9].picture+"")
          span
            | Movie name: #{rs[9].moviename}
            br
            | Movie rating: #{rs[9].averaging}
            br
            | Movie introduction: #{rs[9].description}
      div#buttom
7. pom.xml code

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

<groupId>dblab</groupId>
<artifactId>WordCount</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
    <spark.version>2.1.0</spark.version>
    <scala.version>2.11</scala.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.1.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.1.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.40</version>
    </dependency>

</dependencies>

<build>
    <plugins>

        <plugin>
            <groupId>org.scala-tools</groupId>
            <artifactId>maven-scala-plugin</artifactId>
            <version>2.15.2</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>

        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.6.0</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.19</version>
            <configuration>
                <skip>true</skip>
            </configuration>
        </plugin>

    </plugins>
</build>
</project>

8. MoviesRecommond code

import java.sql.DriverManager
import java.util.Properties

import org.apache.log4j.{Level, Logger}
import org.apache.spark.mllib.recommendation.{ALS, MatrixFactorizationModel, Rating}
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.types._
import org.apache.spark.sql.{Row, SQLContext, SparkSession}
import org.apache.spark.{SparkConf, SparkContext}

object MoviesRecommond {

def main(args: Array[String]) {
//Get user id
val userid = if(args.size != 0) args(0).toInt else 6100
//val userid = 4;

Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)

//Create entry object
val conf = new SparkConf().setMaster("local[4]").setAppName("MoviesRecommond")
val sc = new SparkContext(conf)

//Total rating training data set, tuple format (args(1) + "/ratings.dat")
val ratingsList_Tuple = sc.textFile("file:///home/hadoop/Downloads/ratings.dat").map { lines =>
  val fields = lines.split("::")
  (fields(0).toInt, fields(1).toInt, fields(2).toDouble, fields(3).toLong % 10) //The timestamp column modulo 10 is kept here
}

//The total rating training data set is shaped like key-value pairs: the key is a number from 0 to 9 and the value is of type Rating
val ratingsTrain_KV = ratingsList_Tuple.map(x =>
  (x._4, Rating(x._1, x._2, x._3)))
//Print how many rating records we obtained from ratings.dat, and from how many users and movies
println("obtain " + ratingsTrain_KV.count()
  + "Data from " + ratingsTrain_KV.map(_._2.user).distinct().count()
  + "User in " + ratingsTrain_KV.map(_._2.product).distinct().count() + " movies")
// get 1000209 ratings from 6040 users on 3706 movies

//Extract data from mysql
val spark = SparkSession.builder().appName("MoviesRecommond").master("local[2]").getOrCreate()
val jdbcDF = spark.read.format("jdbc").
  option("url", "jdbc:mysql://localhost:3306/personrating").
  option("driver","com.mysql.jdbc.Driver").
  option("dbtable", "ratings").
  option("user", "root").
  option("password", "123").load()
val myRatedData_Rating = jdbcDF.where("userid="+userid).rdd.map(x => Rating(x(0).toString.toInt,x(2).toString.toInt,x(3).toString.toDouble))
//jdbcDF.show();

//Set number of partitions
val numPartitions = 3
//Training data
val traningData_Rating = ratingsTrain_KV.filter(_._1 < 8)
  .values//Note that since the original data set is in the form of pseudo key value pairs, as training data, only RDD[Rating] type data, i.e. values set, is required
  .union(myRatedData_Rating)//Use the union operation to add my scoring data into the training set as the benchmark for training
  .repartition(numPartitions)
  .cache()
//test data
val testData_Rating = ratingsTrain_KV.filter(x=>x._1 >= 8 && x._1 <= 9)
  .values
  .cache()

//Print out how many records are used for training data set
println("Training data : " + traningData_Rating.count()+ " test data : " + testData_Rating.count())
// training data's num : 821160 validate data's num : 198919 test data's num : 199049

//Start model training and select the best model
    val ranks = List(8, 22)//Number of latent factors
    val lambdas = List(0.1, 10.0)//Regularization parameters
    val iters = List(5, 7)//Number of iterations
    var bestModel: MatrixFactorizationModel = null
    var bestValidateRnse = Double.MaxValue
    var bestRank = 0
    var bestLambda = -1.0
    var bestIter = -1
    //The three-level nested loop generates eight combinations of rank, lambda and iter; each combination trains a model, the RMSE of each of the eight models is computed, and the smallest one is recorded as the best model
    for (rank <- ranks; lam <- lambdas; iter <- iters) {
      val model = ALS.train(traningData_Rating, rank, iter, lam)
      //rnse computes the root mean square error; it is defined at the bottom (note that this version evaluates on the training set)
      val validateRnse = rnse(model, traningData_Rating, traningData_Rating.count())
      println("validation = " + validateRnse
        + " for the model trained with rank = " + rank
        + " lambda = " + lam
        + " and numIter" + iter)
      if (validateRnse < bestValidateRnse) {
        bestModel = model
        bestValidateRnse = validateRnse
        bestRank = rank
        bestLambda = lam
        bestIter = iter
      }
    }
//val bestModel = ALS.train(traningData_Rating, 22, 7, 0.1)

    //Apply the best model to the test data set
    val testDataRnse = rnse(bestModel, testData_Rating, testData_Rating.count())
    println("The best test model is in rank=" + bestRank + " and lambda = " + bestLambda
      + " and numIter = " + bestIter + " Get the variance of the test set data=" + testDataRnse)


//Movie data in the format (1,Toy Story (1995),Animation|Children's|Comedy), (args(1) + "/movies.dat")
val movieList_Tuple = sc.textFile("file:///home/hadoop/Downloads/movies.dat").map { lines =>
  val fields = lines.split("::")
  (fields(0).toInt, fields(1), fields(2))
}


//Map type. The key is id and the value is name
val movies_Map = movieList_Tuple.map(x =>
  (x._1, x._2)).collect().toMap


println("Here are the 10 films recommended:")
//Get the id of the movie I've seen
val myRatedMovieIds = myRatedData_Rating.map(_.product).collect().toSet
//Filter these movies from the movie list, and the rest of the movie list will be sent to the model to predict the possible score of each movie
val recommondList = sc.parallelize(movies_Map.keys.filter(!myRatedMovieIds.contains(_)).toSeq)
//Sort the predictions by score from high to low and take the top 10 records for output
val recommondRdd = bestModel.predict(recommondList.map((userid, _)))
  .collect()
  .sortBy(-_.rating)
  .take(10)
recommondRdd.foreach {
  println
}
//Next, turn the recommendation array back into an RDD so it can be written out
val resultRdd = spark.sparkContext.parallelize(recommondRdd)
//Generate field, schema as header
val schema = StructType(List(
  StructField("userid", IntegerType, false),
  StructField("movieid", IntegerType, false),
  StructField("tating",FloatType , false)))
//Parse each line element of resultRdd
val rowRDD = resultRdd.map(p => Row(p.user.toInt, p.product.toInt, p.rating.toFloat))
//Combine header and table data
val resultDF = spark.createDataFrame(rowRDD,schema)
//Create a prop variable to save JDBC connection parameters
val prop = new Properties()
prop.put("user", "root")
prop.put("password", "123")
prop.put("driver","com.mysql.jdbc.Driver")
//Delete original data
val connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/personrating","root","123")
val statement = connection.createStatement()
statement.executeUpdate("delete from recommend where userid="+userid)
//Write the recommendation data in append mode, i.e. the data is appended to the recommend table
resultDF.write.mode("append").
  jdbc("jdbc:mysql://localhost:3306/personrating?useSSL=false", "personrating.recommend", prop)

}
//Function that computes the root mean square error (RMSE)
def rnse(model: MatrixFactorizationModel, predictionData: RDD[Rating], n: Long): Double = {
//According to the parameter model, the validation data set is predicted
val prediction = model.predict(predictionData.map(x => (x.user, x.product)))
//After joining the prediction results with the validation data set, compute the RMSE of the ratings and return it
val predictionAndOldRatings = prediction.map(x => ((x.user, x.product), x.rating))
.join(predictionData.map(x => ((x.user, x.product), x.rating))).values
math.sqrt(predictionAndOldRatings.map(x => (x._1 - x._2) * (x._1 - x._2)).reduce(_ + _) / n)
}
}

Screenshot of project operation:
Home page:

Registration page:

Login page:

Rating page:

Submission page:

Recommendation page:

summary

1. Node.js

"SyntaxError: Block-scoped declarations (let, const, function, class) not yet supported outside strict mode."
The cause of this error is that the Node version was too low. I had originally installed Node directly through Ubuntu without specifying a version, so I removed everything that had been installed and downloaded Node again.
Remove:
# uninstall via apt-get
sudo apt-get remove --purge npm
sudo apt-get remove --purge nodejs
sudo apt-get remove --purge nodejs-legacy
sudo apt-get autoremove
Download:
sudo wget https://nodejs.org/download/release/v12.18.3/node-v12.18.3-linux-x64.tar.gz
Decompress:
sudo tar -zxvf node-v12.18.3-linux-x64.tar.gz
Move it to your environment directory:
sudo mv node-v12.18.3-linux-x64 /usr/local
Create symbolic links:
sudo ln -s /usr/local/node-v12.18.3-linux-x64/bin/node /usr/local/bin/node
sudo ln -s /usr/local/node-v12.18.3-linux-x64/bin/npm /usr/local/bin/npm
Check the version information:

Building the server with Node again after that worked without problems.
I had rarely used Node as a build tool before; this project let me learn about Node commands and usage scenarios. About Node.js: JavaScript is a programming language created at Netscape (as a scripting tool for manipulating web pages in its browser, Netscape Navigator). Part of Netscape's business model was selling web servers, including an environment called Netscape LiveWire that could create dynamic pages using server-side JavaScript.
As browsers competed to offer users the best performance, JavaScript engines kept getting better. The development teams behind the mainstream browsers worked to provide better JavaScript support and to make JavaScript run faster. Thanks to this competition, the V8 engine used by Node.js (also known as Chrome V8, the open-source JavaScript engine of the Chromium project) improved significantly. Node.js happened to be built in the right place at the right time, but luck is not the only reason it is popular today: it introduced many innovative ideas and approaches to server-side JavaScript development, which have helped many developers.

2. Project conclusion
This project builds a movie recommendation system around a recommendation model based on the ALS collaborative filtering algorithm and applies it to movie recommendation. Recommendation performance is optimized by predicting users' satisfaction ratings: the rating results determine users' preferences for movies, and suitable movie types are recommended to users accordingly. With the rapid development of the film and television industry, the number of movies grows sharply every year, which places higher demands on video recommendation; this system provides a personalized recommendation function and has a certain commercial value.

Keywords: Big Data Hadoop Spark
