Recently I've been working on integrating GraphQL into our project. The motivation: our team's back end follows the RESTful convention, so redundant fields show up in almost every query, and stripping those redundant fields out is tedious, low-value work for the back-end developers. They are also not very interested in the BFF layer, since data aggregation holds little for them and exists entirely to serve the front end. And whenever the front end needs a new field, the back end has to change the interface as well. So we introduced Node + GraphQL to take over the BFF layer from the back-end developers. GraphQL's advantage on the query side is that the front end actively controls which fields it fetches (as long as those fields are exposed). There are two ways to integrate GraphQL:
- Back-end integration (Java interface (RESTful or GraphQL) -> front end)
- A front-end middle service layer (Java interface -> Node.js middle layer (GraphQL) -> front end)
With the first approach, the back end has to change more. Reworking the interface specification to suit the front end may be too expensive, and the back-end developers are unlikely to welcome the extra workload. So we chose the second approach and introduced a Node.js middle layer to handle request forwarding.
First, proxy the front end to the local Node.js service, using webpack's proxy configuration directly:
```javascript
proxy: {
  '/api': {
    target: 'http://localhost:8080/',
    changeOrigin: true,
  },
  '/local': {
    target: 'http://localhost:8080/',
    changeOrigin: true,
    pathRewrite: { '^/local': '' },
  },
},
```
The proxy has two configurations: requests prefixed with '/api' go straight through to the back end, while requests prefixed with '/local' are processed by the Node middle layer. Why two configurations? Because not all requests need to go through GraphQL. You will discover its pros and cons as you use it; whether to introduce it into your project depends on how much value it can add.
With these two configurations in place, requests matching either prefix are proxied to port 8080, where the local Node service listens. Next, configure the Node middle layer.
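One detail worth noting: the pathRewrite entry means the '/local' prefix is stripped from the URL before the request reaches the Node service. A minimal sketch of that rewrite rule (the helper and sample paths below are hypothetical, just to show the effect):

```javascript
// webpack-dev-server applies pathRewrite as a regex replacement on the URL path;
// '^/local' is replaced with '', so the middle layer sees the bare route
const pathRewrite = { '^/local': '' };

const rewrite = (path, rules) =>
  Object.entries(rules).reduce(
    (p, [pattern, replacement]) => p.replace(new RegExp(pattern), replacement),
    path,
  );

console.log(rewrite('/local/graphql', pathRewrite)); // → /graphql
console.log(rewrite('/api/list', pathRewrite));      // → /api/list (unchanged)
```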
Configuring the front-end middle service layer
The middle service layer is built with koa2 (you could equally use express or another framework). GraphQL is integrated via the koa-graphql middleware:
```javascript
const Koa = require('koa');
const koaStatic = require('koa-static');
const views = require('koa-views');
const koaBody = require('koa-body');
const path = require('path');
const mount = require('koa-mount');
const { graphqlHTTP } = require('koa-graphql');
const { makeExecutableSchema } = require('graphql-tools');

const loggerMiddleware = require('./middleware/logger');
const errorHandler = require('./middleware/errorHandler');
const responseWrapperMiddleware = require('./middleware/responseWrapper');
// const decoratorRequest = require('./middleware/decoratorRequest');
const axiosRequest = require('./middleware/axiosRequest');
const accessToken = require('./middleware/accessToken');
const apiProxy = require('./middleware/apiProxy');
const typeDefs = require('./graphql/typeDefs');
const resolvers = require('./graphql/resolvers');
const router = require('./routes/_router');
const { APP_KEYS, API_HOST, APP_ID, APP_SECRET } = require('./config');

const port = process.env.PORT || 8080;
const distPath = path.join(__dirname, '/dist');

const getSchema = (...rst) => {
  const schema = makeExecutableSchema({
    typeDefs: typeDefs,
    resolvers: resolvers(...rst),
  });
  return schema;
};

const app = new Koa();

// logger configuration
app.use(loggerMiddleware());

// set the static resource directory
app.use(
  koaStatic(path.resolve(__dirname, './dist'), {
    index: false,
    maxage: 60 * 60 * 24 * 365,
  }),
);

// general app configuration for each environment
// cookie signing keys
app.keys = APP_KEYS;

// set the template engine to ejs
app.use(
  views(distPath, {
    map: {
      html: 'ejs',
    },
  }),
);

// exception handling
app.use(errorHandler);

// parse req.body
app.use(koaBody({ multipart: true }));

// wrap the response of each request
app.use(responseWrapperMiddleware());

// request helper
app.use(
  axiosRequest({
    baseURL: `${API_HOST}/audit`,
  }),
);

// fetch the back end's accessToken
app.use(
  accessToken({
    appId: APP_ID,
    appSecret: APP_SECRET,
  }),
);

// /api requests from the front end are forwarded directly to the back end;
// authentication and parameter handling are unified internally
app.use(
  apiProxy({
    prefix: '/api',
  }),
);

// koa-graphql middleware
app.use(
  mount(
    '/graphql',
    graphqlHTTP(async (request, response, ctx, graphQLParams) => ({
      schema: getSchema(request, response, ctx, graphQLParams),
      graphiql: true,
    })),
  ),
);

// routes
app.use(router.routes());
app.use(router.allowedMethods());

app.listen(port, function() {
  console.log(
    `\n[${
      process.env.NODE_ENV === 'production' ? 'production' : 'development'
    }] app server listening on port: ${port}\n`,
  );
});
```
Focus on the GraphQL configuration; the rest is conventional koa middleware setup:
```javascript
const getSchema = (...rst) => {
  const schema = makeExecutableSchema({
    typeDefs: typeDefs,
    resolvers: resolvers(...rst),
  });
  return schema;
};
```
This generates the schema GraphQL needs. typeDefs is the GraphQL type definition, which constrains types through the schema; resolvers is the interpreter, i.e. how the types you define are actually resolved.
For example:
Suppose your typeDefs is defined as follows (it is a string):
```javascript
const typeDefs = `
  type ExportItem {
    applicantStatus: String
    approving: [String]
    approvingMulitPassType: String
    auditFlowId: String
    bizName: String
    createdAt: Int
    createdBy: Int
    createdByName: String
    deleted: Boolean
    finishTime: Int
    groupId: String
    groupName: String
    id: String
    showApplyId: String
    templateId: String
    templateName: String
    updatedAt: Int
    updatedBy: Int
    updatedByName: String
    auditFlowForwardType: String
    uiConfig: String
    templateDesc: String
  }

  input QueryExportListParams {
    pageIndex: Int
    pageSize: Int
    finishedTimeBegin: Int
    finishedTimeEnd: Int
    showApplyId: String
    auditFlowId: String
    bizName: String
    initiatorEmployeeId: Int
    status: String
  }

  type Query {
    exportList(params: QueryExportListParams): [ExportItem]
    exportDetail(id: String): ExportItem
  }
`;
```
Except for Query, which is a built-in GraphQL keyword, everything here is defined by us. Query is the top-level type in GraphQL; besides Query, the other commonly used top-level type is Mutation. GraphQL's convention is that all read operations go under Query, while write operations, such as additions and modifications, go under Mutation. In fact everything would still resolve if you put all operations under Query, or all under Mutation, but as a matter of good practice, keeping write operations in Mutation is better. What does the definition above mean? Let's start with the inside of Query:
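For illustration, a write operation would sit under a Mutation type. The field and input names below are hypothetical, not part of this project's schema, and just show the shape of a Mutation definition:

```graphql
input CreateExportParams {
  bizName: String
  templateId: String
}

type Mutation {
  createExport(params: CreateExportParams): ExportItem
}
```

A Mutation field gets its own resolver under a `Mutation` key in resolvers, exactly like Query fields do.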
```graphql
type Query {
  exportList(params: QueryExportListParams): [ExportItem]
  exportDetail(id: String): ExportItem
}
```
This says we have defined two queries, exportList and exportDetail. exportList accepts a parameter named params of type QueryExportListParams and returns an array whose items are of type ExportItem. exportDetail accepts an id parameter of type String and returns an ExportItem. ExportItem is a data type we defined ourselves, and QueryExportListParams is a parameter type we defined ourselves; parameter types are input types and must be declared with the input keyword. So where are these type definitions implemented? The implementation lives in resolvers, and each field in the type definition must have a one-to-one corresponding resolver.
So resolvers looks like this:
```javascript
// resolvers is a factory: getSchema passes the request, response and koa ctx in
// (see resolvers(...rst) above), so each field resolver can forward through the
// axios instance mounted on ctx
const resolvers = (request, response, ctx) => ({
  Query: {
    exportList: async (_, { params }) => {
      const res = await ctx.axios({
        url: '/data/export/all',
        method: 'get',
        params,
        headers: request.headers, // forward the incoming request headers
      });
      return res.data;
    },
    exportDetail: async (_, { id }) => {
      const res = await ctx.axios({
        url: `/applicant/byId/${id}`,
        method: 'get',
        headers: request.headers,
      });
      return res.data;
    },
  },
});
```
The resolver holds the implementations of exportList and exportDetail from the type definition. Inside a resolver the data can come from anywhere: a database, other interfaces, and so on. Here we are doing middle-layer forwarding, so we simply use axios to forward to the back end, consuming the parameters declared in the type definition.
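Because a resolver is just an async function receiving (parent, args, ...), it can be exercised directly without a server. A minimal sketch, where makeResolvers and the mocked ctx.axios are hypothetical stand-ins for the real middle layer's setup:

```javascript
// A simplified resolver factory; in the real app ctx.axios is the configured
// axios instance, here it is mocked so the resolver can be called directly
const makeResolvers = (ctx) => ({
  Query: {
    exportDetail: async (_, { id }) => {
      const res = await ctx.axios({ url: `/applicant/byId/${id}`, method: 'get' });
      return res.data;
    },
  },
});

// mock: echo the id from the URL back as the data payload
const mockCtx = {
  axios: async ({ url }) => ({ data: { id: url.split('/').pop(), bizName: 'demo' } }),
};

makeResolvers(mockCtx)
  .Query.exportDetail(null, { id: 'abc' })
  .then((item) => console.log(item.id)); // → abc
```

The second resolver argument is the args object, which is why `{ id }` destructures the query parameter declared in typeDefs.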
Once configured, start the middle-layer service. When the GraphQL setup takes effect, it exposes an endpoint at the /graphql path; any GraphQL query should be sent to that path. For example, a query from the front end looks like this:
```javascript
post('/graphql', {
  query: `
    query ExportList($params: QueryExportListParams) {
      exportList(params: $params) {
        id
      }
    }
  `,
  variables: {
    params: {
      finishedTimeBegin: finishedTime ? +moment(finishedTime[0]).startOf('day') : void 0,
      finishedTimeEnd: finishedTime ? +moment(finishedTime[1]).endOf('day') : void 0,
      ...rst,
    },
  },
});
```
QueryExportListParams is the parameter type defined in the middle layer; variables.params is the parameter value passed through to the resolvers.
```graphql
exportList(params: $params) {
  id
}
```
This means the query returns a list whose items contain only the id field. A list comes back because the type definition declares that this query returns one:
```graphql
type Query {
  exportList(params: QueryExportListParams): [ExportItem]
  exportDetail(id: String): ExportItem
}
```
Here we defined exportList's return type as a list whose item type is ExportItem, so the query doesn't need to say whether it is fetching a list: the return types are defined in advance. What we control is which fields come back. For any field contained in ExportItem, we can decide whether to fetch it. For example, as above:
```graphql
exportList(params: $params) {
  id
}
```
This means we only take the id field, so the returned data contains only id. If we need other fields, such as groupName, we write:
```graphql
exportList(params: $params) {
  id
  groupName
}
```
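The net effect is that the server resolves the data and the GraphQL executor keeps only the selected fields. A toy sketch of that field-selection effect (this is not how graphql-js is implemented internally, just an illustration of the behavior, with a made-up sample object):

```javascript
// The default field resolver simply reads the matching property, so a selection
// set like `{ id groupName }` behaves like picking those keys off the object
const applySelection = (obj, fields) =>
  Object.fromEntries(fields.map((f) => [f, obj[f]]));

const exportItem = { id: '1', groupName: 'ops', bizName: 'audit', deleted: false };

console.log(applySelection(exportItem, ['id', 'groupName']));
// → { id: '1', groupName: 'ops' }
```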
As long as a field is in the ExportItem type we defined, we can control whether to fetch it; if you query a field that is not defined in the server's GraphQL schema, an error is raised. Another nice thing about GraphQL queries is directives, which make the BFF layer even more capable (I'll cover them next time).
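As a small taste, the spec's built-in @include and @skip directives already let the client toggle fields at query time without changing the schema:

```graphql
query ExportList($params: QueryExportListParams, $withGroup: Boolean!) {
  exportList(params: $params) {
    id
    groupName @include(if: $withGroup)
  }
}
```

Passing `withGroup: false` in variables drops groupName from the response entirely.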