FastDFS concept
- FastDFS is an open-source, lightweight distributed file system that manages files. Its main functions are:
- File storage
- File synchronization
- File access (file upload, file download)
- It solves the problems of mass storage and load balancing, and is especially suitable for online services that use files as the carrier, such as photo album and video websites
- FastDFS is tailor-made for the Internet: it fully considers mechanisms such as redundant backup, load balancing, and linear capacity expansion, and emphasizes high availability and high performance. With FastDFS it is easy to build a high-performance file server cluster that provides file upload, download, and other services
FastDFS file system architecture
- The FastDFS server has two roles:
- Tracker: mainly performs scheduling and load balancing for file access
- Both trackers and storage nodes can consist of one or more servers, and servers can be added or taken offline at any time without affecting online services
- All tracker servers are peers; they can be added or removed at any time according to server load
- Storage node: stores files and performs all file-management functions:
- Stores files
- Synchronizes files
- Provides the file access interface
- FastDFS also manages file metadata. File metadata is a list of the file's attributes and can contain multiple key-value pairs
- File metadata: attributes of the file, expressed as key-value pairs, e.g. width=1024 and height=768 for an image (these pairs reappear in the upload sketch below)
- To support large capacity, storage nodes are organized into volumes (also called groups)
- The storage system consists of one or more volumes. Files in different volumes are independent of each other, and the total capacity of all volumes is the capacity of the whole storage system
- A volume can consist of one or more storage servers. All storage servers within a volume hold the same files; the multiple servers in a volume provide redundant backup and load balancing
- When a server is added to a volume, the system automatically synchronizes the existing files to it; once synchronization completes, the system automatically switches the new server into online service
- When storage space is insufficient or about to run out, volumes can be added dynamically: just add one or more servers and configure them as a new volume to expand the capacity of the storage system
- A FastDFS file identifier consists of two parts:
- Volume name
- File name
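For example, the identifier returned by the upload test later in this document splits as follows (the value is illustrative):

group1/M00/00/00/wKliyyfhHHkjsio986777
# Volume (group) name: group1
# File name: M00/00/00/wKliyyfhHHkjsio986777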
High availability requires crash-recovery capability, so the service cluster must support synchronization; in addition, load balancing is required
Upload interaction process
- The client asks the tracker for a storage server to upload to; no extra parameters are required
- The tracker returns an available storage server
- The client communicates directly with that storage server to complete the file upload (see the sketch after this list)
The client is the caller that uses FastDFS. The client is itself a server: its calls to the tracker and to the storage are all server-to-server calls
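These three steps map directly onto the fastdfs-client-java API installed later in this document. A minimal sketch (file content and metadata values are illustrative; the client.conf path is the one configured in the Docker section below):

import org.csource.common.NameValuePair;
import org.csource.fastdfs.*;

public class UploadSketch {
    public static void main(String[] args) throws Exception {
        // Load the tracker address from the client configuration
        ClientGlobal.init("/etc/fdfs/client.conf");

        // Step 1 + 2: ask the tracker for an available storage server
        TrackerClient trackerClient = new TrackerClient();
        TrackerServer trackerServer = trackerClient.getConnection();
        StorageServer storageServer = trackerClient.getStoreStorage(trackerServer);

        // File metadata expressed as key-value pairs (see the concept section above)
        NameValuePair[] metaList = new NameValuePair[]{
            new NameValuePair("width", "1024"),
            new NameValuePair("height", "768")
        };

        // Step 3: talk to the storage server directly to upload the file
        StorageClient1 storageClient1 = new StorageClient1(trackerServer, storageServer);
        String fileId = storageClient1.upload_file1("hello".getBytes(), "txt", metaList);
        System.out.println(fileId); // e.g. group1/M00/00/00/...
    }
}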
Download interaction process
- The client asks the tracker for the storage server holding the file; the parameter is the file ID (volume name and file name)
- The tracker returns an available storage server
- The client communicates directly with that storage server to complete the file download (see the sketch after this list)
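Download follows the same pattern; a minimal sketch under the same assumptions (the file ID value is illustrative):

import org.csource.fastdfs.*;

public class DownloadSketch {
    public static void main(String[] args) throws Exception {
        ClientGlobal.init("/etc/fdfs/client.conf");

        // Step 1 + 2: ask the tracker; the storage server is resolved from the file ID
        TrackerClient trackerClient = new TrackerClient();
        TrackerServer trackerServer = trackerClient.getConnection();
        StorageClient1 storageClient1 = new StorageClient1(trackerServer, null);

        // Step 3: fetch the file directly from storage by its file ID (volume name + file name)
        byte[] data = storageClient1.download_file1("group1/M00/00/00/wKliyyfhHHkjsio986777");
        System.out.println(data == null ? "not found" : data.length + " bytes");
    }
}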
FastDFS combined with Nginx
- When deploying a distributed file system with FastDFS, files are uploaded, downloaded, and deleted through the FastDFS client API, while HTTP access is provided through FastDFS's built-in HTTP server. However, FastDFS's HTTP service is relatively simple and cannot provide high-performance features such as load balancing, so the fastdfs-nginx-module is needed to make up for this deficiency
- FastDFS stores files on storage servers via the tracker server, but servers in the same group need to replicate files among themselves. If replication lags, the fastdfs-nginx-module can redirect the connection to the source server to fetch the file, avoiding client errors caused by replication delay
Installing FastDFS based on Docker
Environmental preparation:
- libfastcommon: a common-functions library split out from FastDFS
- FastDFS: FastDFS itself
- fastdfs-nginx-module: the module that integrates FastDFS with Nginx
- Nginx: nginx 1.15.4
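The Dockerfile below expects these packages as tarballs in the environment directory. They could be fetched roughly as follows (download URLs, tag names, and the libfastcommon version are assumptions; verify them against the upstream repositories, and note that the Dockerfile expects the archives to extract to fastdfs-5.11, fastdfs-nginx-module, libfastcommon, and nginx-1.15.4, so repack them if the extracted directory names differ):

cd /usr/local/docker/fastdfs/environment
wget -O fastdfs-5.11.tar.gz https://github.com/happyfish100/fastdfs/archive/V5.11.tar.gz
wget -O fastdfs-nginx-module_v1.16.tar.gz https://github.com/happyfish100/fastdfs-nginx-module/archive/v1.16.tar.gz
wget -O libfastcommon.tar.gz https://github.com/happyfish100/libfastcommon/archive/V1.0.38.tar.gz
wget http://nginx.org/download/nginx-1.15.4.tar.gz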
Create working directory:
- Create them in Linux:

mkdir -p /usr/local/docker/fastdfs/environment

- /usr/local/docker/fastdfs: stores the docker-compose.yml configuration file and the FastDFS data volume
- /usr/local/docker/fastdfs/environment: stores the Dockerfile and the environment files required by FastDFS
Create a Dockerfile in the /usr/local/docker/fastdfs/environment directory:
# Base image (assumed: Ubuntu 16.04 "xenial", matching the apt sources below)
FROM ubuntu:16.04

# Update data source
WORKDIR /etc/apt
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse' > sources.list
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse' >> sources.list
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse' >> sources.list
RUN echo 'deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse' >> sources.list
RUN apt-get update

# Install dependencies
RUN apt-get install make gcc libpcre3-dev zlib1g-dev --assume-yes

# Copy the packages
ADD fastdfs-5.11.tar.gz /usr/local/src
ADD fastdfs-nginx-module_v1.16.tar.gz /usr/local/src
ADD libfastcommon.tar.gz /usr/local/src
ADD nginx-1.15.4.tar.gz /usr/local/src

# Install libfastcommon
WORKDIR /usr/local/src/libfastcommon
RUN ./make.sh && ./make.sh install

# Install FastDFS
WORKDIR /usr/local/src/fastdfs-5.11
RUN ./make.sh && ./make.sh install

# Configure the FastDFS tracker
ADD tracker.conf /etc/fdfs
RUN mkdir -p /fastdfs/tracker

# Configure FastDFS storage
ADD storage.conf /etc/fdfs
RUN mkdir -p /fastdfs/storage

# Configure the FastDFS client
ADD client.conf /etc/fdfs

# Configure fastdfs-nginx-module
ADD config /usr/local/src/fastdfs-nginx-module/src

# Integrate FastDFS with Nginx
WORKDIR /usr/local/src/nginx-1.15.4
RUN ./configure --add-module=/usr/local/src/fastdfs-nginx-module/src
RUN make && make install
ADD mod_fastdfs.conf /etc/fdfs
WORKDIR /usr/local/src/fastdfs-5.11/conf
RUN cp http.conf mime.types /etc/fdfs/

# Configure Nginx
ADD nginx.conf /usr/local/nginx/conf

COPY entrypoint.sh /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

WORKDIR /
EXPOSE 8888
CMD ["/bin/bash"]
- Create an entrypoint.sh in /usr/local/docker/fastdfs/environment; the script can only be used after making it executable with chmod +x entrypoint.sh (commands shown after the script)
#!/bin/sh
/etc/init.d/fdfs_trackerd start
/etc/init.d/fdfs_storaged start
/usr/local/nginx/sbin/nginx -g 'daemon off;'
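Make the script executable on the host, as noted above:

cd /usr/local/docker/fastdfs/environment
chmod +x entrypoint.sh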
Related configuration files
- tracker.conf: FastDFS tracker configuration. Path in the container: /etc/fdfs. Modify:
base_path=/fastdfs/tracker
- storage.conf: FastDFS storage node configuration. Path in the container: /etc/fdfs. Modify:

base_path=/fastdfs/storage
store_path0=/fastdfs/storage
tracker_server=192.168.32.255:22122
http.server_port=8888
- client.conf: FastDFS client configuration. Path in the container: /etc/fdfs. Modify:

base_path=/fastdfs/tracker
tracker_server=192.168.32.255:22122
- config: fastdfs-nginx-module configuration file. Path in the container: /usr/local/src/fastdfs-nginx-module/src. Modify:

# Before modification
CORE_INCS="$CORE_INCS /usr/local/include/fastdfs /usr/local/include/fastcommon/"
CORE_LIBS="$CORE_LIBS -L/usr/local/lib -lfastcommon -lfdfsclient"

# After modification
CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"
CORE_LIBS="$CORE_LIBS -L/usr/lib -lfastcommon -lfdfsclient"
- mod_fastdfs.conf: fastdfs-nginx-module configuration file. Path in the container: /etc/fdfs (per the Dockerfile above). Modify:

connect_timeout=10
tracker_server=192.168.32.255:22122
url_have_group_name=true
store_path0=/fastdfs/storage
- nginx.conf: Nginx configuration file. Path in the container: /usr/local/nginx/conf (per the Dockerfile above). Modify:

user root;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8888;
        server_name localhost;

        location ~ /group([0-9])/M00 {
            ngx_fastdfs_module;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Start container
- docker-compose.yml: create a docker-compose.yml in the /usr/local/docker/fastdfs directory:

version: '3.1'
services:
  fastdfs:
    build: environment
    restart: always
    container_name: fastdfs
    volumes:
      - ./storage:/fastdfs/storage
    # Network mode: host -- the container shares the host's network stack, so container and host ports are identical
    network_mode: host
- Run the command to build the image and start the container:
docker-compose up -d
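Once the build finishes, the container should show as running (standard Docker CLI; the service name fastdfs comes from the compose file above):

docker ps
docker-compose logs -f fastdfs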
Test upload
- Enter the container interactively:
docker exec -it fastdfs /bin/bash
- Test file upload: run the following in the /usr/bin directory (the first argument is the client binary, the second is the client configuration file, and the third is the file to upload)
/usr/bin/fdfs_upload_file /etc/fdfs/client.conf /usr/local/src/fastdfs-5.11/INSTALL
- The server returns the upload path of the file (a path, not a full address):
group1/M00/00/00/wKliyyfhHHkjsio986777
- Test Nginx access: you can access the file on the server by entering the Nginx access address + the file upload path in the browser
http://192.168.32.255:8888/group1/M00/00/00/wKliyyfhHHkjsio986777
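The same check works from the command line; for example with curl, reusing the address and upload path from above:

curl -I http://192.168.32.255:8888/group1/M00/00/00/wKliyyfhHHkjsio986777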
Configure FastDFS Java client
- Create project: create a service-provider project named myshop-service-upload
Install FastDFS Java client
- Clone the FastDFS Java client project from GitHub:
git clone https://github.com/happyfish100/fastdfs-client-java.git
- Install it into the local Maven repository; the project's jar file is generated under the target directory:
mvn clean install
- Upload the project jar file to Nexus
- Add dependencies to the project:
<!-- FastDFS Begin -->
<dependency>
    <groupId>org.csource</groupId>
    <artifactId>fastdfs-client-java</artifactId>
    <version>1.27-SNAPSHOT</version>
</dependency>
Create FastDFS tool class
- Define the file storage service interface:
package com.oxford.myshop.service.upload.fastdfs;

public interface StorageService {

    /**
     * Upload a file
     *
     * @param data    binary content of the file
     * @param extName file extension
     * @return the id of the generated file on success, or null on failure
     */
    public String upload(byte[] data, String extName);

    /**
     * Delete a file
     *
     * @param fileId id of the file to delete
     * @return 0 on success, an error code on failure
     */
    public int delete(String fileId);
}
- Implement file storage service interface:
package com.oxford.myshop.service.upload.fastdfs;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

import org.csource.common.NameValuePair;
import org.csource.fastdfs.ClientGlobal;
import org.csource.fastdfs.StorageClient1;
import org.csource.fastdfs.StorageServer;
import org.csource.fastdfs.TrackerClient;
import org.csource.fastdfs.TrackerGroup;
import org.csource.fastdfs.TrackerServer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Value;

public class FastDFSStorageService implements StorageService, InitializingBean {

    private static final Logger logger = LoggerFactory.getLogger(FastDFSStorageService.class);

    private TrackerClient trackerClient;

    // Tracker address injected from the configuration, e.g. 192.168.32.255:22122
    @Value("${storage.fastdfs.tracker_server}")
    private String trackerServer;

    @Override
    public String upload(byte[] data, String extName) {
        TrackerServer trackerServer = null;
        StorageServer storageServer = null;
        StorageClient1 storageClient1 = null;
        try {
            NameValuePair[] meta_list = null; // new NameValuePair[0]
            trackerServer = trackerClient.getConnection();
            if (trackerServer == null) {
                logger.error("getConnection return null");
            }
            storageServer = trackerClient.getStoreStorage(trackerServer);
            storageClient1 = new StorageClient1(trackerServer, storageServer);
            String fileId = storageClient1.upload_file1(data, extName, meta_list);
            logger.debug("uploaded file <{}>", fileId);
            return fileId;
        } catch (Exception ex) {
            logger.error("Upload fail", ex);
            return null;
        } finally {
            if (storageServer != null) {
                try {
                    storageServer.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (trackerServer != null) {
                try {
                    trackerServer.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            storageClient1 = null;
        }
    }

    @Override
    public int delete(String fileId) {
        TrackerServer trackerServer = null;
        StorageServer storageServer = null;
        StorageClient1 storageClient1 = null;
        // The group (volume) name is the part of the file id before the first '/'
        int index = fileId.indexOf('/');
        String groupName = fileId.substring(0, index);
        try {
            trackerServer = trackerClient.getConnection();
            if (trackerServer == null) {
                logger.error("getConnection return null");
            }
            storageServer = trackerClient.getStoreStorage(trackerServer, groupName);
            storageClient1 = new StorageClient1(trackerServer, storageServer);
            return storageClient1.delete_file1(fileId);
        } catch (Exception ex) {
            logger.error("Delete fail", ex);
            return 1;
        } finally {
            if (storageServer != null) {
                try {
                    storageServer.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (trackerServer != null) {
                try {
                    trackerServer.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            storageClient1 = null;
        }
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        // The native client reads its settings from a configuration file, so write
        // the injected tracker address to a temporary file and initialize from it
        File confFile = File.createTempFile("fastdfs", ".conf");
        PrintWriter confWriter = new PrintWriter(new FileWriter(confFile));
        confWriter.println("tracker_server=" + trackerServer);
        confWriter.close();
        ClientGlobal.init(confFile.getAbsolutePath());
        confFile.delete();
        TrackerGroup trackerGroup = ClientGlobal.g_tracker_group;
        trackerClient = new TrackerClient(trackerGroup);
        logger.info("Init FastDFS with tracker_server : {}", trackerServer);
    }
}
- File storage service factory:
package com.oxford.myshop.service.upload.fastdfs;

import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.factory.FactoryBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.beans.factory.config.AutowireCapableBeanFactory;

public class StorageFactory implements FactoryBean<StorageService> {

    @Autowired
    private AutowireCapableBeanFactory acbf;

    /**
     * The type of storage service. Only FastDFS is supported
     */
    @Value("${storage.type}")
    private String type;

    private Map<String, Class<? extends StorageService>> classMap;

    public StorageFactory() {
        classMap = new HashMap<>();
        classMap.put("fastdfs", FastDFSStorageService.class);
    }

    @Override
    public StorageService getObject() throws Exception {
        Class<? extends StorageService> clazz = classMap.get(type);
        if (clazz == null) {
            throw new RuntimeException("Unsupported storage type [" + type + "], valid are " + classMap.keySet());
        }
        StorageService bean = clazz.newInstance();
        acbf.autowireBean(bean);
        acbf.initializeBean(bean, bean.getClass().getSimpleName());
        return bean;
    }

    @Override
    public Class<?> getObjectType() {
        return StorageService.class;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}
- Configure the storage service factory class:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * Defines the StorageFactory bean via Java configuration so that it can be dependency-injected
 */
@Configuration
public class FastDFSConfiguration {

    @Bean
    public StorageFactory storageFactory() {
        return new StorageFactory();
    }
}
Create FastDFS controller
- Add the configuration in application.yml:
# SpringBoot Application
spring:
  application:
    name: myshop-service-upload

# FastDFS configuration
fastdfs.base.url: http://192.168.32.255:8888/

storage:
  type: fastdfs
  fastdfs:
    tracker_server: 192.168.32.255:22122
- Controller code:
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@CrossOrigin(origins = "*", maxAge = 3600)
@RestController
public class UploadController {

    @Value("${fastdfs.base.url}")
    private String FASTDFS_BASE_URL;

    @Autowired
    private StorageService storageService;

    @RequestMapping(value = "upload", method = RequestMethod.POST)
    public Map<String, Object> upload(MultipartFile dropFile, MultipartFile[] editorFiles) {
        Map<String, Object> result = new HashMap<>();

        // DropZone upload
        if (dropFile != null) {
            result.put("fileName", writeFile(dropFile));
        }

        // wangEditor upload
        if (editorFiles != null && editorFiles.length > 0) {
            List<String> fileNames = new ArrayList<>();
            for (MultipartFile editorFile : editorFiles) {
                fileNames.add(writeFile(editorFile));
            }
            result.put("error", 0);
            result.put("data", fileNames);
        }

        return result;
    }

    /**
     * Write the file to the storage and return its full access URL
     */
    private String writeFile(MultipartFile multipartFile) {
        // Get the file extension
        String oName = multipartFile.getOriginalFilename();
        String exName = oName.substring(oName.lastIndexOf(".") + 1);

        // Upload and build the full access URL
        String url = null;
        try {
            String uploadUrl = storageService.upload(multipartFile.getBytes(), exName);
            url = FASTDFS_BASE_URL + uploadUrl;
        } catch (IOException e) {
            e.printStackTrace();
        }

        // Returns the full path of the file
        return url;
    }
}
- Create the Spring Boot application class and run it to start the distributed file upload project
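A minimal sketch of that application class (package and class name are illustrative, assuming the standard Spring Boot starters are on the classpath):

package com.oxford.myshop.service.upload;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class UploadApplication {
    public static void main(String[] args) {
        SpringApplication.run(UploadApplication.class, args);
    }
}

With the service running, the upload endpoint can be exercised directly; for example (the port assumes Spring Boot's default 8080, and the parameter name dropFile matches the controller above):

curl -F "dropFile=@/path/to/test.png" http://localhost:8080/upload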