Introduction
This scenario is designed to help you migrate from HAProxy to Envoy Proxy. It will help you apply your existing experience and understanding of HAProxy to Envoy.
You will learn how to:
- Configure the Envoy Proxy server and its settings.
- Configure Envoy to proxy traffic to external services.
- Configure access and error logs.
By the end of the scenario, you will understand the core Envoy Proxy functionality and how to migrate existing HAProxy configurations to the platform.
HAProxy example
You can view the sample HAProxy configuration by opening haproxy.cfg in the editor.
global
    log /dev/log local0
    log /dev/log local1 notice
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull

frontend localnodes
    bind *:8080
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    server web01 172.18.0.3:80 check
    server web02 172.18.0.4:80 check
An HAProxy configuration usually has four key elements:
- Global: configures the HAProxy process, the logging structure, and process-wide security and performance adjustments that affect HAProxy at a low level. These settings apply to all instances. See the global section in haproxy.cfg.
- Defaults: configuration defaults that apply to all frontend and backend sections defined after it. See the defaults section in haproxy.cfg.
- Frontend: configures how HAProxy accepts incoming requests. This section defines the IP addresses and ports that clients can connect to. See the frontend section in haproxy.cfg.
- Backend: configures where the traffic is forwarded. It defines the set of servers that requests will be load balanced and distributed across. See the backend section in haproxy.cfg.
Not all of this configuration maps directly to Envoy Proxy, and some aspects are not required because of differences in architecture and design decisions. Envoy Proxy has four key concepts that cover the core functionality provided by HAProxy:
- Listeners: define how Envoy Proxy accepts incoming requests. Currently, Envoy Proxy only supports TCP-based listeners. Once a connection is established, it is passed to a set of filters for processing.
- Filters: form a pipeline architecture that can handle inbound and outbound data. Filters enable functionality such as Gzip, which compresses data before sending it to the client.
- Routes: forward traffic to the desired destination, defined as clusters.
- Clusters: define the destination endpoints and configuration settings for traffic.
We will use these four components to create an Envoy Proxy configuration that matches the HAProxy configuration defined above. Envoy's focus has always been on APIs and dynamic configuration, but in this case the configuration will use static, hard-coded resources that mirror the HAProxy setup.
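At the top level, a static Envoy configuration groups these components under a static_resources key. As a rough sketch of the file layout (the filters and routes live inside each listener's filter chain, as shown in the following steps):

static_resources:
  listeners: []   # how Envoy accepts incoming requests (the frontend)
  clusters: []    # the upstream destinations traffic is forwarded to (the backend)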
Frontend Configuration
In the frontend configuration block, HAProxy is configured to listen on port 8080, with all traffic handled by the nodes backend.
frontend localnodes
    bind *:8080
    mode http
    default_backend nodes
In Envoy Proxy, this concept is handled by Listeners.
Envoy Listeners
In Envoy, bindings are defined as Listeners. Each listener defines a port, together with the filters, routes, and clusters that respond to traffic on that port. In this case, a listener bound to port 8080 is defined.
Envoy Proxy is configured using YAML notation. If you are not familiar with this notation, refer to the YAML documentation.
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
In the next step, you will find the routing and cluster configuration that will handle the traffic.
Backend Configuration
The backend configuration defines how the load balancer handles incoming traffic. In this configuration example, two nodes are defined and traffic is distributed between them in a round-robin fashion.
backend nodes
    mode http
    balance roundrobin
    option forwardfor
    server web01 172.18.0.3:80 check
    server web02 172.18.0.4:80 check
In Envoy, this function is handled by creating filters and clusters.
Envoy Filters and Clusters
For static configurations, filters define how incoming requests are handled. In this case, we define a filter that matches all traffic. When a request matching the defined domain and route arrives, the traffic is forwarded to the cluster. This is the Envoy equivalent of the HAProxy backend (upstream) configuration:
filter_chains:
  - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
              - name: backend
                domains:
                  - "*"
                routes:
                  - match:
                      prefix: "/"
                    route:
                      cluster: nodes
          http_filters:
            - name: envoy.router
The name envoy.http_connection_manager refers to a built-in filter in Envoy Proxy. Other filters include Redis, Mongo, and TCP. You can find the complete list in the Envoy documentation.
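For example, a TCP proxy filter can forward raw TCP connections directly to a cluster. The sketch below uses the same config syntax as the rest of this scenario; the cluster name redis_cluster is a placeholder and is not defined elsewhere in this configuration:

filter_chains:
  - filters:
      - name: envoy.tcp_proxy      # built-in TCP proxy filter
        config:
          stat_prefix: ingress_tcp # prefix used for emitted statistics
          cluster: redis_cluster   # placeholder cluster that receives the TCP traffic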
The filter controls how Envoy matches incoming HTTP requests and which cluster should process them. The cluster controls which servers handle the traffic and the load balancing configuration, such as round robin.
clusters:
  - name: nodes
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [
      { socket_address: { address: 172.18.0.3, port_value: 80 }},
      { socket_address: { address: 172.18.0.4, port_value: 80 }}
    ]
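Putting the listener, filter chain, and cluster together gives the complete static configuration. The following is simply the snippets above assembled into a single file (commonly saved as envoy.yaml, though the file name is up to you):

static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: backend
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: nodes
                http_filters:
                  - name: envoy.router
  clusters:
    - name: nodes
      connect_timeout: 0.25s
      type: STRICT_DNS
      dns_lookup_family: V4_ONLY
      lb_policy: ROUND_ROBIN
      hosts: [
        { socket_address: { address: 172.18.0.3, port_value: 80 }},
        { socket_address: { address: 172.18.0.4, port_value: 80 }}
      ]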