As we saw in the previous lab, the configuration that exposed our Frontend and Backend applications via the service mesh still left two issues:
- The Public Cloud services are not able to talk to the Private Cloud services and vice versa.
- The Secure applications are not actually secure. The Insecure application can communicate freely with Secure services.
We will configure global mesh policy to correct these issues.
Our application workspace for the secure applications does not contain any policy for service-to-service authentication or authorization (AuthN/Z). Within TSB, establishing this default policy is a trivial task. First, ensure you have set an environment variable named `PREFIX` in the shell you are using. You will also want to ensure that your `tctl` CLI is targeted at, and logged into, the TSB management plane.
- Create a default `WorkspaceSetting` that will apply to the Secure Application workspace using `tctl apply`:
envsubst < 03-Security/01-workspace-setting.yaml | tctl apply -f -
Inspect the file `03-Security/01-workspace-setting.yaml`. As you may expect, the metadata section contains the names and IDs that map the `WorkspaceSetting` to the Secure application's workspace. However, the important piece is the `defaultSecuritySetting` block:
defaultSecuritySetting:
  authenticationSettings:
    trafficMode: REQUIRED
  authorization:
    mode: WORKSPACE
This will configure all services deployed to any cluster that is part of the Secure workspace to require strict mTLS authentication. Additionally, and more importantly, it will also configure mTLS authorization that only allows services within the Secure workspace to connect.
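For orientation, the complete resource might look roughly like the sketch below. This is an illustrative shape only; the organization, tenant, workspace, and setting names here are placeholders, and the real values are in the lab file:

```yaml
apiVersion: api.tsb.tetrate.io/v2
kind: WorkspaceSetting
metadata:
  organization: my-org        # placeholder -- use the values from the lab file
  tenant: my-tenant           # placeholder
  workspace: secure-workspace # placeholder
  name: default-setting       # placeholder
spec:
  defaultSecuritySetting:
    authenticationSettings:
      trafficMode: REQUIRED   # strict mTLS for all services in the workspace
    authorization:
      mode: WORKSPACE         # only workloads in this workspace may connect
```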
- Let's now verify that our changes had the desired effect. Once again, open your browser and navigate to https://insecure.public.$PREFIX.cloud.zwickey.net. Make sure you replace $PREFIX in the URL with your prefix value. Enter the internal address for the secure backend running in the Public Cloud West cluster -- `secure.$PREFIX.public.mesh`. This will cause the frontend microservice to attempt to call the secure backend. However, you should receive an `RBAC: Access Denied` message, which is an HTTP 403 response. We've successfully added policy to block insecure -> secure app communication.
As we demonstrated previously, the Frontend application running in a public cloud cluster is not able to talk to any application services running in a private cloud cluster. We will create an Allow-List for this communication in the DMZ in this portion of the lab. But first, let's verify one more time that the Frontend application running in the Public Cloud West cluster, which is part of the Secure workspace, is unable to discover and communicate with the backend running in the Private Cloud West cluster.
Open your browser and navigate to https://secure.public.$PREFIX.cloud.zwickey.net. Make sure you replace $PREFIX in the URL with your prefix value. The application should display in your browser. Enter the internal address for the backend running in the Private Cloud West cluster -- `west.secure.$PREFIX.private.mesh`. This should result in an error:
Create the policy and configuration that will apply to the Secure Application workspace using `tctl apply`:
envsubst < 03-Security/02-dmz-policy-config.yaml | tctl apply -f -
Inspect the file `03-Security/02-dmz-policy-config.yaml`. As you may expect, the metadata section contains the names and IDs that map this configuration to a specific `Tenant`, `Workspace`, and `GatewayGroup`. The key is the definition of a set of `internalServers` that represent the endpoints that will be allowed to traverse the DMZ gateway:
internalServers:
  - hostname: east.secure.$PREFIX.private.mesh
    name: east-private
  - hostname: west.secure.$PREFIX.private.mesh
    name: west-private
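In a TSB gateway group, an `internalServers` list of this shape typically sits inside a `Tier1Gateway` resource bound to the DMZ gateway workloads. The following is only a rough sketch of that shape; the metadata names and the workload selector are placeholders, and the real values are in the lab file:

```yaml
apiVersion: gateway.tsb.tetrate.io/v2
kind: Tier1Gateway
metadata:
  organization: my-org        # placeholder
  tenant: my-tenant           # placeholder
  workspace: secure-workspace # placeholder
  group: dmz-gateway-group    # placeholder
  name: dmz-t1-gateway        # placeholder
spec:
  workloadSelector:
    namespace: tier1          # placeholder namespace running the gateway pods
    labels:
      app: tier1-gateway      # placeholder label
  internalServers:            # mesh-internal hostnames allowed through the DMZ
    - hostname: east.secure.$PREFIX.private.mesh
      name: east-private
    - hostname: west.secure.$PREFIX.private.mesh
      name: west-private
```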
It will take a few seconds for the config to be distributed to all the clusters. Click the back button, which will ensure you don't get a cached page, and try the request again. You should now receive a response from the Backend application running in the Private West cluster. If you're not getting the expected behavior, please check that the last step of Lab 00, deploying the Tier 1 Gateway, was completed:
envsubst < 00-App-Deployment/dmz/cluster-t1.yaml | kubectl --context dmz apply -f -
If you really want to dig into the details of how this worked, we can look at the `ServiceEntry` resources that were distributed across the global mesh.
- First we'll look at the configuration that was distributed to the `public-west` cluster. Look at the `ServiceEntry` that was distributed and facilitates the Public Cloud -> DMZ communication, using the `kubectl describe` command:
kubectl --context public-west describe se -n xcp-multicluster gateway-west-secure-$PREFIX-private-mesh
Spec:
  Endpoints:
    Address: 35.245.211.33
    Labels:
      Network: dmz
      security.istio.io/tlsMode: istio
      xcp.tetrate.io/cluster: dmz
      xcp.tetrate.io/hops: 1
      xcp.tetrate.io/serviceType: remoteGateway
    Locality: dmz
    Ports:
      Http: 15443
  Export To:
    *
  Hosts:
    west.secure.abz.private.mesh
  Location: MESH_INTERNAL
  Ports:
    Name: http
    Number: 80
    Protocol: HTTP
  Resolution: STATIC
You'll note that the mesh configuration delivered to the Public Cloud West cluster identifies the route for `west.secure.$PREFIX.private.mesh` (`west.secure.abz.private.mesh` in my example) to connect via a `remoteGateway` located in a network that we've tagged as `dmz` in our topology, and that traffic will transit via the mesh-internal port `15443` using mTLS.
- Now let's look at the same `ServiceEntry` using the exact same `kubectl describe` command, but this time at the configuration in the `dmz` cluster:
kubectl --context dmz describe se -n xcp-multicluster gateway-west-secure-$PREFIX-private-mesh
Spec:
  Endpoints:
    Address: a87ee1f1b4dd040fabc438a08e421f3a-639535762.us-west-1.elb.amazonaws.com
    Labels:
      Network: private
      security.istio.io/tlsMode: istio
      xcp.tetrate.io/cluster: private-west
      xcp.tetrate.io/hops: 0
      xcp.tetrate.io/serviceType: remoteGateway
    Locality: us-west-1
    Ports:
      Http: 15443
  Export To:
    *
  Hosts:
    west.secure.abz.private.mesh
  Location: MESH_INTERNAL
  Ports:
    Name: http
    Number: 80
    Protocol: HTTP
  Resolution: DNS
The mesh configuration delivered to the DMZ cluster is very similar, except it knows how to forward traffic for the route `west.secure.$PREFIX.private.mesh` to the remote gateway on the private network, still using mTLS over port 15443.
- To explain the last piece of the puzzle, let's take a look at the `NetworkReachability` configuration that has already been defined in the environment, which forces all cross-cloud traffic to transit through the DMZ. Execute the following `tctl` command:
tctl get OrganizationSetting tetrate-settings -o yaml
Focus on the `networkReachability` section, which defines the allowed communication paths for the global mesh topology:
networkSettings:
  networkReachability:
    cloud: dmz
    dmz: cloud,private
    private: dmz
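Read as an adjacency list, this map says a `cloud` network may reach only the `dmz`, the `dmz` may reach both `cloud` and `private`, and a `private` network may reach only the `dmz` -- so there is no direct `cloud` -> `private` path, and cross-cloud traffic must hop through the DMZ. In context, the fragment sits inside the `OrganizationSetting` resource queried above; a rough sketch of the surrounding shape (the organization name is a placeholder):

```yaml
apiVersion: api.tsb.tetrate.io/v2
kind: OrganizationSetting
metadata:
  organization: my-org    # placeholder
  name: tetrate-settings
spec:
  networkSettings:
    networkReachability:
      cloud: dmz          # cloud networks may reach only the DMZ
      dmz: cloud,private  # the DMZ bridges cloud and private networks
      private: dmz        # private networks may reach only the DMZ
```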
We now have some sensible security policies applied to our Secure Application(s). Next, let's introduce some VM-based workloads into the global mesh.