---
subcollection: solution-tutorials
copyright:
lastupdated: "2024-01-11"
lasttested: "2023-09-26"
content-type: tutorial
services: openshift, log-analysis, monitoring, containers, Cloudant
account-plan: paid
completion-time: 3h
use-case: ApplicationModernization, Containers
---
{{site.data.keyword.attribute-definition-list}}
# Deploy microservices with {{site.data.keyword.openshiftshort}}
{: #openshift-microservices}
{: toc-content-type="tutorial"}
{: toc-services="openshift, log-analysis, monitoring, containers, Cloudant"}
{: toc-completion-time="3h"}
This tutorial may incur costs. Use the Cost Estimator to generate a cost estimate based on your projected usage.
{: tip}
This tutorial demonstrates how to deploy applications to {{site.data.keyword.openshiftlong_notm}}. {{site.data.keyword.openshiftshort}} provides a great experience for developers to deploy software applications and for system administrators to scale and observe those applications in production.
{: shortdesc}
## Objectives
{: #openshift-microservices-objectives}
- Deploy a {{site.data.keyword.openshiftshort}} cluster
- Deploy a microservice
- Scale the microservice
- Use an operator to deploy {{site.data.keyword.cloudant_short_notm}} and bind to a microservice
- Observe the cluster using {{site.data.keyword.la_short}}
- Observe the cluster using {{site.data.keyword.mon_full_notm}}
{: caption="Figure 1. Architecture diagram of the tutorial" caption-side="bottom"}
{: style="text-align: center;"}
1. A developer initializes a {{site.data.keyword.redhat_openshift_notm}} application with a repository URL, resulting in a Builder, DeploymentConfig, and Service.
2. The Builder clones the source, creates an image, and pushes it to the {{site.data.keyword.redhat_openshift_notm}} registry for DeploymentConfig provisioning.
3. Users access the frontend application.
4. The {{site.data.keyword.cloudant_short_notm}} database instance is provisioned through an IBM Cloud Operator Service.
5. The backend application is connected to the database with an IBM Cloud Operator Binding.
6. {{site.data.keyword.la_short}} is provisioned and its agent deployed.
7. {{site.data.keyword.mon_short}} is provisioned and its agent deployed.
8. An administrator monitors the app with {{site.data.keyword.la_short}} and {{site.data.keyword.mon_short}}.
There are scripts{: external} that perform some of the steps below; they are described in the README.md{: external}. If you run into trouble and want to start over, execute the destroy.sh script and then go through the scripts that correspond to the steps to recover.
## Before you begin
{: #openshift-microservices-prereqs}
This tutorial requires:
- {{site.data.keyword.cloud_notm}} CLI,
- {{site.data.keyword.containerfull_notm}} plugin (`kubernetes-service`),
- `oc` to interact with {{site.data.keyword.openshiftshort}}.
You will find instructions to download and install these tools for your operating environment in the Getting started with tutorials guide.
To avoid the installation of these tools, you can use the {{site.data.keyword.cloud-shell_short}} from the {{site.data.keyword.cloud_notm}} console. Use `oc version` to ensure the version of the {{site.data.keyword.openshiftshort}} CLI matches your cluster version (`4.12.x`). If they do not match, install the matching version by following these instructions.
{: note}
## Create an {{site.data.keyword.openshiftshort}} cluster
{: #openshift-microservices-create_openshift_cluster}
{: step}
With {{site.data.keyword.openshiftlong_notm}}, you have a fast and secure way to containerize and deploy enterprise workloads in clusters. {{site.data.keyword.redhat_openshift_notm}} clusters build on Kubernetes container orchestration that offers consistency and flexibility for your development lifecycle operations.
In this section, you will provision a {{site.data.keyword.openshiftlong_notm}} cluster in one (1) zone with two (2) worker nodes:
- Create a {{site.data.keyword.openshiftshort}} cluster from the {{site.data.keyword.Bluemix}} catalog.
- Set the Orchestration service to the 4.12.x version of {{site.data.keyword.openshiftshort}}.
- Select your OCP entitlement.
- Under Infrastructure, choose Classic or VPC.
   - For {{site.data.keyword.redhat_openshift_notm}} on VPC infrastructure, you are required to have a VPC and one subnet prior to creating the {{site.data.keyword.openshiftshort}} cluster. Create or inspect a desired VPC keeping in mind the following (see the instructions provided under Creating a standard VPC cluster):
      - One subnet that can be used for this tutorial; take note of the subnet's zone and name.
      - A public gateway is attached to the subnet.
   - Select an existing Cloud Object Storage service or create one if required.
- Under Location:
   - For {{site.data.keyword.redhat_openshift_notm}} on VPC infrastructure:
      - Select a Resource group.
      - Uncheck the inapplicable zones.
      - In the desired zone, verify the desired subnet name; if it is not present, click the edit pencil to select the desired subnet name.
   - For {{site.data.keyword.redhat_openshift_notm}} on Classic infrastructure, follow the Creating a standard classic cluster instructions:
      - Select a Resource group.
      - Select a Geography.
      - Select Single zone as Availability.
      - Choose a Datacenter.
- Under Worker pool:
   - Select 4 vCPUs 16GB Memory as the flavor.
   - Select 2 Worker nodes per data center for this tutorial (classic only: leave Encrypt local disk).
- Under Integrations, enable and configure Logging and Monitoring.
- Under Resource details, set Cluster name to <your-initials>-myopenshiftcluster, replacing <your-initials> with your own initials.
- Click Create to provision a {{site.data.keyword.openshiftshort}} cluster.
Take note of the resource group selected above. This same resource group will be used for all resources in this lab.
{: note}
### Access the cluster using the {{site.data.keyword.cloud-shell_short}}
{: #openshift-microservices-3}
The {{site.data.keyword.redhat_openshift_notm}} Container Platform CLI{: external} exposes commands for managing your applications, as well as lower-level tools to interact with each component of your system. The CLI is available using the `oc` command.
To avoid installing the command line tools, the recommended approach is to use the {{site.data.keyword.cloud-shell_notm}}.
{{site.data.keyword.Bluemix_notm}} Shell is a cloud-based shell workspace that you can access through your browser. It's preconfigured with the full {{site.data.keyword.Bluemix_notm}} CLI and many plug-ins and tools that you can use to manage apps, resources, and infrastructure.
In this step, you'll use the {{site.data.keyword.Bluemix_notm}} Shell and configure `oc` to point to the cluster assigned to you.
1. When the cluster is ready, click the {{site.data.keyword.cloud-shell_short}} button (next to your account) in the upper right corner to launch a Cloud Shell. Make sure you don't close this window/tab.
2. Check the version of the {{site.data.keyword.openshiftshort}} CLI:
   ```sh
   oc version
   ```
   {: pre}

   The version needs to be at minimum 4.12.x; otherwise, install the latest version by following these instructions.
3. Validate that your cluster is shown when listing all clusters:
   ```sh
   ibmcloud oc clusters
   ```
   {: pre}
4. Initialize the `oc` command environment by replacing the placeholder <your-cluster-name>:
   ```sh
   ibmcloud oc cluster config -c <your-cluster-name> --admin
   ```
   {: pre}
5. Verify the `oc` command is working:
   ```sh
   oc get projects
   ```
   {: pre}
## Deploy an application
{: #openshift-microservices-deploy}
{: step}
In this section, you'll deploy a Node.js Express application named `patient-health-frontend`, a user interface for a patient health records system, to demonstrate {{site.data.keyword.redhat_openshift_notm}} features. You can find the sample application GitHub repository here: https://github.com/IBM-Cloud/patient-health-frontend
### Create a project
{: #openshift-microservices-5}
A project is a collection of resources managed by a DevOps team. An administrator creates the project, and developers create applications that can be built and deployed in it.
- Navigate to the {{site.data.keyword.redhat_openshift_notm}} web console by clicking the OpenShift web console button in the selected Cluster.
- On the left navigation pane, under the Administrator perspective, select Home > Projects view to display all the projects.
- Create a new project by clicking Create Project. In the pop-up, set Name to `example-health`, leave Display Name and Description blank, and click Create.
- The new project's Project Details page is displayed. Observe that your context is Administrator > Home > Projects on the left and Projects > Project details > example-health on the top.
### Build and deploy the application
{: #openshift-microservices-6}
- Switch from the Administrator to the Developer perspective. Your context should be Developer > +Add on the left and Project: example-health on the top.
{: caption="Project View" caption-side="bottom"}
- Let's build and deploy the application by selecting Import from Git.
- Enter the repository `https://github.com/IBM-Cloud/patient-health-frontend.git` in the Git Repo URL field.
   - Note the green check Builder image detected and the Node.js 16 (UBI 8) builder image.
   - Note that the builder image automatically detected the language Node.js. If it is not detected, select `Node.js` from the provided list.
- Leave Builder Image Version at the default.
- For Application Name, delete all of the characters and leave it empty (it will default to the Name).
- Name: patient-health-frontend.
- Click the Resource type link and choose DeploymentConfig.
- Leave the defaults for the other selections.
- Click Create at the bottom of the window to build and deploy the application.
### View the application
{: #openshift-microservices-7}
1. You should see the app you just deployed. Notice that you are in the Topology view of the example-health project in the Developer perspective. All applications in the project are displayed.
2. Select the node patient-health-frontend to bring up the details view of the `DeploymentConfig`. Note the DC next to patient-health-frontend. The Pods, Builds, Services and Routes are visible.
   {: caption="App Details" caption-side="bottom"}
   - Pods: your Node.js application containers.
   - Builds: the auto-generated build that created a Docker image from your Node.js source code, deployed it to the {{site.data.keyword.redhat_openshift_notm}} container registry, and kicked off your deployment config.
   - Services: tell {{site.data.keyword.redhat_openshift_notm}} how to access your Pods by grouping them together as a service and defining the port to listen to.
   - Routes: expose your services to the outside world using the LoadBalancer provided by the IBM Cloud network.
3. Click on View Logs next to your completed Build. This shows you the process that {{site.data.keyword.redhat_openshift_notm}} took to install the dependencies for your Node.js application and build/push a Docker image. The last entry should look like this:
   ```
   Successfully pushed image-registry.openshift-image-registry.svc:5000/example-health/patient-health-frontend@sha256:f9385e010144f36353a74d16b6af10a028c12d005ab4fc0b1437137f6bd9e20a
   Push successful
   ```
   {: screen}
4. Click back to the Topology and select your app again.
5. Click on the URL under Routes to visit your application. Enter any string for username and password, for instance `test:test`, because the app is running in demonstration mode.
The Node.js app has been deployed to {{site.data.keyword.redhat_openshift_notm}} Container Platform. To recap:
- The "Example Health" Node.js application was deployed directly from GitHub into your cluster.
- The application was examined in the {{site.data.keyword.openshiftshort}} console.
- A Build Configuration was created - a new commit can be both built and deployed by clicking Start Build in the Builds section of the application details.
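If you prefer the command line, you can also list what the Import from Git flow created. This is an optional sketch; resource names assume the defaults used above:

```sh
# List the build and deployment resources created for the frontend
oc get buildconfigs,builds,deploymentconfigs,services,routes -n example-health
```
{: pre}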
## Explore logging and monitoring in {{site.data.keyword.openshiftshort}}
{: #openshift-microservices-logging-monitoring}
{: step}
In this section, you will explore the out-of-the-box logging and monitoring capabilities that are offered in {{site.data.keyword.openshiftshort}}.
### Simulate load on the application
{: #openshift-microservices-9}
Create a script to simulate load.
1. Make sure you're connected to the project where you deployed your app:
   ```sh
   oc project example-health
   ```
   {: pre}
2. Retrieve the public route to access your application:
   ```sh
   oc get routes
   ```
   {: pre}

   Output looks similar to this; note your value for Host:
   ```
   NAME                      HOST/PORT                                                                                                              PATH   SERVICES                  PORT       TERMINATION   WILDCARD
   patient-health-frontend   patient-health-frontend-example-health.roks07-872b77d77f69503584da5a379a38af9c-0000.eu-de.containers.appdomain.cloud          patient-health-frontend   8080-tcp                 None
   ```
   {: screen}
3. Define a variable with the host:
   ```sh
   HOST=$(oc get routes -o json | jq -r '.items[0].spec.host')
   ```
   {: pre}
4. Verify access to the application. It outputs patient information:
   ```sh
   curl -s -L http://$HOST/info
   ```
   {: pre}

   Output should look like:
   ```
   $ curl -s -L http://$HOST/info
   {"personal":{"name":"Ralph DAlmeida","age":38,"gender":"male","street":"34 Main Street","city":"Toronto","zipcode":"M5H 1T1"},"medications":["Metoprolol","ACE inhibitors","Vitamin D"],"appointments":["2018-01-15 1:00 - Dentist","2018-02-14 4:00 - Internal Medicine","2018-09-30 8:00 - Pediatry"]}
   ```
   {: screen}
5. Run the following script, which will endlessly send requests to the application and generate traffic:
   ```sh
   while sleep 0.2; do curl --max-time 2 -s -L http://$HOST/info >/dev/null; echo -n "."; done
   ```
   {: pre}

   To stop the script, hit CTRL + c on your keyboard.
   {: tip}
### View the application logs
{: #openshift-microservices-10}
Since there is only one pod, viewing the application logs will be straightforward.
1. Ensure that you're in the Topology view of the Developer perspective.
2. Navigate to your Pod by selecting your app.
3. Click on View Logs next to the name of the Pod under Pods to see streaming logs from your running application. If you're still generating traffic, you should see log messages for every request being made.
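The same logs are available from the CLI; a minimal sketch, assuming the DeploymentConfig name used above:

```sh
# Stream (follow) the logs of the frontend deployment config
oc logs -f dc/patient-health-frontend -n example-health
```
{: pre}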
### Open a terminal in the pod
{: #openshift-microservices-11}
One of the great things about Kubernetes is the ability to quickly debug your application pods with SSH terminals. This is great for development, but generally is not recommended in production environments. {{site.data.keyword.redhat_openshift_notm}} makes it even easier by allowing you to launch a terminal directly in the dashboard.
- Switch from the Logs tab to the Terminal tab.
- Run the following Shell commands:
| Command | Description |
|---|---|
| `ls` | List the project files. |
| `ps aux` | List the running processes. |
| `cat /etc/redhat-release` | Show the underlying OS. |
| `curl localhost:8080/info` | Output from the node app.js process. |
{: caption="Examples of Shell commands to run" caption-side="bottom"}
### Explore the project dashboard and events
{: #openshift-microservices-12}
When deploying new apps, making configuration changes, or simply inspecting the state of your cluster, the project-scope dashboard gives a Developer clear insights.
- Access the dashboard in the Developer perspective by clicking Observe on the left side menu.
- You can also dive in a bit deeper by clicking the Events tab. Events are useful for identifying the timeline of events and finding potential error messages. When tracking the state of a new rollout, managing existing assets, or even something simple like exposing a route, the Events view is critical in identifying the timeline of activity. This becomes even more useful when considering that multiple operators may be working against a single cluster.
Almost all actions in {{site.data.keyword.redhat_openshift_notm}} result in an event being fired in this view. As it is updated in real time, it's a great way to track changes to state.
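The same event stream is available from the CLI; an illustrative sketch:

```sh
# Show the project's events, most recent last
oc get events -n example-health --sort-by=.lastTimestamp
```
{: pre}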
## Explore metrics and dashboards
{: #openshift-microservices-metrics}
{: step}
In this section, you will explore the monitoring and metrics dashboards included in {{site.data.keyword.redhat_openshift_notm}}.
### Use the predefined dashboards
{: #openshift-microservices-14}
{{site.data.keyword.redhat_openshift_notm}} comes with predefined dashboards to monitor your projects.
- Get started by switching from the Developer perspective to the Administrator perspective:
- Navigate to Observe > Dashboards in the left-hand bar.
- Select Kubernetes / Compute Resources / Namespace (Pods) from the dropdown and set Namespace to example-health.
- Notice the CPU and Memory usage for your application. In production environments, this is helpful for identifying the average amount of CPU or Memory your application uses, especially as it can fluctuate through the day. Auto-scaling is one way to handle fluctuations and will be demonstrated a little later.
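As a cross-check from the CLI, you can sample the same usage numbers. A sketch; it assumes cluster metrics are available:

```sh
# Current CPU (cores) and memory usage per pod in the project
oc adm top pods -n example-health
```
{: pre}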
### Query metrics
{: #openshift-microservices-15}
{{site.data.keyword.redhat_openshift_notm}} provides a web interface to run queries and examine the metrics visualized on a plot. This functionality provides an extensive overview of the cluster state and enables you to troubleshoot problems.
1. Navigate to Observe > Metrics.
2. Enter the following expression and click Run queries. You should see the value and the graph associated with the query.
   ```
   sum(container_cpu_usage_seconds_total{container="patient-health-frontend"})
   ```
   {: codeblock}
## Autoscale the application
{: #openshift-microservices-scaling}
{: step}
In this section, you will use the metrics observed in the previous section to scale the UI application in response to load.
### Set resource limits
{: #openshift-microservices-17}
Before autoscaling, maximum CPU and memory resource limits must be established.

The dashboards earlier showed you that the load was consuming anywhere between ".002" and ".02" cores. This translates to 2-20 "millicores". To be safe, let's bump the higher end up to 30 millicores. In addition, the data showed that the app consumes about 25-65 MB of RAM. The following steps will set the resource limits in the DeploymentConfig.
1. Make sure the script to generate traffic is running.
2. Switch to the Administrator perspective.
3. Navigate to Workloads > DeploymentConfigs.
4. Select the example-health project.
5. From the Actions menu (the three vertical dots) of patient-health-frontend, choose Edit DeploymentConfig.
   {: caption="Deployments" caption-side="bottom"}
6. In the YAML view, find the section spec > template > spec > containers and add the following resource limits into the empty resources. Replace the `resources: {}`, and ensure the spacing is correct -- YAML uses strict indentation.
   ```yaml
   resources:
     limits:
       cpu: 30m
       memory: 100Mi
     requests:
       cpu: 3m
       memory: 40Mi
   ```
   {: codeblock}

   Here is a snippet after you have made the changes:
   ```yaml
   ports:
     - containerPort: 8080
       protocol: TCP
   resources:
     limits:
       cpu: 30m
       memory: 100Mi
     requests:
       cpu: 3m
       memory: 40Mi
   terminationMessagePath: /dev/termination-log
   ```
   {: codeblock}
7. Save to apply the changes.
8. Verify that the replication controller has been changed by navigating to the Events tab:
   {: caption="Resource Limits" caption-side="bottom"}
### Enable the horizontal pod autoscaler
{: #openshift-microservices-18}
Now that resource limits are configured, the pod autoscaler can be enabled.
By default, the autoscaler allows you to scale based on CPU or Memory. Pods are balanced between the minimum and maximum number of pods that you specify. With the autoscaler, pods are automatically created or deleted to ensure that the average CPU usage of the pods is below the CPU request target as defined. In general, you probably want to start scaling up when you get near 50-90% of the CPU usage of a pod. In our case, 1% can be used with the load being provided.
1. Navigate to the Administrator perspective Workloads > HorizontalPodAutoscalers, then click Create HorizontalPodAutoscaler.
   {: caption="HPA" caption-side="bottom"}

   Replace the contents of the editor with this YAML:
   ```yaml
   apiVersion: autoscaling/v2
   kind: HorizontalPodAutoscaler
   metadata:
     name: patient-hpa
     namespace: example-health
   spec:
     scaleTargetRef:
       apiVersion: apps.openshift.io/v1
       kind: DeploymentConfig
       name: patient-health-frontend
     minReplicas: 1
     maxReplicas: 10
     metrics:
       - type: Resource
         resource:
           name: cpu
           target:
             type: Utilization
             averageUtilization: 1
   ```
   {: codeblock}
2. Click Create.
### Test the autoscaler
{: #openshift-microservices-19}
If you're not running the script to simulate load, the number of pods should stay at 1.
1. Check by opening the Overview page of the deployment config. Click Workloads > DeploymentConfigs, click patient-health-frontend, and make sure the Details panel is selected.
2. Start simulating load (see the previous section on simulating load on the application).
   {: caption="Scaled to 4/10 pods" caption-side="bottom"}

It can take a few minutes for the autoscaler to make adjustments.
{: note}
That's it! You now have a highly available and automatically scaled front-end Node.js application. {{site.data.keyword.redhat_openshift_notm}} is automatically scaling your application pods since the CPU usage of the pods greatly exceeded 1% of the resource limit, 30 millicores.
### Manage the autoscaler with the CLI
{: #openshift-microservices-20}
You can also delete and create resources like autoscalers with the command line.
1. Start by verifying the context is your project:
   ```sh
   oc project example-health
   ```
   {: pre}
2. Get the autoscaler that was created earlier:
   ```sh
   oc get hpa
   ```
   {: pre}
3. Delete the autoscaler made earlier:
   ```sh
   oc delete hpa/patient-hpa
   ```
   {: pre}
4. Create a new autoscaler with a max of 9 pods:
   ```sh
   oc autoscale deploymentconfig/patient-health-frontend --name patient-hpa --min 1 --max 9 --cpu-percent=1
   ```
   {: pre}
5. Revisit the Workloads > DeploymentConfigs Details page for the `patient-health-frontend` deployment and watch it work.
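To watch the same scaling activity from the CLI, a minimal sketch:

```sh
# Watch replica counts change as the autoscaler reacts to load
oc get hpa patient-hpa -n example-health --watch
```
{: pre}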
## Deploy {{site.data.keyword.cloudant_short_notm}} using an operator
{: #openshift-microservices-operator}
{: step}
Currently, the Example Health `patient-health-frontend` app is using dummy in-memory patient data. In this exercise, you'll create a Cloudant service in IBM Cloud and populate it with patient data. Cloudant is a NoSQL database-as-a-service, based on CouchDB.
### Install the IBM Cloud Operator
{: #openshift-microservices-22}
Let's understand exactly how Operators work. In the first exercise, you used a builder to deploy a simple application using a DeploymentConfig, a default resource type that comes with {{site.data.keyword.redhat_openshift_notm}}. A custom resource definition allows you to create resource types that do not come preinstalled with {{site.data.keyword.openshiftshort}}, such as an IBM Cloud service. Operators manage the lifecycle of resources and create Custom Resource Definitions (CRDs), allowing you to manage custom resources the native "Kubernetes" way.
- In the Administrator perspective, click Operators > OperatorHub.
- Find the IBM Cloud Operator, and click Install.
- Keep the default options and click Install.
- After a few seconds, `installed operator - ready for use` should be displayed.
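If you want to check the installation from the shell, one hedged option is to list the ClusterServiceVersions (this assumes the operator was installed with the default All namespaces option):

```sh
# The IBM Cloud Operator CSV should eventually report the phase Succeeded
oc get csv -n openshift-operators
```
{: pre}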
### Create a {{site.data.keyword.cloudant_short_notm}} service instance and binding
{: #openshift-microservices-23}
Click the IBM Cloud Operator to open it, and scroll down to the Prerequisites section.
An API key with the appropriate permissions to create a {{site.data.keyword.cloudant_short_notm}} database is required in this section. The API key will be stored in a Kubernetes Secret resource, which needs to be created using the shell. There are instructions in the Prerequisites section of the installed operator. Steps:
1. Use the same resource group and region that is associated with your cluster:
   ```sh
   ibmcloud target -g <resource_group> -r <region>
   ```
   {: pre}

   To see the resource groups in your account, run the `ibmcloud resource groups` command.
   {: tip}
2. Verify that the resource group and region match your cluster. The following command should return your cluster:
   ```sh
   ibmcloud oc cluster ls
   ```
   {: pre}

   Output looks something like this:
   ```
   $ ibmcloud oc cluster ls
   OK
   Name      ID                     State    Created        Workers   Location   Version                  Resource Group Name   Provider
   osmicro   ck68svdd0vvcfs6ad9ag   normal   18 hours ago   2         Dallas     4.12.26_1562_openshift   default               vpc-gen2
   ```
   {: screen}
3. Use the helper script provided by IBM to create the following resources:
   - An {{site.data.keyword.Bluemix_notm}} API key that represents you and your permissions to use {{site.data.keyword.Bluemix_notm}}.
   - A Kubernetes Secret named `secret-ibm-cloud-operator` in the `default` namespace. This secret has the keys `api-key` and `region`. The operator will use this data to create the {{site.data.keyword.cloudant_short_notm}} service instance.
   - A Kubernetes ConfigMap named `config-ibm-cloud-operator` in the `default` namespace to hold the region and resource group.

   Use the supplied curl command:
   ```sh
   curl -sL https://raw.githubusercontent.com/IBM/cloud-operators/master/hack/configure-operator.sh | bash
   ```
   {: pre}
4. Back in the {{site.data.keyword.redhat_openshift_notm}} web console, click Create service under the Service tab on the Installed Operators page of the IBM Cloud Operator and select YAML view to bring up the YAML editor.
5. Make the suggested substitutions, where the serviceClass is cloudantnosqldb and the plan can be lite or standard (only one lite plan is allowed per account). Replace `<your-initials>`:
   ```yaml
   apiVersion: ibmcloud.ibm.com/v1
   kind: Service
   metadata:
     annotations:
       ibmcloud.ibm.com/self-healing: enabled
     name: <your-initials>-cloudant-service
     namespace: example-health
   spec:
     serviceClass: cloudantnosqldb
     plan: standard
   ```
   {: codeblock}
6. Click Create to create a {{site.data.keyword.cloudant_short_notm}} database instance. Your context should be Operators > Installed Operators > IBM Cloud Operator in the Administrator perspective with Project: example-health in the Service panel.
7. Click on the service just created, <your-initials>-cloudant-service. Over time, the state field will change from provisioning to Online, meaning it is good to go.
8. Create a Binding resource and a Secret resource for the {{site.data.keyword.cloudant_short_notm}} Service resource just created. Navigate back to Operators > Installed Operators > IBM Cloud Operator > Binding tab. Open the Binding tab, click Create Binding, and select YAML view. Create a cloudant-binding associated with the serviceName `<your-initials>-cloudant-service` (this is the name provided for the Service created earlier).
   ```yaml
   apiVersion: ibmcloud.ibm.com/v1
   kind: Binding
   metadata:
     name: cloudant-binding
     namespace: example-health
   spec:
     serviceName: <your-initials>-cloudant-service
   ```
   {: codeblock}
9. Optionally, dig a little deeper to understand the relationship between the {{site.data.keyword.redhat_openshift_notm}} resources (Service, service Binding, binding Secret) and the {{site.data.keyword.cloud_notm}} resources (the service instance and the instance's service credentials). Using the cloud shell:
   ```sh
   ibmcloud resource service-instances --service-name cloudantnosqldb
   ```
   {: pre}
   ```sh
   YOURINITIALS=<your-initials>
   ```
   {: pre}
   ```sh
   ibmcloud resource service-instance $YOURINITIALS-cloudant-service
   ```
   {: pre}
   ```sh
   ibmcloud resource service-keys --instance-name $YOURINITIALS-cloudant-service --output json
   ```
   {: pre}

   Output looks something like this:
   ```
   youyou@cloudshell:~$ ibmcloud resource service-instances --service-name cloudantnosqldb
   Retrieving instances with type service_instance in all resource groups in all locations under ..
   OK
   Name                               Location   State    Type
   <your-initials>-cloudant-service   us-south   active   service_instance

   youyou@cloudshell:~$ ibmcloud resource service-instance <your-initials>-cloudant-service
   Retrieving service instance <your-initials>-cloudant-service in all resource groups under ...
   OK

   Name:                  <your-initials>-cloudant-service
   ID:                    crn:v1:bluemix:public:cloudantnosqldb:us-south:a/0123456789507a53135fe6793c37cc74:SECRET
   GUID:                  SECRET
   Location:              us-south
   Service Name:          cloudantnosqldb
   Service Plan Name:     standard
   Resource Group Name:   Default
   State:                 active
   Type:                  service_instance
   Sub Type:
   Created at:            2020-05-06T22:39:25Z
   Created by:            [email protected]
   Updated at:            2020-05-06T22:40:03Z
   Last Operation:
                          Status       create succeeded
                          Message      Provisioning is complete
                          Updated At   2020-05-06 22:40:03.04469305 +0000 UTC

   youyou@cloudshell:~$ ibmcloud resource service-keys --instance-name $YOURINITIALS-cloudant-service --output json
   [
       {
           "guid": "01234560-902d-4078-9a7f-20446a639aeb",
           "id": "crn:v1:bluemix:public:cloudantnosqldb:us-south:a/0123456789507a53135fe6793c37cc74:SECRET",
           "url": "/v2/resource_keys/01234560-902d-4078-9a7f-20446a639aeb",
           "created_at": "2020-05-06T23:03:43.484872077Z",
           "updated_at": "2020-05-06T23:03:43.484872077Z",
           "deleted_at": null,
           "name": "cloudant-binding",
           "account_id": "0123456789507a53135fe6793c37cc74",
           "resource_group_id": "01234567836d49029966ab5be7fe50b5",
           "source_crn": "crn:v1:bluemix:public:cloudantnosqldb:us-south:a/0123456789507a53135fe6793c37cc74:SECRET",
           "state": "active",
           "credentials": {
               "apikey": "SECRET",
               "host": "SECRET",
               "iam_apikey_description": "Auto-generated for key SECRET",
               "iam_apikey_name": "cloudant-binding",
               "iam_role_crn": "SECRET",
               "iam_serviceid_crn": "SECRET",
               "password": "SECRET",
               "port": 443,
               "url": "https://01234SECRET",
               "username": "01234567-SECRET"
           },
           "iam_compatible": true,
           "resource_instance_url": "/v2/resource_instances/SECRET",
           "crn": "crn:v1:bluemix:public:cloudantnosqldb:us-south:a/0123456789507a53135fe6793c37cc74:SECRET"
       }
   ]
   ```
   {: screen}
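You can also cross-check from the shell what was created in this section. An optional sketch; resource names assume the defaults used above, and the CRD group comes from the `apiVersion` in the YAML:

```sh
# Secret and ConfigMap created by the helper script
oc get secret/secret-ibm-cloud-operator configmap/config-ibm-cloud-operator -n default
# Custom resources managed by the IBM Cloud Operator
oc get services.ibmcloud.ibm.com,bindings.ibmcloud.ibm.com -n example-health
# Credentials secret created from the Binding
oc get secret/cloudant-binding -n example-health
```
{: pre}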
### Deploy the backend application
{: #openshift-microservices-24}
Now you'll create the Node.js app that will populate your Cloudant DB with patient data. It will also serve data to the front-end application deployed earlier.
1. Make sure your context is the project example-health:
   ```sh
   oc project example-health
   ```
   {: pre}
2. The following new-app command will create a build configuration and a deployment configuration. It demonstrates the CLI invocation of adding an application (remember, the GUI console was used for the frontend):
   ```sh
   oc new-app --name=patient-health-backend --as-deployment-config centos/nodejs-10-centos7~https://github.com/IBM-Cloud/patient-health-backend
   ```
   {: pre}
3. Back in the console, in the Topology view of the Developer perspective, open the patient-health-backend app and wait for the build to complete. Notice that the Pod is failing to start. Click on the Pod logs to see:
   ```
   > node app.js
   /opt/app-root/src/app.js:23
   throw("Cannot find Cloudant credentials, set CLOUDANT_URL.")
   ^
   Cannot find Cloudant credentials, set CLOUDANT_URL.
   ```
   {: screen}
4. Let's fix this by setting the environment variable of the DeploymentConfig from the cloudant-binding secret created earlier in the operator binding section. Navigate to the deployment config for the `patient-health-backend` app by clicking the app, and then selecting the name next to DC:
   {: caption="Deployment Config" caption-side="bottom"}
5. Go to the Environment tab, click Add from ConfigMap or Secret, and create a new environment variable named CLOUDANT_URL. Choose the cloudant-binding secret, then choose url for the Key. Click Save.
   {: caption="Environment from Secret" caption-side="bottom"}
6. Go back to the Topology tab, and click the patient-health-backend. Check out the Pods section, which should indicate Running shortly. Click on View logs next to the running pod and notice the databases created.
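From the CLI you can follow the backend logs as well; a minimal sketch:

```sh
# Follow the backend logs and look for the database initialization messages
oc logs -f dc/patient-health-backend -n example-health
```
{: pre}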
### Connect the frontend to the backend
{: #openshift-microservices-25}
The `patient-health-frontend` application has an environment variable for the backend microservice URL.
1. Set the API_URL environment variable to default in the frontend DeploymentConfig. Navigate to the deployment config for the `patient-health-frontend` app by clicking the frontend app in the Topology view, and then selecting the name next to DC.
2. Go to the Environment tab, and in the Single values (env) section add the name `API_URL` with the value `default`. Click Save, then Reload. This results in a connection to `http://patient-health-backend:8080/`, which you can verify by looking at the pod logs. You can verify this is the correct port by scanning for the `Pod Template / Containers / Port` output of this command:
   ```sh
   oc describe dc/patient-health-backend
   ```
   {: pre}
Your application is now backed by the mock patient data in the Cloudant DB! You can now log in using any user ID/password in the Cloudant DB, for example "opall:opall".

- In a real-world application, these passwords should not be stored as plain text. To review the patients (and alternate logins) in the Cloudant DB, navigate to your services in the IBM Cloud Resource List. Click <your-initials>-cloudant-service.
- Launch the Cloudant dashboard by clicking the Launch Dashboard button and then click the `patients` db.
- Click through the different patients you can log in as.
## Connect both {{site.data.keyword.la_short}} and {{site.data.keyword.mon_short}} to the {{site.data.keyword.openshiftshort}} cluster
{: #openshift-microservices-connect-logging-metrics}
{: step}
It can take a few minutes for logging and metric data to flow through the analysis systems, so it is best to connect both at this time for later use.
- Navigate to {{site.data.keyword.openshiftshort}} clusters.
- Click on your cluster and verify the Overview tab on the left is selected.
- Scroll to Integrations and, if Logging is not connected, click the Logging Connect button. Use an existing {{site.data.keyword.la_short}} instance or create a new instance as shown below:
   - Click Create an instance.
   - Select the same location as where your cluster is created.
   - Select 7 day Log Search as your plan.
   - Enter a unique Service name such as `<your-initials>-logging`.
   - Use the resource group associated with your cluster and click Create.
- Back on the cluster Overview tab, follow the same procedure to Connect Monitoring. Use an existing {{site.data.keyword.mon_short}} instance or create a new instance as shown below:
   - Click Create an instance.
   - Select the same location as where your cluster is created.
   - Select Graduated Tier as your plan.
   - Enter a unique Service name such as `<your-initials>-monitoring`.
   - Use the resource group associated with your cluster.
   - Leave IBM platform metrics set to Disable and click Create.
## Analyze your logs with {{site.data.keyword.la_short}}
{: #openshift-microservices-use-logdna}
{: step}
{{site.data.keyword.la_full_notm}} is a cloud native service that you can include as part of your IBM Cloud architecture to add log management capabilities. You can use {{site.data.keyword.la_short}} to manage system and application logs in IBM Cloud. Learn more.
This section of the tutorial goes deep into the IBM logging service. You can stop this section at any time and successfully begin the next section. {: note}
### Verify the {{site.data.keyword.la_short}} agent
{: #openshift-microservices-28}
Verify that the {{site.data.keyword.la_short}} agent pods on each node are in a Running status.
```sh
oc get pods -n ibm-observe
```
{: pre}
The deployment is successful when you see one or more {{site.data.keyword.la_short}} pods:
```
someone@cloudshell:~$ oc get pods -n ibm-observe
NAME                 READY   STATUS    RESTARTS   AGE
logdna-agent-mdgdz   1/1     Running   0          86s
logdna-agent-qlqwc   1/1     Running   0          86s
```
{: screen}
The number of {{site.data.keyword.la_short}} pods equals the number of worker nodes in your cluster.
- All pods must be in a `Running` state.
state - stdout and stderr are automatically collected and forwarded from all containers. Log data includes application logs and worker logs.
- By default, the {{site.data.keyword.la_short}} agent pod that runs on a worker collects logs from all namespaces on that node.
After the agent is configured, logs from this cluster will be visible in the {{site.data.keyword.la_short}} web UI, covered in the next section. If after a period of time you cannot see logs, check the agent logs.
To check the logs that are generated by a {{site.data.keyword.la_short}} agent, run the following command:
```sh
oc logs logdna-agent-<ID> -n ibm-observe
```
{: pre}

Where `<ID>` is the ID for a {{site.data.keyword.la_short}} agent pod. For example:
```sh
oc logs logdna-agent-mdgdz -n ibm-observe
```
{: pre}
### Launch the {{site.data.keyword.la_short}} web UI
{: #openshift-microservices-29}
Launch the web UI within the context of a {{site.data.keyword.la_short}} instance, from the IBM Cloud UI.
- Navigate to {{site.data.keyword.openshiftshort}} clusters
- Click on your cluster and verify the Overview tab on the left is selected
- In the Integrations section next to Logging, click the Launch button.
The {{site.data.keyword.la_short}} UI should open in a new tab.
### Explore views
{: #openshift-microservices-30}
In {{site.data.keyword.la_short}}, you can configure custom views to monitor a subset of data. You can also attach an alert to a view to be notified of the presence or absence of log lines.
In the {{site.data.keyword.la_short}} web UI, notice the log entries are displayed with a predefined format. In the User Preferences section, you can modify how the information in each log line is displayed. You can also filter logs and modify search settings, then bookmark the result as a view. You can attach and detach one or more alerts to a view. You can define a custom format for how your lines are shown in the view. You can expand a log line and see the data parsed.
### Generate load on the application
{: #openshift-microservices-46}
With the application now connected to a database for its data, we will simulate load by generating requests to the database using a patient ID that was added to the database: `ef5335dd-db17-491e-8150-20ce24712b06`.
1. Make sure you're connected to the project where you deployed your app:
   ```sh
   oc project example-health
   ```
   {: pre}
2. Define a variable with the host:
   ```sh
   HOST=$(oc get routes -o json | jq -r '.items[0].spec.host')
   ```
   {: pre}
3. Verify access to the application. It outputs patient information:
   ```sh
   curl -s -L "http://$HOST/info?id=ef5335dd-db17-491e-8150-20ce24712b06"
   ```
   {: pre}

   Output should look like:
   ```
   $ curl -L "http://$HOST/info?id=ef5335dd-db17-491e-8150-20ce24712b06"
   {"personal":{"name":"Opal Larkin","age":22,"street":"805 Bosco Vale","city":"Lincoln","zipcode":"68336"},"medications":["Cefaclor ","Amoxicillin ","Ibuprofen ","Trinessa ","Mirena ","Naproxen sodium "],"appointments":["2009-01-29 10:46 - GENERAL PRACTICE","1999-07-01 10:46 - GENERAL PRACTICE","2001-12-27 10:46 - GENERAL PRACTICE","2005-01-06 10:46 - GENERAL PRACTICE","2004-01-01 10:46 - GENERAL PRACTICE","1999-09-30 10:46 - GENERAL PRACTICE","2018-10-29 10:46 - GENERAL PRACTICE","2012-02-16 10:46 - GENERAL PRACTICE","2015-11-23 10:46 - GENERAL PRACTICE","2000-03-30 10:46 - GENERAL PRACTICE","1999-04-29 10:46 - GENERAL PRACTICE","2015-01-07 10:46 - GENERAL PRACTICE","1999-02-25 10:46 - GENERAL PRACTICE","2010-07-23 10:46 - GENERAL PRACTICE","2008-01-24 10:46 - GENERAL PRACTICE","2004-05-24 10:46 - GENERAL PRACTICE","1999-01-21 10:46 - GENERAL PRACTICE","2015-03-05 10:46 - GENERAL PRACTICE","2002-06-27 10:46 - GENERAL PRACTICE","2000-06-29 10:46 - GENERAL PRACTICE","2005-01-06 10:46 - GENERAL PRACTICE","2015-01-10 10:46 - GENERAL PRACTICE","2000-12-28 10:46 - GENERAL PRACTICE","2016-06-02 10:46 - GENERAL PRACTICE","2016-03-10 10:46 - GENERAL PRACTICE","2013-09-08 10:46 - GENERAL PRACTICE","2011-02-10 10:46 - GENERAL PRACTICE","2013-02-21 10:46 - GENERAL PRACTICE","2003-04-30 10:46 - GENERAL PRACTICE","2004-07-23 10:46 - GENERAL PRACTICE","2006-01-12 10:46 - GENERAL PRACTICE","2002-12-26 10:46 - GENERAL PRACTICE","1999-12-30 10:46 - GENERAL PRACTICE","2017-01-04 10:46 - GENERAL PRACTICE","2018-03-22 10:46 - GENERAL PRACTICE","2010-02-04 10:46 - GENERAL PRACTICE","2009-11-29 10:46 - GENERAL PRACTICE","2013-02-26 10:46 - GENERAL PRACTICE","2003-02-04 10:46 - GENERAL PRACTICE","2003-03-01 10:46 - GENERAL PRACTICE","2000-04-15 10:46 - GENERAL PRACTICE","2001-06-28 10:46 - GENERAL PRACTICE","2007-01-18 10:46 - GENERAL PRACTICE","2018-08-30 10:46 - GENERAL PRACTICE","2017-03-16 10:46 - GENERAL PRACTICE","2014-02-27 10:46 - GENERAL PRACTICE","2000-09-27 10:46 - GENERAL PRACTICE"]}
   ```
   {: screen}
4. Run the following script, which will endlessly send requests to the application and generate traffic:
   ```sh
   while sleep 0.2; do curl --max-time 2 -s -L "http://$HOST/info?id=ef5335dd-db17-491e-8150-20ce24712b06" >/dev/null; echo -n "."; done
   ```
   {: pre}

   To stop the script, hit CTRL + c on your keyboard.
   {: tip}
### View events
{: #openshift-microservices-31}
- In the {{site.data.keyword.la_short}} web UI, click the Views icon.
- Select Everything to see all the events. It can take a few minutes for the load on the application to be visible.
{: caption="View Logs" caption-side="bottom"}
### Customize the log format
{: #openshift-microservices-32}
In the User Preferences, you can modify the order of the data fields that are displayed per line.
- Click your profile icon in the bottom left and select User Preferences.
- Select Log Format.
- Modify the Line Format section to match your requirements. Drag boxes around. Click Done.
For example, add %app after the timestamp.
{: caption="Log Format" caption-side="bottom"}
### Create a custom view
{: #openshift-microservices-33}
You can select the events that are displayed through a view by applying a search query in the search bar, selecting values in the search area, or a combination of both. You can save that view for reuse later.
1. In the {{site.data.keyword.la_short}} web UI, filter out the logs for the sample app that you have deployed in the cluster in previous steps. Click in the search bar at the bottom and enter the following query: `app:patient-health-frontend`.
2. Filter out log lines to display only lines that are tagged as debug lines. Add the following query in the search bar: `level:debug` and hit enter. The view will show lines that meet the filter and search criteria.
3. Click Unsaved view. Select Save as new view.
   {: caption="Save View" caption-side="bottom"}
   - Enter the name of the view. Use the following format: `<Enter your user name> patientUI`. For example, `yourname patientui`.
   - Enter a category. Use the following format: `<Enter your user name>`. For example, `yourname`. Then click Add this as a new view category.
   - Click Save view.
4. A new view appears on the left navigation panel.
### Generate application logs
{: #openshift-microservices-34}
Generate logs by opening the application and logging in with different names (see the previous section on simulating load on the application for instructions).
### View a log line in context
{: #openshift-microservices-35}
At any time, you can view each log line in context.
Complete the following steps:
1. Click the Views icon.
2. Select Everything or a view.
3. Identify a line in the log that you want to explore.
4. Expand the log line to display information about line identifiers, tags, and labels.
5. Click View in Context to see the log line in the context of other log lines from that host, app, or both. This is a very useful feature when you want to troubleshoot a problem.
   {: caption="View in context" caption-side="bottom"}
6. A new pop-up window opens. In the window, choose one of the following options:
   - By Everything to see the log line in the context of all log records (everything) that are available in the {{site.data.keyword.la_short}} instance
   - By Source to see the log line in the context of the log lines for the same source
   - By App to see the log line in the context of the log lines of the app
   - By Source and App to see the log line in the combined context of the app and source

   Then click Continue in New Viewer to get the view in a different page. You might need to scroll down to get this option.

   Tip: Open a view per type of context to troubleshoot problems.
7. Expand the selected log and click Copy to clipboard to copy the message field to the clipboard. Notice that when you copy the log record you get less information than what is displayed in the view. To get a line with all the fields, you must export data from a custom view.
8. When you are finished, close the line.
### Search by timeframe
{: #openshift-microservices-36}
In a view, you can search the events that are displayed for a specific timeframe.
You can apply a timestamp by specifying an absolute time, a relative time, or a time range.
Complete the following steps to jump to a specific time:
- Launch the {{site.data.keyword.la_short}} web UI.
- Click the Views icon.
- Select your custom view.
- Enter a time query. Choose any of the following options:
   - Enter a relative time such as `1 hour ago`, then type ENTER.
     {: caption="1 hour ago" caption-side="bottom"}
   - Enter an absolute time to jump to a point in time in your events, such as `January 27 10:00am`.
   - You can also enter a time range such as `yesterday 10am to yesterday 11am`, `last fri 4:30pm to 11/12 1 AM`, `last wed 4:30pm to 23/05 1 AM`, or `May 20 10am to May 22 10am`. Make sure to include `to` to separate the initial timestamp from the end timestamp.
You might get the error message `Your request is taking longer than expected`. Try refreshing your browser after a few minutes of delay to allow logs to flow into the service. Also, ensure that the timeframe selected is likely to have events available for display. It may be necessary to change the time query and retry.
### Create a dashboard
{: #openshift-microservices-37}
You can create a dashboard to monitor your app graphically through interactive graphs. For example, you can use graphs to analyze patterns and trends over a period of time.
Index fields are created on a regular schedule. Currently it is done at 00:01 UTC (midnight). The following steps that require fields will not be possible until this process completes. {: note}
Complete the following steps to create a dashboard to monitor logs from the lab's sample app:
1. In the {{site.data.keyword.la_short}} web UI, click the Boards icon.
2. Select NEW BOARD to create a new dashboard.
3. Select the Field All lines under Graph a field.
4. Select the Filter app:patient-health-frontend.
5. Click Add Graph.
6. Note the view that displays the count of log lines for the frontend app. Click the graph on a peak of data at the time that you want to see logs, and then click Show logs.

   A new page opens with the relevant log entries. Click the browser's back button when done with the log lines to return to the graph.
7. Add breakdowns to analyze the data by applying additional filtering criteria.
   {: caption="Show subplots" caption-side="bottom"}
   - Click View breakdowns.
   - Select Histogram and level. Click Add Breakdown.
8. Name the dashboard by hitting the pencil Edit Board button next to the New Board name:
   - Enter `patientui` as the name of the dashboard.
   - Enter a category, for example, `yourname`, then click Add this as a new board category.
   - Click Save.

A new category appears on the left navigation panel.
### Create a screen
{: #openshift-microservices-38}
You can create a screen to monitor your app graphically through metrics (counters), operational KPIs (gauges), tables, and time-shifted graphs (graphs that you can use to analyze patterns and trends for comparison analysis).
Complete the following steps to create a dashboard to monitor logs from the lab's sample app:
1. In the {{site.data.keyword.la_short}} web UI, click the Screens icon.
2. Select NEW SCREEN.
3. Add a count of the patient health frontend log lines for the last two weeks:
   - Click Add Widget at the top and select Count.
   - Click the newly created widget to reveal the configuration fields for the widget on the right.
   - In the Data section:
      - Select the field app, and set the value to patient-health-frontend.
      - Keep Operation at the default Counts.
      - In the Duration drop down, select `Last 2 Weeks`.
   - In the Appearance section:
      - In the Label text box, type `patient-health-frontend`.

   The widget should look similar to the following one:
4. Add a gauge that records the debug lines for the patient-health-frontend for the last day:
   - Click Add Widget at the top and select Gauge.
   - Click the newly created widget to reveal the configuration fields for the widget on the right.
   - In the Data section:
      - Select the field app, and set the value to patient-health-frontend.
      - Click Advanced Filtering and in the text box type `level:debug`.
      - Keep the Duration set to the default `Last 1 day`.
   - In the Appearance section:
      - In the Label text box, type `patient-health-frontend`.
5. Add a table of logs by namespace:
   - Click Add Widget at the top and select Table.
   - Click the newly created widget to reveal the configuration fields for the widget on the right.
   - In the Data section:
      - Select the field Group By and choose `namespace` from the drop down.
   - In the Data Format section:
      - Select the field Number of Rows and choose `10` from the drop down.
6. Drag the table to improve the presentation. Verify the screen resembles the following:
   {: caption="Another widget" caption-side="bottom"}
7. Save the screen by selecting Save Screen.

If you do not save the screen, you lose all your widgets.
{: important}
Find more about {{site.data.keyword.la_short}} in the IBM Cloud documentation.
{: note}
## Configure {{site.data.keyword.mon_short}}
{: #openshift-microservices-configure-sysdig}
{: step}
{{site.data.keyword.cloud_notm}} provides a fully managed monitoring service. Let's create a monitoring instance and then integrate it with your {{site.data.keyword.openshiftshort}} cluster using a script that creates a project and privileged service account for the {{site.data.keyword.mon_short}} agent.
### Verify the {{site.data.keyword.mon_short}} agent
{: #openshift-microservices-40}
Verify that the `sysdig-agent` pods on each node have a Running status.
Run the following command:
```sh
oc get pods -n ibm-observe
```
{: pre}
Example output:
```
NAME                 READY   STATUS    RESTARTS   AGE
sysdig-agent-qrbcq   1/1     Running   0          1m
sysdig-agent-rhrgz   1/1     Running   0          1m
```
{: screen}
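Because the agent runs as a daemonset with one pod per worker node, another quick check (illustrative):

```sh
# DESIRED and READY counts should match the number of worker nodes
oc get daemonset -n ibm-observe
```
{: pre}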
## Monitor your cluster with {{site.data.keyword.mon_short}}
{: #openshift-microservices-use-sysdig}
{: step}
{{site.data.keyword.mon_full_notm}} is a cloud-native, container-intelligence management system that you can include as part of your IBM Cloud architecture. Use it to gain operational visibility into the performance and health of your applications, services, and platforms. It offers administrators, DevOps teams, and developers full-stack telemetry with advanced features to monitor and troubleshoot performance issues, define alerts, and design custom dashboards. Learn more.
In the next steps, you will learn how to use dashboards and metrics to monitor the health of your application.
### Learn about views and dashboards
{: #openshift-microservices-42}
Use views and dashboards to monitor your infrastructure, applications, and services. You can use pre-defined dashboards. You can also create custom dashboards through the Web UI or programmatically. You can backup and restore dashboards by using Python scripts.
The following table lists the different types of pre-defined dashboards:
| Type | Description |
|---|---|
| Workload Status and Performance | Dashboards that you can use to monitor your pods. |
| Node Status and Performance | Dashboards that you can use to monitor resource utilization and system activity on your hosts and in your containers. |
| Network | Dashboards that you can use to monitor your network connections and activity. |
{: caption="Subset of existing pre-defined dashboards" caption-side="bottom"}
### Launch the {{site.data.keyword.mon_short}} web UI
{: #openshift-microservices-43}
- Navigate to {{site.data.keyword.openshiftshort}} clusters and notice the {{site.data.keyword.redhat_openshift_notm}} clusters
- Click on your cluster and verify the Overview tab on the left is selected
- In the Integrations section next to Monitoring, click the Launch button.
Initial data may NOT be available on newly created Monitoring instances:
- After a few minutes, raw data will be displayed.
- After about an hour, indexing will provide the detail required to proceed with this tutorial.
- Under the Dashboards section, select Kubernetes > Pod Status & Performance to view raw metrics for all workloads running on the cluster.
- Set the namespace filter to example-health to focus on the pods of your application.
- Under Dashboards on the left pane, expand Applications in Dashboard Templates. Then select HTTP to get a global view of the cluster HTTP load.
### Explore the dashboard templates
{: #openshift-microservices-44}
1. Select Dashboards and check out these two dashboard templates:
   - Containers > Container Resource Usage
   - Host Infrastructure > Host Resource Usage
2. Select the Kubernetes > Pod Rightsizing & Workload Capacity Optimization template. This dashboard helps you optimize your infrastructure and better control cluster spend by ensuring pods are sized correctly. Understand if you can free up resources by reducing memory and/or CPU requests.
### Create a custom dashboard
{: #openshift-microservices-45}
1. Select Dashboards and the template Kubernetes > Workload Status & Performance.

   A detailed dashboard shows all the pods in the cluster.
2. Create a customized dashboard and then scope it to a specific namespace:
   - In the upper right, click Copy to my Dashboards and name it `Workload Status & Performance app example-health`.
   - Click Create and Open to create your own dashboard.
   - Edit the dashboard scope.
   - Set the filter to `kube_namespace_name` `is` `example-health`.
   - Click Save.

   The dashboard now shows information focused on the example-health namespace. Scroll down to the TimeCharts for HTTP Requests, Latency, Error, ... to understand the performance of the application.
   {: caption="Custom Network Traffic and Bandwidth" caption-side="bottom"}
Find more about {{site.data.keyword.mon_full_notm}} in the IBM Cloud documentation.
## Remove resources
{: #openshift-microservices-cleanup}
{: step}
In the Resource List, locate and delete the resources you wish to remove:
- Delete the {{site.data.keyword.openshiftshort}} cluster.

   To delete the {{site.data.keyword.redhat_openshift_notm}} resources without deleting the cluster, run the following commands:
   ```sh
   oc delete all --all --namespace example-health
   oc delete project/example-health
   ```
   {: pre}
- Delete the {{site.data.keyword.la_short}} instance.
- Delete the {{site.data.keyword.mon_full_notm}} instance.
- Delete the {{site.data.keyword.cloudant_short_notm}} service.

Depending on the resource, it might not be deleted immediately, but retained (by default for 7 days). You can reclaim the resource by deleting it permanently or restore it within the retention period. See this document on how to use resource reclamation.
{: tip}
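For example, to review and permanently remove retained resources, a sketch using the resource reclamation commands:

```sh
# List resources pending reclamation
ibmcloud resource reclamations
# Permanently delete a retained resource by its reclamation ID
ibmcloud resource reclamation-delete <reclamation-id>
```
{: pre}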