
Challenge Lab

Qwiklabs [GSP340]



GSP340: Build and Optimize Data Warehouses with BigQuery

Task 1: Create a table partitioned by date

code (A):
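
The original snippet isn't reproduced here. As a hedged sketch, a date-partitioned copy of the Oxford policy tracker data can be created with CREATE TABLE ... PARTITION BY, roughly as below. The dataset/table name (covid.oxford_policy_tracker), the 1445-day partition expiry and the excluded alpha_3_code values are assumptions; substitute the exact values from your lab instructions.

CREATE OR REPLACE TABLE covid.oxford_policy_tracker
PARTITION BY date
OPTIONS (partition_expiration_days = 1445) AS
SELECT *
FROM `bigquery-public-data.covid19_govt_response.oxford_policy_tracker`
WHERE alpha_3_code NOT IN ('GBR', 'BRA', 'CAN', 'USA');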


Task 2: Add new columns to your table

code (B)
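
code (B) isn't included, but the new columns can be added with ALTER TABLE along these lines. The column names and types (including the mobility record fields) are assumptions taken from the usual lab spec, so verify them against your instructions:

ALTER TABLE covid.oxford_policy_tracker
  ADD COLUMN population INT64,
  ADD COLUMN country_area FLOAT64,
  ADD COLUMN mobility STRUCT<
    avg_retail FLOAT64,
    avg_grocery FLOAT64,
    avg_parks FLOAT64,
    avg_transit FLOAT64,
    avg_workplace FLOAT64,
    avg_residential FLOAT64
  >;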


Task 3: Add country population data to the population column

code (C):
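
As a sketch, the population column can be filled with an UPDATE ... FROM join against the ECDC worldwide distribution table. The source table and the pop_data_2019 / country_territory_code column names are assumptions; check them against your lab instructions:

UPDATE covid.oxford_policy_tracker t0
SET t0.population = t1.pop_data_2019
FROM (
  SELECT DISTINCT country_territory_code, pop_data_2019
  FROM `bigquery-public-data.covid19_ecdc.covid_19_geographic_distribution_worldwide`
) t1
WHERE t0.alpha_3_code = t1.country_territory_code;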


Task 4: Add country area data to the country_area column

code (D):
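
Similarly, country_area can be joined in from the Census Bureau international dataset (table and column names assumed; verify against the lab):

UPDATE covid.oxford_policy_tracker t0
SET t0.country_area = t1.country_area
FROM `bigquery-public-data.census_bureau_international.country_names_area` t1
WHERE t0.country_name = t1.country_name;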


Task 5: Populate the mobility record data

code (E)
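
A sketch of populating the mobility record from the Google mobility report, averaging each measure per country and day. Treat the whole statement as an assumption to adapt; the source column names are the standard *_percent_change_from_baseline fields:

UPDATE covid.oxford_policy_tracker t0
SET
  t0.mobility.avg_retail = t1.avg_retail,
  t0.mobility.avg_grocery = t1.avg_grocery,
  t0.mobility.avg_parks = t1.avg_parks,
  t0.mobility.avg_transit = t1.avg_transit,
  t0.mobility.avg_workplace = t1.avg_workplace,
  t0.mobility.avg_residential = t1.avg_residential
FROM (
  SELECT country_region, date,
    AVG(retail_and_recreation_percent_change_from_baseline) AS avg_retail,
    AVG(grocery_and_pharmacy_percent_change_from_baseline) AS avg_grocery,
    AVG(parks_percent_change_from_baseline) AS avg_parks,
    AVG(transit_stations_percent_change_from_baseline) AS avg_transit,
    AVG(workplaces_percent_change_from_baseline) AS avg_workplace,
    AVG(residential_percent_change_from_baseline) AS avg_residential
  FROM `bigquery-public-data.covid19_google_mobility.mobility_report`
  GROUP BY country_region, date
) t1
WHERE t0.country_name = t1.country_region AND t0.date = t1.date;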


Task 6: Query missing data in population & country_area columns

code (F):
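
A sketch that lists the distinct country names still missing either value (a country missing both will appear twice because of UNION ALL):

SELECT DISTINCT country_name
FROM covid.oxford_policy_tracker
WHERE population IS NULL
UNION ALL
SELECT DISTINCT country_name
FROM covid.oxford_policy_tracker
WHERE country_area IS NULL
ORDER BY country_name;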


Challenge Lab

Qwiklabs [GSP342]


GSP342: Ensure Access & Identity in Google Cloud

Task 1: Create a custom security role.

Execute this command in Cloud Shell to create the YAML file for the custom role:

code (A):
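
The command itself isn't shown; it presumably just opens a new file in an editor, for example (the file name is a placeholder):

nano role-definition.yaml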

Add the following content to the file:

code (B)
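
The file contents aren't included either. A custom role definition YAML looks like the sketch below; the title, description and permission list here are assumptions for the storage-editor style role this lab asks for, so paste the exact permissions from your lab instructions:

title: "Orca Storage Editor"
description: "Custom role for the Orca team"
stage: "ALPHA"
includedPermissions:
- storage.buckets.get
- storage.buckets.list
- storage.objects.get
- storage.objects.list
- storage.objects.update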

Ctrl+X --> Y --> Enter (to save and exit nano)

code (C):
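
code (C) would be the role-creation command; roughly, with the role ID taken from your lab instructions (orca_storage_editor_xxx is a placeholder):

gcloud iam roles create orca_storage_editor_xxx --project $DEVSHELL_PROJECT_ID --file role-definition.yaml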

Commands for Task 2 & Task 3:
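
Those commands aren't included here. Task 2 creates a service account and Task 3 binds the logging/monitoring roles plus the custom role to it; a hedged sketch, with the service-account and role names as placeholders for the values your lab gives you:

gcloud iam service-accounts create orca-private-cluster-sa --display-name "Orca Private Cluster SA"

SA_EMAIL=orca-private-cluster-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com

gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SA_EMAIL --role roles/monitoring.viewer

gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SA_EMAIL --role roles/monitoring.metricWriter

gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SA_EMAIL --role roles/logging.logWriter

gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SA_EMAIL --role projects/$DEVSHELL_PROJECT_ID/roles/orca_storage_editor_xxx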

Alternative method (shown in the video) if you get an error:

code (D):

Task 4: Create and configure a new Kubernetes Engine private cluster

code (E)
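
code (E) isn't shown; a private-cluster creation command generally looks like the sketch below. Every name and CIDR here (cluster name, network, subnet, master CIDR, authorized network) is a placeholder for the values given in your lab instructions:

gcloud container clusters create orca-cluster-1 \
    --zone us-east1-b \
    --network orca-build-vpc \
    --subnetwork orca-build-subnet \
    --enable-private-nodes \
    --enable-private-endpoint \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.32/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks 192.168.10.2/32 \
    --service-account orca-private-cluster-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com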

Task 5: Deploy an application to a private Kubernetes Engine cluster.

code 5(a):

code 5(b):
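
Neither snippet is reproduced. The usual pattern is to SSH to the jumphost, fetch the cluster credentials over the internal IP, and deploy the sample app; the host/cluster names and zone below are placeholders from the lab:

gcloud compute ssh orca-jumphost --zone us-east1-b

# on the jumphost (only if kubectl / the auth plugin are not already installed):
sudo apt-get install -y kubectl google-cloud-sdk-gke-gcloud-auth-plugin

gcloud container clusters get-credentials orca-cluster-1 --internal-ip --zone us-east1-b

kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0

kubectl expose deployment hello-server --name hello-server --type LoadBalancer --port 80 --target-port 8080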

Challenge Lab

Qwiklabs [GSP787] - Insights from Data with BigQuery

Query 1: Total Confirmed Cases



Build a query that will answer "What was the total count of confirmed cases on Apr 15, 2020?" The query needs to return a single row containing the sum of confirmed cases across all countries. The name of the column should be total_cases_worldwide.
code (A):
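
code (A) isn't shown; a sketch, assuming the JHU CSSE summary table used by this lab (written here as `bigquery-public-data.covid19_jhu_csse_eu.summary`; substitute the dataset your lab references):

SELECT SUM(confirmed) AS total_cases_worldwide
FROM `bigquery-public-data.covid19_jhu_csse_eu.summary`
WHERE date = '2020-04-15';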

Query 2: Worst Affected Areas

Build a query for answering "How many states in the US had more than 100 deaths on Apr 10, 2020?" The query needs to list the output in the field count_of_states. Hint: Don't include NULL values.
code (B)
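
A sketch against the same assumed table: aggregate deaths per state first, then count the states above the threshold.

WITH deaths_by_state AS (
  SELECT province_state, SUM(deaths) AS death_count
  FROM `bigquery-public-data.covid19_jhu_csse_eu.summary`
  WHERE country_region = 'US'
    AND date = '2020-04-10'
    AND province_state IS NOT NULL
  GROUP BY province_state
)
SELECT COUNT(*) AS count_of_states
FROM deaths_by_state
WHERE death_count > 100;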

Query 3: Identifying Hotspots

Build a query that will answer "List all the states in the United States of America that had more than 1000 confirmed cases on Apr 10, 2020?" The query needs to return the State Name and the corresponding confirmed cases arranged in descending order. Name of the fields to return state and total_confirmed_cases.
code (C):
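
A sketch (same assumed table):

SELECT province_state AS state, SUM(confirmed) AS total_confirmed_cases
FROM `bigquery-public-data.covid19_jhu_csse_eu.summary`
WHERE country_region = 'US'
  AND date = '2020-04-10'
  AND province_state IS NOT NULL
GROUP BY province_state
HAVING total_confirmed_cases > 1000
ORDER BY total_confirmed_cases DESC;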

Query 4: Fatality Ratio

Build a query that will answer "What was the case-fatality ratio in Italy for the month of April 2020?" Case-fatality ratio here is defined as (total deaths / total confirmed cases) * 100. Write a query to return the ratio for the month of April 2020 and containing the following fields in the output: total_confirmed_cases, total_deaths, case_fatality_ratio.
code (D):
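
A sketch (same assumed table); the month of April is filtered with a date range:

SELECT
  SUM(confirmed) AS total_confirmed_cases,
  SUM(deaths) AS total_deaths,
  (SUM(deaths) / SUM(confirmed)) * 100 AS case_fatality_ratio
FROM `bigquery-public-data.covid19_jhu_csse_eu.summary`
WHERE country_region = 'Italy'
  AND date BETWEEN '2020-04-01' AND '2020-04-30';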

Query 5: Identifying specific day

Build a query that will answer: "On what day did the total number of deaths cross 10000 in Italy?" The query should return the date in the format yyyy-mm-dd.
code (E)
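
A sketch (same assumed table): sum the cumulative deaths per day for Italy and return the first date on which the total exceeds 10000.

SELECT date
FROM `bigquery-public-data.covid19_jhu_csse_eu.summary`
WHERE country_region = 'Italy'
GROUP BY date
HAVING SUM(deaths) > 10000
ORDER BY date
LIMIT 1;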

Query 6: Finding days with zero net new cases

The following query is written to identify the number of days in India between 21 Feb 2020 and 15 March 2020 when there were zero increases in the number of confirmed cases. However it is not executing properly.
code (F):
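
The broken query isn't reproduced here; a working sketch of the intended logic (same assumed table) compares each day's cumulative count with the previous day's using LAG:

WITH india_cases_by_date AS (
  SELECT date, SUM(confirmed) AS cases
  FROM `bigquery-public-data.covid19_jhu_csse_eu.summary`
  WHERE country_region = 'India'
    AND date BETWEEN '2020-02-21' AND '2020-03-15'
  GROUP BY date
),
india_previous_day_comparison AS (
  SELECT date, cases,
    LAG(cases) OVER (ORDER BY date) AS previous_day,
    cases - LAG(cases) OVER (ORDER BY date) AS net_new_cases
  FROM india_cases_by_date
)
SELECT COUNT(date) AS days_with_zero_net_new_cases
FROM india_previous_day_comparison
WHERE net_new_cases = 0;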

Query 7: Doubling rate

Using the previous query as a template, write a query to find out the dates on which the confirmed cases increased by more than 10% compared to the previous day (indicating doubling rate of ~ 7 days) in the US between the dates March 22, 2020 and April 20, 2020. The query needs to return the list of dates, the confirmed cases on that day, the confirmed cases the previous day, and the percentage increase in cases between the days. Use the following names for the returned fields: Date, Confirmed_Cases_On_Day, Confirmed_Cases_Previous_Day and Percentage_Increase_In_Cases.
code (G):
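
A sketch using the same LAG pattern (same assumed table); the 10% filter and output field names follow the task description:

WITH us_cases_by_date AS (
  SELECT date, SUM(confirmed) AS cases
  FROM `bigquery-public-data.covid19_jhu_csse_eu.summary`
  WHERE country_region = 'US'
    AND date BETWEEN '2020-03-22' AND '2020-04-20'
  GROUP BY date
),
us_previous_day_comparison AS (
  SELECT date, cases,
    LAG(cases) OVER (ORDER BY date) AS previous_day,
    (cases - LAG(cases) OVER (ORDER BY date)) * 100 / LAG(cases) OVER (ORDER BY date) AS percentage_increase
  FROM us_cases_by_date
)
SELECT
  date AS Date,
  cases AS Confirmed_Cases_On_Day,
  previous_day AS Confirmed_Cases_Previous_Day,
  percentage_increase AS Percentage_Increase_In_Cases
FROM us_previous_day_comparison
WHERE percentage_increase > 10
ORDER BY date;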

Query 8: Recovery rate

Build a query to list the recovery rates of countries arranged in descending order (limit to 10) up to May 10, 2020. Restrict the query to only those countries having more than 50K confirmed cases. The query needs to return the following fields: country, recovered_cases, confirmed_cases, recovery_rate.
code (H):
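
A sketch (same assumed table), aggregating to country level as of May 10, 2020:

WITH cases_by_country AS (
  SELECT
    country_region AS country,
    SUM(confirmed) AS confirmed_cases,
    SUM(recovered) AS recovered_cases
  FROM `bigquery-public-data.covid19_jhu_csse_eu.summary`
  WHERE date = '2020-05-10'
  GROUP BY country_region
)
SELECT
  country,
  recovered_cases,
  confirmed_cases,
  (recovered_cases / confirmed_cases) * 100 AS recovery_rate
FROM cases_by_country
WHERE confirmed_cases > 50000
ORDER BY recovery_rate DESC
LIMIT 10;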

Query 9: CDGR - Cumulative Daily Growth Rate

The following query is trying to calculate the CDGR (Cumulative Daily Growth Rate) for France on May 10, 2020, counting from the day the first case was reported. The first case was reported on Jan 24, 2020.
code (I):
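
The query to be fixed isn't included; a working sketch of the CDGR calculation (same assumed table), comparing the first reported day with May 10, 2020:

WITH france_cases AS (
  SELECT date, SUM(confirmed) AS total_cases
  FROM `bigquery-public-data.covid19_jhu_csse_eu.summary`
  WHERE country_region = 'France'
    AND date IN ('2020-01-24', '2020-05-10')
  GROUP BY date
),
summary AS (
  SELECT
    total_cases AS first_day_cases,
    LEAD(total_cases) OVER (ORDER BY date) AS last_day_cases,
    DATE_DIFF(LEAD(date) OVER (ORDER BY date), date, DAY) AS days_diff
  FROM france_cases
  ORDER BY date
  LIMIT 1
)
SELECT
  first_day_cases,
  last_day_cases,
  days_diff,
  POWER(last_day_cases / first_day_cases, 1 / days_diff) - 1 AS cdgr
FROM summary;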

Create a Data Studio report

Create a Data Studio report that plots the following for the United States:
code (J):

Instead of step-by-step instructions, you will use the skills learned from the labs in the quest to figure out how to complete the tasks on your own. An automated scoring system (shown on the lab page) will provide feedback on whether you have completed your tasks correctly.

Challenge Lab

Qwiklabs [GSP324] - Explore Machine Learning Models with Explainable AI

Task 1:



TODO (1):

TODO (2):

Fill out this information:

TODO:

Now create a version. This will take a couple of minutes to deploy.
TODO:

Create your second AI Platform model: limited_model
TODO:

TODO:


Deploy and Manage Cloud Environments with Google Cloud: Challenge Lab
















Task 1: Create Production Environment

cd /work/dm


sed -i s/SET_REGION/us-east1/g prod-network.yaml

gcloud deployment-manager deployments create prod-network --config=prod-network.yaml


gcloud config set compute/zone us-east1-b


gcloud container clusters create kraken-prod \
    --num-nodes 2 \
    --network kraken-prod-vpc \
    --subnetwork kraken-prod-subnet \
    --zone us-east1-b


gcloud container clusters get-credentials kraken-prod


cd /work/k8s


for F in *.yaml; do kubectl create -f "$F"; done





Task 2: Configure the admin host



Create kraken-admin



gcloud config set compute/zone us-east1-b


gcloud compute instances create kraken-admin --network-interface="subnet=kraken-mgmt-subnet" --network-interface="subnet=kraken-prod-subnet"



Create alert:

Open monitoring


Create an alert


Configure the policy to email you when the jumphost's CPU utilization is above 50% for 1 minute.
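
The steps above go through the console. If you prefer the CLI, here is a hedged sketch that creates the policy from a file; the display names are placeholders, the filter matches all GCE instances (scope it to the jumphost as needed), and you still attach your email notification channel separately (for example in the console):

cat > cpu-policy.json <<'EOF'
{
  "displayName": "jumphost CPU above 50%",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "VM CPU utilization > 50% for 1 min",
      "conditionThreshold": {
        "filter": "metric.type = \"compute.googleapis.com/instance/cpu/utilization\" AND resource.type = \"gce_instance\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 0.5,
        "duration": "60s",
        "aggregations": [
          { "alignmentPeriod": "60s", "perSeriesAligner": "ALIGN_MEAN" }
        ]
      }
    }
  ]
}
EOF

gcloud alpha monitoring policies create --policy-from-file=cpu-policy.json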





Task 3: Verify the Spinnaker deployment


Use cloudshell and run


gcloud config set compute/zone us-east1-b


gcloud container clusters get-credentials spinnaker-tutorial


DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")


kubectl port-forward --namespace default $DECK_POD 8080:9000 >> /dev/null &






#Go to cloudshell webpreview and go to applications->sample


#Open Pipelines and manually run the pipeline if it is not already running. Approve the deployment to production. Check the production frontend endpoint (use http, not the default https).


#Back in cloudshell run these commands to push a change






gcloud config set compute/zone us-east1-b


gcloud source repos clone sample-app


cd sample-app


touch a


git config --global user.email "$(gcloud config get-value account)"


git config --global user.name "Student"


git add a


git commit -m "change"


git tag v1.0.1


git push --tags



 

GSP304 | Build and Deploy a Docker Image to a Kubernetes Cluster



# Copy the application archive (use whichever bucket your lab specifies):

gsutil cp gs://sureskills-ql/challenge-labs/ch04-kubernetes-app-deployment/echo-web.tar.gz .


# or, if the archive is staged in your project's bucket:

gsutil cp gs://$DEVSHELL_PROJECT_ID/echo-web.tar.gz .

tar -xvf echo-web.tar.gz

gcloud builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/echo-app:v1 .


gcloud container clusters create echo-cluster --num-nodes 2 --zone us-central1-a --machine-type n1-standard-2



kubectl create deployment echo-web --image=gcr.io/$DEVSHELL_PROJECT_ID/echo-app:v1


kubectl expose deployment echo-web --type=LoadBalancer --port=80 --target-port=8000


kubectl get svc

 

Integrate with Machine Learning APIs: Challenge Lab | GSP329



 Task " 1 & 2 " 

export SANAME=challenge

gcloud iam service-accounts create $SANAME

gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=serviceAccount:$SANAME@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role=roles/bigquery.admin

gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member=serviceAccount:$SANAME@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role=roles/storage.admin

gcloud iam service-accounts keys create sa-key.json --iam-account $SANAME@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com

export GOOGLE_APPLICATION_CREDENTIALS=${PWD}/sa-key.json

gsutil cp gs://$DEVSHELL_PROJECT_ID/analyze-images.py .




############### Task 3 ###########################################



# DONT CHANGE ANYTHING


# Dataset: image_classification_dataset

# Table name: image_text_detail

import os

import sys


# Import Google Cloud Library modules

from google.cloud import storage, bigquery, language, vision, translate_v2


if ('GOOGLE_APPLICATION_CREDENTIALS' in os.environ):

    if (not os.path.exists(os.environ['GOOGLE_APPLICATION_CREDENTIALS'])):

        print ("The GOOGLE_APPLICATION_CREDENTIALS file does not exist.\n")

        exit()

else:

    print ("The GOOGLE_APPLICATION_CREDENTIALS environment variable is not defined.\n")

    exit()


if len(sys.argv)<3:

    print('You must provide parameters for the Google Cloud project ID and Storage bucket')

    print ('python3 ' + sys.argv[0] + ' [PROJECT_NAME] [BUCKET_NAME]')

    exit()


project_name = sys.argv[1]

bucket_name = sys.argv[2]


# Set up our GCS, BigQuery, and Natural Language clients

storage_client = storage.Client()

bq_client = bigquery.Client(project=project_name)

nl_client = language.LanguageServiceClient()


# Set up client objects for the vision and translate_v2 API Libraries

vision_client = vision.ImageAnnotatorClient()

translate_client = translate_v2.Client()


# Setup the BigQuery dataset and table objects

dataset_ref = bq_client.dataset('image_classification_dataset')

dataset = bigquery.Dataset(dataset_ref)

table_ref = dataset.table('image_text_detail')

table = bq_client.get_table(table_ref)


# Create an array to store results data to be inserted into the BigQuery table

rows_for_bq = []


# Get a list of the files in the Cloud Storage Bucket

files = storage_client.bucket(bucket_name).list_blobs()

bucket = storage_client.bucket(bucket_name)


print('Processing image files from GCS. This will take a few minutes..')


# Process files from Cloud Storage and save the result to send to BigQuery

for file in files:    

    if file.name.endswith('jpg') or  file.name.endswith('png'):

        file_content = file.download_as_string()

        

        # TBD: Create a Vision API image object called image_object 

        # Ref: https://googleapis.dev/python/vision/latest/gapic/v1/types.html#google.cloud.vision_v1.types.Image

        from google.cloud import vision_v1

        image_object = vision_v1.types.Image(content=file_content)


        # Detect text in the image, reusing the vision_client created above

        # Ref: https://googleapis.dev/python/vision/latest/gapic/v1/api.html#google.cloud.vision_v1.ImageAnnotatorClient.document_text_detection

        response = vision_client.text_detection(image=image_object)

    

        # Save the text content found by the vision API into a variable called text_data

        text_data = response.text_annotations[0].description


        # Save the text detection response data in <filename>.txt to cloud storage

        file_name = file.name.split('.')[0] + '.txt'

        blob = bucket.blob(file_name)

        # Upload the contents of the text_data string variable to the Cloud Storage file 

        blob.upload_from_string(text_data, content_type='text/plain')


        # Extract the description and locale data from the response file

        # into variables called desc and locale

        # using response object properties e.g. response.text_annotations[0].description

        desc = response.text_annotations[0].description

        locale = response.text_annotations[0].locale

        

        # if the locale is English (en) save the description as the translated_txt

        if locale == 'en':

            translated_text = desc

        else:

            # For non-EN locales pass the description data to the translation API

            # (target_language 'en'), reusing the translate_client created above

            # ref: https://googleapis.dev/python/translation/latest/client.html#google.cloud.translate_v2.client.Client.translate

            translation = translate_client.translate(text_data, target_language='en')

            translated_text = translation['translatedText']

        print(translated_text)

        

        # if there is response data save the original text read from the image, 

        # the locale, translated text, and filename

        if len(response.text_annotations) > 0:

            rows_for_bq.append((desc, locale, translated_text, file.name))


print('Writing Vision API image data to BigQuery...')

# Write original text, locale and translated text to BQ

# TBD: When the script is working uncomment the next line to upload results to BigQuery

errors = bq_client.insert_rows(table, rows_for_bq)


assert errors == []





#############################################################################################################################


Run this in Cloud Shell:


python3 analyze-images.py $DEVSHELL_PROJECT_ID $DEVSHELL_PROJECT_ID



################################################################################################################################



Then go to BigQuery and run:

SELECT locale,COUNT(locale) as lcount FROM image_classification_dataset.image_text_detail GROUP BY locale ORDER BY lcount DESC


Build a Website on Google Cloud: Challenge Lab



########################################################################################

Task 1: Download the monolith code and build your container



git clone https://github.com/googlecodelabs/monolith-to-microservices.git



cd ~/monolith-to-microservices

./setup.sh


cd ~/monolith-to-microservices/monolith

npm start


gcloud services enable cloudbuild.googleapis.com

gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/fancytest:1.0.0 .



##########################################################################################


Task 2: Create a Kubernetes cluster and deploy the application



gcloud config set compute/zone us-central1-a

gcloud services enable container.googleapis.com

gcloud container clusters create fancy-cluster --num-nodes 3


kubectl create deployment fancytest --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/fancytest:1.0.0

kubectl expose deployment fancytest --type=LoadBalancer --port 80 --target-port 8080


###############################################################################################

Task 3: Create a containerized version of your microservices



cd ~/monolith-to-microservices/microservices/src/orders

gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/orders:1.0.0 .


cd ~/monolith-to-microservices/microservices/src/products

gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/products:1.0.0 .


#################################################################################################


Task 4: Deploy the new microservices


kubectl create deployment orders --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/orders:1.0.0

kubectl expose deployment orders --type=LoadBalancer --port 80 --target-port 8081


kubectl create deployment products --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/products:1.0.0

kubectl expose deployment products --type=LoadBalancer --port 80 --target-port 8082



###################################################################################################


Task 5: Configure the Frontend microservice


cd ~/monolith-to-microservices/react-app

nano .env
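
The .env file points the React frontend at the two LoadBalancer services created in Task 4. A sketch of the edit, with the IP placeholders to be replaced by the EXTERNAL-IP values shown by kubectl get services:

REACT_APP_ORDERS_URL=http://<ORDERS_EXTERNAL_IP>/api/orders
REACT_APP_PRODUCTS_URL=http://<PRODUCTS_EXTERNAL_IP>/api/products

Save the file before moving on to Task 6.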


###################################################################################################

Task 6: Create a containerized version of the Frontend microservice



cd ~/monolith-to-microservices/microservices/src/frontend

gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/frontend:1.0.0 .



#####################################################################################################


Task 7: Deploy the Frontend microservice


kubectl create deployment frontend --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/frontend:1.0.0


kubectl expose deployment frontend --type=LoadBalancer --port 80 --target-port 8080