[PMLE CERTIFICATE - EXAMTOPIC] DUMPS Q57-Q60

EXAMTOPIC DUMPS Q57-Q60 ; Proper Metrics - Video Recommendation, Non-Convergence of GD, KFP for repeatable experiments regarding model architectures, Imbalanced data

Q 57.

Your company manages a video sharing website where users can watch and upload videos. You need to create an ML model to predict which newly uploaded videos will be the most popular so that those videos can be prioritized on your company's website. Which result should you use to determine whether the model is successful?

PROPER Metrics - Video Recommendation
  • ❌ A. The model predicts videos as popular if the user who uploads them has over 10,000 likes.
    → IMPROPER CRITERION - THE ABSOLUTE NUMBER OF LIKES
  • ❌ B. The model predicts 97.5% of the most popular clickbait videos measured by number of clicks.
    → IMPROPER CRITERION - CLICKBAIT VIDEOS : A SUBSET OF UPLOADED VIDEOS
    Maximize Click Rate ; Users may click on something but then not stay on it very long. This optimizes clickbait, so maybe we should try something else.
  • C. The model predicts 95% of the most popular videos measured by watch time within 30 days of being uploaded.
  • ❌ D. The Pearson correlation coefficient between the log-transformed number of views after 7 days and 30 days after publication is equal to 0.
    → Pearson's correlation coefficient measures linear correlation and ranges between -1 and +1; a value of 0 would mean the 7-day and 30-day view counts have no linear relationship at all, which says nothing about how well the model identifies popular videos.

Quantify the Metrics - Video Recommendation

Success and Failure Metrics - Video Recommendations
- A success metric : the number of popular videos properly predicted by the model.
- Success means predicting 95% of the most popular videos as measured by watch time within 28 days of being uploaded.
- Failure means the number of popular videos properly predicted is no better than current heuristics.
Failure Scenarios : Watch for failures that are not related to your success metric. For instance, a video suggestion system that always recommends "clickbait" videos is not successful at providing a high-quality viewing experience.
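
A minimal sketch (illustrative, not from the exam material) of how this success metric could be computed, assuming `watch_time_28d` maps each video ID to its total watch time within 28 days of upload and `predicted_popular` is the set of videos the model flagged as popular:

```python
def popular_video_recall(watch_time_28d, predicted_popular, top_fraction=0.05):
    """Fraction of the truly most-popular videos (ranked by 28-day watch time)
    that the model also flagged as popular."""
    ranked = sorted(watch_time_28d, key=watch_time_28d.get, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))   # "most popular" = top 5% here (assumption)
    actual_popular = set(ranked[:k])
    hits = len(actual_popular & set(predicted_popular))
    return hits / len(actual_popular)

# Success per the note above: recall >= 0.95; failure: no better than the current heuristics.
```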
Bad Objectives
(1) Maximize Click Rate : Users may click on something but then not stay on it very long. This optimizes clickbait, so maybe we should try something else.
(2) Maximize Watch Time : Users may watch a long time, but then exit the session.
(2-e.g) A Minecraft video gets a 0.1% audience that watches video for 3 hours, 8% of whom watch another video for 5 minutes, while the rest quit watching altogether. The system is maximizing watch time, so the users' "watch next" list will consist solely of long Minecraft videos.
It is important to note that "multiple short watches" can be just as good as one long watch and can even increase the overall session watch time. On to the next objective!
(3) Maximize Session Watch Time : This model still favors longer videos, which is still a problem. This model does mine particular interests really well.
(3-e.g) if a user watches a video of LeBron James dunking, the system will show them every LeBron dunking video. Ever. The video recommendation system is really good at that, but user experience suffers. Each person can see YouTube as the place to go to watch a specific type of video. Diversity suffers.
(4) Increase Diversity & Maximize Session Watch Time : What problems do you think might arise from this objective? Keep in mind Goodhart's law : "When a measure becomes a target, it ceases to be a good measure."

Q 58.

You are working on a Neural Network-based project. The dataset provided to you has columns with different ranges. While preparing the data for model training, you discover that gradient optimization is having difficulty moving weights to a good solution. What should you do?

Non-Convergence of GD
  • ❌ A. Use feature construction to combine the strongest features.
  • B. Use the representation transformation (normalization) technique.
  • ❌ C. Improve the data cleaning step by removing features with missing values.
  • ❌ D. Change the partitioning step to reduce the dimension of the test set and have a larger training set.
  • Convergence : A state reached during the training of a model when the loss changes very little between each iteration.

NON-CONVERGENCE - (Batch) Normalization

  • Normalization changes the values of a dataset's numeric columns to a common scale, without distorting differences in the ranges of values. Normalization is required only when features have very different ranges.

  • Normalizing inputs helps boost the training of a neural network and leads to faster convergence. Scaling features so they take on both positive and negative values lets the weight vector change direction more easily, which speeds up learning; fewer epochs or iterations of gradient descent are needed to drive the loss toward its minimum.
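
A minimal sketch of the idea, using z-score scaling as one common normalization; the feature values below are illustrative:

```python
import numpy as np

# Two columns with very different ranges (e.g. a dollar amount vs. a ratio).
X = np.array([[1200.0, 0.3],
              [ 800.0, 0.7],
              [1500.0, 0.1]])

mean = X.mean(axis=0)
std = X.std(axis=0)
X_norm = (X - mean) / std   # each column now has mean 0 and unit variance

# Reuse the SAME mean/std (computed on the training set) for validation/test data,
# so the network always sees consistently scaled inputs.
```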

Activation Function regarding Speed

Easy and fast convergence of the network can be the first criterion.
ReLU : advantageous in terms of speed, but some units' gradients can die (the "dying ReLU" problem) when their inputs stay negative. It is usually used in intermediate layers rather than the output.
Leaky ReLU : the usual first remedy for dying ReLU units, since it keeps a small gradient for negative inputs. For a DNN it is still advisable to start experiments with plain ReLU.
Softmax : usually used in output layers.
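
A minimal Keras sketch of the placement described above (layer sizes, input width, and class count are illustrative assumptions): ReLU / Leaky ReLU in the hidden layers, softmax at the output of a multi-class classifier.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                      # 20 input features (assumed)
    tf.keras.layers.Dense(64, activation="relu"),     # ReLU hidden layer: fast convergence
    tf.keras.layers.Dense(64),
    tf.keras.layers.LeakyReLU(),                      # alternative if ReLU units start to "die"
    tf.keras.layers.Dense(10, activation="softmax"),  # output layer: probabilities over 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```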

Q 59.

Your data science team needs to rapidly experiment with various features, model architectures, and hyperparameters. They need to track the accuracy metrics for various experiments and use an API to query the metrics over time. What should they use to track and report their experiments while minimizing manual effort?

KFP for repeatable experiments regarding model architectures
  • A. Use Kubeflow Pipelines to execute the experiments. Export the metrics file, and query the results using the Kubeflow Pipelines API.
    KFP UI for managing training experiments, jobs, and runs, plus an engine for scheduling multi-step ML workflows
    KFP supports the export of scalar metrics. You can write a list of metrics to a local file to describe the performance of the model. The pipeline agent uploads the local file as your run-time metrics. You can view the uploaded metrics as a visualization in the Runs page for a particular experiment in the Kubeflow Pipelines UI (a minimal sketch of this metrics file follows the option list below).

  • ❌ B. Use AI Platform Training to execute the experiments. Write the accuracy metrics to BigQuery, and query the results using the BigQuery API.
    → TOO MUCH MANUAL EFFORT ; for every new experiment you have to write the metrics to BigQuery and query them yourself.

  • ❌ C. Use AI Platform Training to execute the experiments. Write the accuracy metrics to Cloud Monitoring, and query the results using the Monitoring API.
    → KFP ALREADY HAS A LINK TO Cloud Monitoring.

  • ❌ D. Use AI Platform Notebooks to execute the experiments. Collect the results in a shared Google Sheets file, and query the results using the Google Sheets API.
    → TOO MUCH MANUAL EFFORT.
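
A minimal sketch of the scalar-metrics export mentioned under option A, following the Kubeflow Pipelines v1 convention of writing /mlpipeline-metrics.json inside a pipeline step; the metric names and values are illustrative:

```python
import json

# Runs inside the container of a pipeline step, after training/evaluation.
metrics = {
    "metrics": [
        {"name": "accuracy-score", "numberValue": 0.957, "format": "PERCENTAGE"},
        {"name": "loss",           "numberValue": 0.210, "format": "RAW"},
    ]
}
with open("/mlpipeline-metrics.json", "w") as f:
    json.dump(metrics, f)

# The pipeline agent uploads this file, and the values show up in the Runs page
# of the KFP UI. Runs (with their metrics) can also be listed programmatically,
# e.g. with the KFP SDK client (endpoint and experiment ID are assumptions):
#   import kfp
#   client = kfp.Client(host="<KFP endpoint>")
#   runs = client.list_runs(experiment_id="<experiment-id>")
```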

Providing a way to deploy robust, repeatable ML pipelines along with monitoring, auditing, version tracking, and reproducibility

  • Mostly utilized when we need to deploy our trained models to production.
  • Because of configuration differences between our local environment and the production environment, many of our ML tasks break when moved to production for deployment and model serving. THAT'S WHY KUBEFLOW WAS CREATED.
Kubeflow Pipelines (KFP) : Providing a way to deploy robust, repeatable machine learning pipelines along with monitoring, auditing, version tracking, and reproducibility.
For building and deploying portable, scalable machine learning workflows based on Docker containers.
UI for managing training experiments, jobs, and runs, plus an engine for scheduling multi-step ML workflows.
2 SDKs ; one to define and manipulate pipelines, the other for Notebooks to interact with the system.
Cloud AI Platform Pipelines makes it easy to set up a KFP installation.
While a given pipeline step is running, you can click on it to get more information about it, including viewing its pod logs. (also view the logs for a pipeline step via the link to its Cloud Logging (Stackdriver) logs, even if the cluster node has been torn down)
View model training information in TensorBoard.
Explore the Artifacts and Executions dashboard
The last step in the pipeline deploys a web app, which provides a UI for querying the trained model — served via TF Serving — to make predictions.
Kubeflow Components

Q 60.

You work for a bank and are building a random forest model for fraud detection. You have a dataset that includes transactions, of which 1% are identified as fraudulent. Which data transformation strategy would likely improve the performance of your classifier?

Handling imbalanced data

❌ A. Write your data in TFRecords.
  → a file-format choice; it does not change the 1% / 99% class imbalance.
❌ B. Z-normalize all the numeric features.
  → feature scaling does not change the class ratio, and tree-based models such as random forests are largely insensitive to feature scale.
C. Oversample the fraudulent transactions 10 times.
  → rebalances the training data so the classifier sees far more fraud examples (a small oversampling sketch follows below).
❌ D. Use one-hot encoding on all categorical features.
  → an encoding choice; it does not address the class imbalance.
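
A minimal pandas sketch of option C; the column name is_fraud and the 10x factor mirror the question, everything else is an illustrative assumption:

```python
import pandas as pd

def oversample_fraud(df: pd.DataFrame, factor: int = 10) -> pd.DataFrame:
    """Replicate the rare fraudulent rows so the random forest trains on a
    less skewed class distribution."""
    fraud = df[df["is_fraud"] == 1]
    legit = df[df["is_fraud"] == 0]
    fraud_upsampled = pd.concat([fraud] * factor, ignore_index=True)
    return (pd.concat([legit, fraud_upsampled], ignore_index=True)
              .sample(frac=1.0, random_state=42)   # shuffle so classes stay mixed
              .reset_index(drop=True))

# Only oversample the TRAINING split; leave validation/test data untouched so
# evaluation still reflects the real 1% fraud rate.
```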