You are implementing security best practices on your data pipeline. Currently, you are manually executing jobs as the Project Owner. You want to automate these jobs by taking nightly batch files containing non-public information from Google Cloud Storage, processing them with a Spark Scala job on a Google Cloud Dataproc cluster, and depositing the results into Google BigQuery.
How should you securely run this workload?
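A widely recommended pattern for this kind of workload is to stop running jobs under the Project Owner identity and instead create a dedicated service account that holds only the roles the pipeline needs (for example, read access to the source Cloud Storage bucket, BigQuery Data Editor on the target dataset, and Dataproc Worker), then create the Dataproc cluster with that service account so the Spark job picks up its credentials automatically from the cluster, with no keys embedded in code. The sketch below shows, under those assumptions, roughly what the Spark Scala job itself might look like. The bucket, dataset, and column names are hypothetical, and it assumes the spark-bigquery connector is on the cluster classpath.

```scala
import org.apache.spark.sql.SparkSession

object NightlyBatchJob {
  def main(args: Array[String]): Unit = {
    // Credentials come from the cluster's dedicated service account via the
    // metadata server, not from the Project Owner and not from embedded keys.
    val spark = SparkSession.builder()
      .appName("nightly-batch-gcs-to-bigquery")
      .getOrCreate()

    // Hypothetical bucket path for the nightly batch files.
    val input = spark.read
      .option("header", "true")
      .csv("gs://example-nightly-batch/incoming/*.csv")

    // Placeholder transformation standing in for the real processing logic.
    val results = input.groupBy("some_key").count()

    // Writing to BigQuery requires the spark-bigquery connector; the staging
    // bucket and target dataset/table names here are illustrative only.
    results.write
      .format("bigquery")
      .option("temporaryGcsBucket", "example-staging-bucket")
      .save("example_dataset.nightly_results")

    spark.stop()
  }
}
```

The job itself stays free of authentication logic; access control is expressed entirely through the IAM roles granted to the cluster's service account, which is what keeps the non-public data limited to the identities that actually need it.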