
# https://www.terraform.io/docs/providers/google/r/app_engine_application.html

resource "google_project" "my_project" {
  name       = "My Project"
  project_id = "your-project-id"
  org_id     = "1234567"
}

resource "google_app_engine_application" "app" {
  project     = "${google_project.my_project.project_id}"
  location_id = "us-central"
}
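If needed, the created application's address can be surfaced with an output; a sketch, assuming the default_hostname attribute exported by google_app_engine_application:

output "app_hostname" {
  value = "${google_app_engine_application.app.default_hostname}"
}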

# google_compute_instance

resource "google_compute_instance" "default" {
  name         = "test"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  # Network tags, usable in firewall rules
  tags = ["foo", "bar"]

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  // Local SSD disk
  scratch_disk {
  }

  network_interface {
    network = "default"

    access_config {
      // Ephemeral IP
    }
  }

  # Arbitrary key/value metadata exposed to the instance
  metadata = {
    foo = "bar"
  }

  # Runs at instance boot
  metadata_startup_script = "echo hi > /test.txt"

  service_account {
    # OAuth scopes granted to the instance's default service account
    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
  }
}
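The empty access_config block asks GCE for an ephemeral external IP; it can be read back through the instance's exported attributes, for example (a sketch, with the attribute path following the google provider's schema):

output "instance_ip" {
  # nat_ip is the ephemeral external address allocated by access_config
  value = "${google_compute_instance.default.network_interface.0.access_config.0.nat_ip}"
}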

Extensibility is available through the external data source, whose program can be any executable that returns JSON — a Bash script, for example:

data "external" "python3" {

program = ["Python3"]

}
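A minimal sketch of the same idea with a Bash one-liner (the program must read the JSON query from stdin and print a flat JSON object of strings to stdout; the names bash and os here are illustrative):

data "external" "bash" {
  # Discard the incoming query, then emit a JSON object with one key
  program = ["bash", "-c", "cat > /dev/null; echo '{\"os\": \"'$(uname -s)'\"}'"]
}

output "os_name" {
  value = "${data.external.bash.result.os}"
}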

Creating a cluster of machines with Terraform

Creating a cluster with Terraform is covered in the section "Creating infrastructure in GCP"; here we will pay more attention to the cluster itself than to the tools for creating it. Through the GCE admin console I created the project node-cluster (it is shown in the header of the interface). I downloaded the key for Kubernetes via IAM & Admin –> Service accounts –> Create service account, selecting the Owner role during creation, and placed it in the project under the name kubernetes_key.JSON:

essh@kubernetes-master:~/node-cluster$ cp ~/Downloads/node-cluster-243923-bbec410e0a83.JSON ./kubernetes_key.JSON
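As an aside, the same service account and key could be created declaratively with Terraform itself; a minimal sketch, assuming already-authorized credentials to bootstrap with (the resource names here are illustrative):

resource "google_service_account" "kubernetes" {
  account_id   = "kubernetes"
  display_name = "kubernetes"
}

# Grant the Owner role, as was done manually in the console
resource "google_project_iam_member" "kubernetes_owner" {
  role   = "roles/owner"
  member = "serviceAccount:${google_service_account.kubernetes.email}"
}

# The key material is returned base64-encoded in the private_key attribute
resource "google_service_account_key" "kubernetes" {
  service_account_id = "${google_service_account.kubernetes.name}"
}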

I downloaded Terraform:

essh@kubernetes-master:~/node-cluster$ wget https://releases.hashicorp.com/terraform/0.12.2/terraform_0.12.2_linux_amd64.zip >/dev/null 2>/dev/null
essh@kubernetes-master:~/node-cluster$ unzip terraform_0.12.2_linux_amd64.zip && rm -f terraform_0.12.2_linux_amd64.zip
Archive: terraform_0.12.2_linux_amd64.zip
  inflating: terraform
essh@kubernetes-master:~/node-cluster$ ./terraform version
Terraform v0.12.2

I added the GCE provider and started the download of its "drivers" (plugins):

essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
  credentials = "${file("kubernetes_key.json")}"
  project     = "node-cluster"
  region      = "us-central1"
}
essh@kubernetes-master:~/node-cluster$ ./terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "google" (terraform-providers/google) 2.8.0...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.google: version = "~> 2.8"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
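Following the recommendation in the init output, the suggested constraint can be added to the provider block shown above, so that a later terraform init does not silently pull in a new major version:

provider "google" {
  version     = "~> 2.8"
  credentials = "${file("kubernetes_key.json")}"
  project     = "node-cluster"
  region      = "us-central1"
}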

I'll add a virtual machine:

essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
  credentials = "${file("kubernetes_key.json")}"
  project     = "node-cluster-243923"
  region      = "europe-north1"
}

resource "google_compute_instance" "cluster" {
  name         = "cluster"
  zone         = "europe-north1-a"
  machine_type = "f1-micro"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"

    access_config {}
  }
}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

An execution plan has been generated and is shown below.