AKAI TSUKI

System development or Technical something

Asking about the design styles that can be specified when creating icons

I asked ChatGPT.

The answer is below.

There are many different styles and guidelines for icon design. Here are some common types:

  1. Material Design: A design language promoted by Google that uses shadow, motion, and depth to create a more intuitive, touch-friendly interface.

  2. Flat Design: A simple style that eliminates textures, gradients, and shadows, characterized by a clean, minimalist look.

  3. Neumorphism: A technique that combines soft inner and outer shadows so that objects appear to rise out of the page.

  4. Skeuomorphism: A technique that uses realistic textures and shadows to make icons look like real-world objects, aiming for realism.

  5. Outline Design: Icons and elements are drawn as outlines, giving a simple, refined look and a light, open impression.

  6. Gradient Design: A style that uses color gradients to give icons depth and dimensionality, producing a vivid, eye-catching effect.

  7. Pixel Art: A technique that builds a design by placing small pixels one by one, which can create a retro atmosphere.

These are only some of the basic styles for icon design; depending on the project's goals and the brand's identity, you can combine them or add original elements to create unique icons.

KVM with cloud-init

CentOS 8 VM Image

I download the CentOS 8 cloud image.

[root@pm01 images]# wget https://cloud.centos.org/centos/8/x86_64/images/CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2
[root@pm01 ~]# qemu-img info images/CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2
image: images/CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.1G
cluster_size: 65536
Format specific information:
    compat: 0.10
[root@pm01 ~]#
[root@pm01 ~]# cp -vp images/CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2 /var/lib/libvirt/images/vm05.qcow2
‘images/CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2’ -> ‘/var/lib/libvirt/images/vm05.qcow2’
[root@pm01 ~]#

I resize the VM image from 10G to 20G. (On first boot, the cloud image's cloud-init/growpart should expand the root partition to fill the new size.)

[root@pm01 ~]# cd /var/lib/libvirt/images/
[root@pm01 images]# qemu-img resize vm05.qcow2 20G
Image resized.
[root@pm01 images]#

prepare for cloud-init

I generate a hashed password with the python command below.

[root@pm01 images]# python -c 'import crypt,getpass; print(crypt.crypt(getpass.getpass(), crypt.mksalt(crypt.METHOD_SHA512)))'
Password: <input password>
<hashed pass>
[root@pm01 images]#
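
If the plain python command is not available or its crypt module lacks mksalt (added in Python 3.3), an equivalent python3 one-liner is a safe fallback (a sketch, assuming python3 is installed):

python3 -c 'import crypt,getpass; print(crypt.crypt(getpass.getpass(), crypt.mksalt(crypt.METHOD_SHA512)))'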

I create a cloud-config file.
This file sets the hostname to "testnode" and creates a "centos" user in the VM.

[root@pm01 images]# vi config.yaml
[root@pm01 images]# cat config.yaml
#cloud-config
fqdn: testnode
users:
  - name: centos
    groups: wheel
    lock_passwd: false
    passwd: '<hashed pass>'
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
[root@pm01 images]#
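
Password login works on the console, but for SSH access it is common to also push a public key via ssh_authorized_keys. A minimal sketch of the same user entry with a key added (the key string is a placeholder, not a real key):

#cloud-config
users:
  - name: centos
    groups: wheel
    lock_passwd: false
    passwd: '<hashed pass>'
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2E...placeholder centos@example
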
[root@pm01 images]# yum install -y cloud-utils

 *snip*

Installed:
  cloud-utils.x86_64 0:0.27-20.el7.centos

Dependency Installed:
  cloud-utils-growpart.noarch 0:0.29-5.el7 euca2ools.noarch 0:3.4.1-1.el7 python-lxml.x86_64 0:3.2.1-4.el7 python-progressbar.noarch 0:2.3-4.el7 python-requestbuilder.noarch 0:0.7.1-1.el7
  rsync.x86_64 0:3.1.2-10.el7

Complete!
[root@pm01 images]#

I generate the config.iso file with the cloud-localds command.

[root@pm01 images]# cloud-localds config.iso config.yaml
wrote config.iso with filesystem=iso9660 and diskformat=raw
[root@pm01 images]#
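
To confirm what ended up inside the ISO, isoinfo (from the genisoimage package) can inspect it; cloud-localds stores the config as user-data in a volume labeled "cidata" (a sketch, assuming isoinfo is installed):

isoinfo -d -i config.iso | grep -i 'volume id'   # should show the "cidata" label
isoinfo -l -i config.iso                         # should list the user-data/meta-data files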

Unfortunately, I could not find "CentOS 8" in the osinfo database.

[root@pm01 images]# osinfo-query  os | grep -i centos
 centos5.0            | CentOS 5.0                                         | 5.0      | http://centos.org/centos/5.0
 centos5.1            | CentOS 5.1                                         | 5.1      | http://centos.org/centos/5.1
 centos5.10           | CentOS 5.10                                        | 5.10     | http://centos.org/centos/5.10
 centos5.11           | CentOS 5.11                                        | 5.11     | http://centos.org/centos/5.11
 centos5.2            | CentOS 5.2                                         | 5.2      | http://centos.org/centos/5.2
 centos5.3            | CentOS 5.3                                         | 5.3      | http://centos.org/centos/5.3
 centos5.4            | CentOS 5.4                                         | 5.4      | http://centos.org/centos/5.4
 centos5.5            | CentOS 5.5                                         | 5.5      | http://centos.org/centos/5.5
 centos5.6            | CentOS 5.6                                         | 5.6      | http://centos.org/centos/5.6
 centos5.7            | CentOS 5.7                                         | 5.7      | http://centos.org/centos/5.7
 centos5.8            | CentOS 5.8                                         | 5.8      | http://centos.org/centos/5.8
 centos5.9            | CentOS 5.9                                         | 5.9      | http://centos.org/centos/5.9
 centos6.0            | CentOS 6.0                                         | 6.0      | http://centos.org/centos/6.0
 centos6.1            | CentOS 6.1                                         | 6.1      | http://centos.org/centos/6.1
 centos6.10           | CentOS 6.10                                        | 6.10     | http://centos.org/centos/6.10
 centos6.2            | CentOS 6.2                                         | 6.2      | http://centos.org/centos/6.2
 centos6.3            | CentOS 6.3                                         | 6.3      | http://centos.org/centos/6.3
 centos6.4            | CentOS 6.4                                         | 6.4      | http://centos.org/centos/6.4
 centos6.5            | CentOS 6.5                                         | 6.5      | http://centos.org/centos/6.5
 centos6.6            | CentOS 6.6                                         | 6.6      | http://centos.org/centos/6.6
 centos6.7            | CentOS 6.7                                         | 6.7      | http://centos.org/centos/6.7
 centos6.8            | CentOS 6.8                                         | 6.8      | http://centos.org/centos/6.8
 centos6.9            | CentOS 6.9                                         | 6.9      | http://centos.org/centos/6.9
 centos7.0            | CentOS 7                                           | 7        | http://centos.org/centos/7.0
[root@pm01 images]#
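
The libosinfo database on this host is simply too old to know about CentOS 8. One hedged fix is to import a newer osinfo-db archive with osinfo-db-import from osinfo-db-tools (the archive version in the URL is an assumption; pick a current one):

wget https://releases.pagure.io/libosinfo/osinfo-db-20200515.tar.xz   # assumed version
osinfo-db-import --local osinfo-db-20200515.tar.xz                    # import into /etc/osinfo-db
osinfo-query os | grep -i centos8                                     # centos8 should now be listed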

Execute the virt-install command to create a CentOS 8 VM with cloud-init on KVM.

Using all of the information above, I prepare a script file to create the VM.

[root@pm01 kvm_work]# vi create_vm05.sh
[root@pm01 kvm_work]#
[root@pm01 kvm_work]# cat create_vm05.sh
#!/bin/bash

name=vm05
cpu=1
memory=4096
# disk=30

# cd /var/lib/libvirt/images/
# cp -vp CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2 ${name}.qcow2
# qemu-img resize ${name}.qcow2 ${disk}G

virt-install \
 --name=${name} \
 --disk /var/lib/libvirt/images/${name}.qcow2,device=disk \
 --disk /var/lib/libvirt/images/config.iso,device=cdrom \
 --network network=default,model=virtio \
 --vcpus=${cpu} \
 --ram=${memory} \
 --accelerate \
 --hvm \
 --virt-type kvm \
 --graphics none \
 --os-type Linux \
 --arch=x86_64 \
 --import

[root@pm01 kvm_work]#

Finally, the CentOS 8 VM was created on KVM and configured by cloud-init with the cloud config,
and I can log in to this VM.

[root@pm01 kvm_work]# bash create_vm05.sh
 *snip*

CentOS Linux 8 (Core)
Kernel 4.18.0-193.6.3.el8_2.x86_64 on an x86_64

Activate the web console with: systemctl enable --now cockpit.socket

testnode login: centos
Password:
[centos@testnode ~]$ id
uid=1000(centos) gid=1000(centos) groups=1000(centos),10(wheel) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[centos@testnode ~]$

build and install tmux 3.1b on CentOS 7.

https://github.com/tmux/tmux

Install the packages needed at run time ("Run Packages") and at build time ("Build Packages").

[root@vm02 ~]# yum install libevent ncurses libevent-devel ncurses-devel gcc make bison pkg-config

*snip*

Installed:
  bison.x86_64 0:3.0.4-2.el7             gcc.x86_64 0:4.8.5-39.el7                      libevent.x86_64 0:2.0.21-4.el7
  libevent-devel.x86_64 0:2.0.21-4.el7   ncurses-devel.x86_64 0:5.9-14.20130511.el7_4

Dependency Installed:
  cpp.x86_64 0:4.8.5-39.el7                   glibc-devel.x86_64 0:2.17-307.el7.1 glibc-headers.x86_64 0:2.17-307.el7.1
  kernel-headers.x86_64 0:3.10.0-1127.8.2.el7 libmpc.x86_64 0:1.0.1-3.el7         m4.x86_64 0:1.4.16-10.el7
  mpfr.x86_64 0:3.1.1-4.el7

Dependency Updated:
  glibc.x86_64 0:2.17-307.el7.1                           glibc-common.x86_64 0:2.17-307.el7.1

Complete!
[root@vm02 ~]#

make and make install

[root@vm02 ~]# wget https://github.com/tmux/tmux/releases/download/3.1b/tmux-3.1b.tar.gz
[root@vm02 ~]# ls -l tmux-3.1b.tar.gz
-rw-r--r-- 1 root root 561152 May  4 17:10 tmux-3.1b.tar.gz
[root@vm02 ~]#

[root@vm02 ~]# tar -zxf tmux-3.1b.tar.gz
[root@vm02 ~]#

[root@vm02 ~]# cd tmux-3.1b
-rwxr-xr-x 1 1000 1000 239664 May  4 17:08 configure
[root@vm02 tmux-3.1b]# ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc understands -c and -o together... yes
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking for gcc option to accept ISO C99... -std=gnu99
checking how to run the C preprocessor... gcc -std=gnu99 -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for bison... bison -y
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking minix/config.h usability... no
checking minix/config.h presence... no
checking for minix/config.h... no
checking whether it is safe to define __EXTENSIONS__... yes
checking bitstring.h usability... no
checking bitstring.h presence... no
checking for bitstring.h... no
checking dirent.h usability... yes
checking dirent.h presence... yes
checking for dirent.h... yes
checking fcntl.h usability... yes
checking fcntl.h presence... yes
checking for fcntl.h... yes
checking for inttypes.h... (cached) yes
checking libutil.h usability... no
checking libutil.h presence... no
checking for libutil.h... no
checking ndir.h usability... no
checking ndir.h presence... no
checking for ndir.h... no
checking paths.h usability... yes
checking paths.h presence... yes
checking for paths.h... yes
checking pty.h usability... yes
checking pty.h presence... yes
checking for pty.h... yes
checking for stdint.h... (cached) yes
checking sys/dir.h usability... yes
checking sys/dir.h presence... yes
checking for sys/dir.h... yes
checking sys/ndir.h usability... no
checking sys/ndir.h presence... no
checking for sys/ndir.h... no
checking sys/tree.h usability... no
checking sys/tree.h presence... no
checking for sys/tree.h... no
checking util.h usability... no
checking util.h presence... no
checking for util.h... no
checking for library containing flock... none required
checking for dirfd... yes
checking for flock... yes
checking for prctl... yes
checking for sysconf... yes
checking for asprintf... yes
checking for cfmakeraw... yes
checking for closefrom... no
checking for explicit_bzero... no
checking for fgetln... no
checking for freezero... no
checking for getdtablecount... no
checking for getprogname... no
checking for memmem... yes
checking for recallocarray... no
checking for reallocarray... no
checking for setenv... yes
checking for setproctitle... no
checking for strcasestr... yes
checking for strlcat... no
checking for strlcpy... no
checking for strndup... yes
checking for strsep... yes
checking for strtonum... no
checking for working strnlen... yes
checking for library containing clock_gettime... none required
checking for LIBEVENT... yes
checking event.h usability... yes
checking event.h presence... yes
checking for event.h... yes
checking for LIBTINFO... yes
checking for b64_ntop... no
checking for b64_ntop with -lresolv... yes
checking for library containing inet_ntoa... none required
checking for library containing socket... none required
checking for socket in -lxnet... no
checking for CMSG_DATA... yes
checking for err... yes
checking for errx... yes
checking for warn... yes
checking for warnx... yes
checking err.h usability... yes
checking err.h presence... yes
checking for err.h... yes
checking for library containing imsg_init... no
checking for daemon... yes
checking whether daemon is declared... yes
checking for stravis... no
checking for library containing fdforkpty... no
checking for library containing forkpty... -lutil
checking for library containing kinfo_getfile... no
checking whether TAILQ_CONCAT is declared... yes
checking whether TAILQ_PREV is declared... yes
checking whether TAILQ_REPLACE is declared... no
checking for __progname... yes
checking for program_invocation_short_name... yes
checking whether PR_SET_NAME is declared... yes
checking whether F_CLOSEM is declared... no
checking for /proc/$$... yes
checking platform... linux
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating Makefile
config.status: executing depfiles commands
[root@vm02 tmux-3.1b]#

execute "make" and "make install" command

[root@vm02 tmux-3.1b]# make

*snip*

[root@vm02 tmux-3.1b]# make install
make[1]: Entering directory `/root/tmux-3.1b'
 /usr/bin/mkdir -p '/usr/local/bin'
  /usr/bin/install -c tmux '/usr/local/bin'
make  install-exec-hook
make[2]: Entering directory `/root/tmux-3.1b'
if test xmdoc = xmdoc; then \
        sed -e "s|@SYSCONFDIR@|/etc|g" ./tmux.1 \
                >./tmux.1.mdoc; \
else \
        sed -e "s|@SYSCONFDIR@|/etc|g" ./tmux.1| \
                gawk -f ./mdoc2man.awk >./tmux.1.man; \
fi
/usr/bin/mkdir -p /usr/local/share/man/man1
/usr/bin/install -c -m 644 ./tmux.1.mdoc \
        /usr/local/share/man/man1/tmux.1
make[2]: Leaving directory `/root/tmux-3.1b'
make[1]: Nothing to be done for `install-data-am'.
make[1]: Leaving directory `/root/tmux-3.1b'
[root@vm02 tmux-3.1b]#

check version

[root@vm02 tmux-3.1b]# tmux -V
tmux 3.1b
[root@vm02 tmux-3.1b]#
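
If an older tmux from the distro repositories is also present, it's worth confirming which binary the shell resolves (a quick check; /usr/local/bin normally precedes /usr/bin in PATH):

command -v tmux   # expect /usr/local/bin/tmux
hash -r           # reset bash's cached command paths if an old location still wins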

Try to use kind to create a Kubernetes cluster

Let's try to use kind (Kubernetes in Docker).

docker

I have already installed Docker.

[root@vm01 ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b
 Built:             Wed Mar 11 01:27:04 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:25:42 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
[root@vm01 ~]#

install kubectl

Please see https://kubernetes.io/docs/tasks/tools/install-kubectl/

[root@vm01 ~]# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 41.9M  100 41.9M    0     0  57.4M      0 --:--:-- --:--:-- --:--:-- 57.5M
[root@vm01 ~]# ls -l kubectl
-rw-r--r-- 1 root root 44032000 May  5 10:33 kubectl
[root@vm01 ~]#
[root@vm01 ~]# chmod +x ./kubectl
[root@vm01 ~]# mv ./kubectl /usr/local/bin/kubectl
[root@vm01 ~]# ls -l /usr/local/bin/kubectl
-rwxr-xr-x 1 root root 44032000 May  5 10:33 /usr/local/bin/kubectl
[root@vm01 ~]#
[root@vm01 ~]# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
[root@vm01 ~]#

install kind

Then I install the kind command.

[root@vm01 ~]# curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.8.1/kind-$(uname)-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    97  100    97    0     0     84      0  0:00:01  0:00:01 --:--:--    85
100   629  100   629    0     0    283      0  0:00:02  0:00:02 --:--:--  614k
100 9900k  100 9900k    0     0  1134k      0  0:00:08  0:00:08 --:--:-- 2081k
[root@vm01 ~]#
[root@vm01 ~]# ls -l kind
-rw-r--r-- 1 root root 10133844 May  5 11:14 kind
[root@vm01 ~]# chmod +x ./kind
[root@vm01 ~]# mv ./kind /usr/local/bin/
[root@vm01 ~]# ls -l /usr/local/bin/
total 52900
-rwxr-xr-x 1 root root 10133844 May  5 11:14 kind
-rwxr-xr-x 1 root root 44032000 May  5 10:33 kubectl
[root@vm01 ~]#

[root@vm01 ~]# kind version
kind v0.8.1 go1.14.2 linux/amd64
[root@vm01 ~]#

create cluster by kind

I execute "kind create cluster" command.

[root@vm01 ~]# kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.18.2)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day!
[root@vm01 ~]#
[root@vm01 ~]# kind version
kind v0.8.0 go1.14.2 linux/amd64
[root@vm01 ~]#
[root@vm01 ~]# kubectl cluster-info --context kind-kind
Kubernetes master is running at https://127.0.0.1:40867
KubeDNS is running at https://127.0.0.1:40867/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@vm01 ~]#

multi-node cluster

[root@vm01 ~]# vi kind-cluster.yaml
[root@vm01 ~]# cat kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
[root@vm01 ~]#
[root@vm01 ~]# kind create cluster --config kind-cluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.18.2)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
 ✓ Joining worker nodes
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community
[root@vm01 ~]#
[root@vm01 ~]# kind get clusters
kind
[root@vm01 ~]#

[root@vm01 ~]# kubectl cluster-info --context kind-kind
Kubernetes master is running at https://127.0.0.1:37921
KubeDNS is running at https://127.0.0.1:37921/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@vm01 ~]#
[root@vm01 ~]# kubectl config current-context
kind-kind
[root@vm01 ~]#
[root@vm01 ~]# kubectl get node
NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   2m34s   v1.18.2
kind-worker          Ready    <none>   119s    v1.18.2
kind-worker2         Ready    <none>   119s    v1.18.2
[root@vm01 ~]#

[root@vm01 ~]# kubectl get pod
No resources found in default namespace.
[root@vm01 ~]#
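
As a quick smoke test, a pod can be started directly; with kubectl 1.18, "kubectl run" creates a bare pod (a sketch):

kubectl run nginx --image=nginx   # create a test pod
kubectl get pod -o wide           # it should be scheduled onto one of the kind workers
kubectl delete pod nginx          # clean up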

[root@vm01 ~]# kubectl get namespace
NAME                 STATUS   AGE
default              Active   2m47s
kube-node-lease      Active   2m48s
kube-public          Active   2m48s
kube-system          Active   2m48s
local-path-storage   Active   2m44s
[root@vm01 ~]#

From the Docker side, I check the processes.

[root@vm01 ~]# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                       NAMES
43ca4ecef777        kindest/node:v1.18.2   "/usr/local/bin/entr…"   30 minutes ago      Up 30 minutes                                   kind-worker
a883230bf683        kindest/node:v1.18.2   "/usr/local/bin/entr…"   30 minutes ago      Up 30 minutes       127.0.0.1:37921->6443/tcp   kind-control-plane
4b819e4548b2        kindest/node:v1.18.2   "/usr/local/bin/entr…"   30 minutes ago      Up 30 minutes                                   kind-worker2
[root@vm01 ~]#

[root@vm01 ~]# docker exec -it kind-worker bash
root@kind-worker:/# ps -aef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 01:21 ?        00:00:00 /sbin/init
root       113     1  0 01:21 ?        00:00:00 /lib/systemd/systemd-journald
root       122     1  0 01:21 ?        00:00:07 /usr/local/bin/containerd
root       359     1  1 01:22 ?        00:00:28 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runti
root       405     1  0 01:22 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7b08e0c8ee21ecc616d07790193a1bcde61430db81ae90d0b9eccd25b5a9c8f4 -address /run/containerd/containerd.sock
root       431     1  0 01:22 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id f4794a89cee3d1e946e1dad73ca9b902368c6cfe328d2fa06272eb9a6d6fc0b7 -address /run/containerd/containerd.sock
root       437   405  0 01:22 ?        00:00:00 /pause
root       464   431  0 01:22 ?        00:00:00 /pause
root       546   405  0 01:22 ?        00:00:01 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-worker
root       623   431  0 01:22 ?        00:00:00 /bin/kindnetd
root      1669     0  0 01:53 pts/1    00:00:00 bash
root      1681  1669  0 01:53 pts/1    00:00:00 ps -aef
root@kind-worker:/#

[root@vm01 ~]# docker exec -it kind-control-plane bash
root@kind-control-plane:/# ps -aef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 01:21 ?        00:00:00 /sbin/init
root       110     1  0 01:21 ?        00:00:00 /lib/systemd/systemd-journald
root       122     1  0 01:21 ?        00:00:11 /usr/local/bin/containerd
root       280     1  0 01:21 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 66d1f69a65a1e056cd943aaea92fe761c57dc8b41dd6ae12676bc47b0dfdef23 -address /run/containerd/containerd.sock
root       297     1  0 01:21 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id fac6ebe22ca25ff939288745306ddc0a10c20dbd9875d4165eac9316be183c40 -address /run/containerd/containerd.sock
root       317     1  0 01:21 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id b4ceca7e4c7cab8f363a25c26ba6c2ba41570b778e7c07f7728d5d80cb1e222d -address /run/containerd/containerd.sock
root       331   280  0 01:21 ?        00:00:00 /pause
root       369   297  0 01:21 ?        00:00:00 /pause
root       379     1  0 01:21 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id fdfb2d6c2ff48980c0387c578477bef6a1a6a21e3f6d5c39c598fce959b30a8c -address /run/containerd/containerd.sock
root       382   317  0 01:21 ?        00:00:00 /pause
root       420   379  0 01:21 ?        00:00:00 /pause
root       463   317  1 01:21 ?        00:00:29 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=12
root       500   280  5 01:21 ?        00:01:41 kube-apiserver --advertise-address=172.18.0.4 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootst
root       522   379  0 01:21 ?        00:00:09 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --
root       559   297  2 01:21 ?        00:00:49 etcd --advertise-client-urls=https://172.18.0.4:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://172.18.0.4:2380
root       601     1  2 01:21 ?        00:00:47 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runti
root       687     1  0 01:22 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 97a4af0d01714ef45d223eaeea315316a01bff53916dc45627b18441672098e3 -address /run/containerd/containerd.sock
root       706   687  0 01:22 ?        00:00:00 /pause
root       728     1  0 01:22 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 571b39bdfdddb0fa8ac8f605b74765292455bddcce45e7ffe9f744e97218aedd -address /run/containerd/containerd.sock
root       749   728  0 01:22 ?        00:00:00 /pause
root       786   728  0 01:22 ?        00:00:00 /bin/kindnetd
root       807   687  0 01:22 ?        00:00:01 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-control-plane
root      3463     0  0 01:54 pts/1    00:00:00 bash
root      3483  3463  0 01:54 pts/1    00:00:00 ps -aef
root@kind-control-plane:/#
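
When the cluster is no longer needed, kind can remove it together with its node containers (a quick sketch):

kind delete cluster   # deletes the "kind" cluster and its docker "nodes"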

study pacemaker (resource setting, failover)

cluster resource for pacemaker

There is a cluster built with Pacemaker/Corosync. The cluster consists of 3 nodes and doesn't have any cluster resources yet.

Let's try to set up dummy resources.

Before loading the crm file, the initial status is as follows.

[root@vm01 ~]# crm_mon -1fA
Stack: corosync
Current DC: vm03.localdomain (version 1.1.21-1.el7-f14e36f) - partition with quorum
Last updated: Wed Apr 29 17:10:15 2020
Last change: Wed Apr 29 16:58:02 2020 by hacluster via crmd on vm03.localdomain

3 nodes configured
0 resources configured

Online: [ vm01.localdomain vm02.localdomain vm03.localdomain ]

No active resources


Node Attributes:
* Node vm01.localdomain:
* Node vm02.localdomain:
* Node vm03.localdomain:

Migration Summary:
* Node vm03.localdomain:
* Node vm02.localdomain:
* Node vm01.localdomain:
[root@vm01 ~]#

This is the crm configuration file.

[root@vm01 ~]# cat dummy.crm
### Cluster Option ###
property stonith-enabled="false"

### Resource Defaults ###
rsc_defaults resource-stickiness="INFINITY" \
    migration-threshold="1"

### Group Configuration ###
group grp \
    resource1 \
    resource2

### Clone Configuration ###
clone clnResource \
    resource3

### Primitive Configuration ###
primitive resource1 ocf:heartbeat:Dummy \
    op start interval="0s" timeout="300s" on-fail="restart" \
    op monitor interval="10s" timeout="60s" on-fail="restart" \
    op stop interval="0s" timeout="300s" on-fail="block"

primitive resource2 ocf:heartbeat:Dummy \
    op start interval="0s" timeout="300s" on-fail="restart" \
    op monitor interval="10s" timeout="60s" on-fail="restart" \
    op stop interval="0s" timeout="300s" on-fail="block"

primitive resource3 ocf:heartbeat:Dummy \
    op start interval="0s" timeout="300s" on-fail="restart" \
    op monitor interval="10s" timeout="60s" on-fail="restart" \
    op stop interval="0s" timeout="300s" on-fail="block"

### Resource Location ###
location rsc_location-1 grp \
    rule 300: #uname eq vm01.localdomain \
    rule 200: #uname eq vm02.localdomain \
    rule 100: #uname eq vm03.localdomain

### Resource Colocation ###
colocation rsc_colocation-1 INFINITY: grp clnResource

### Resource Order ###
order rsc_order-1 0: clnResource grp symmetrical=false

[root@vm01 ~]#
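
To read this configuration: the location rule scores (300 > 200 > 100) make grp prefer vm01, then vm02, then vm03, and migration-threshold=1 in rsc_defaults means a single monitor failure pushes a resource off its node. The scores the policy engine actually computes can be inspected with crm_simulate (a sketch):

crm_simulate -L -s   # -L: use the live cluster state, -s: show allocation scores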

Load the crm configuration file

[root@vm01 ~]# crm configure load update dummy.crm
[root@vm01 ~]#

After loading the crm file, let's check.

[root@vm01 ~]# crm_mon -1fA
Stack: corosync
Current DC: vm03.localdomain (version 1.1.21-1.el7-f14e36f) - partition with quorum
Last updated: Wed Apr 29 17:10:30 2020
Last change: Wed Apr 29 17:10:27 2020 by root via cibadmin on vm01.localdomain

3 nodes configured
5 resources configured

Online: [ vm01.localdomain vm02.localdomain vm03.localdomain ]

Active resources:

 Resource Group: grp
     resource1  (ocf::heartbeat:Dummy): Started vm01.localdomain
     resource2  (ocf::heartbeat:Dummy): Started vm01.localdomain
 Clone Set: clnResource [resource3]
     Started: [ vm01.localdomain vm02.localdomain vm03.localdomain ]

Node Attributes:
* Node vm01.localdomain:
* Node vm02.localdomain:
* Node vm03.localdomain:

Migration Summary:
* Node vm03.localdomain:
* Node vm02.localdomain:
* Node vm01.localdomain:
[root@vm01 ~]#

and confirm the configuration with the "crm configure show" command.

[root@vm01 ~]# crm configure show
node 1: vm01.localdomain
node 2: vm02.localdomain
node 3: vm03.localdomain
### Primitive Configuration ###
primitive resource1 Dummy \
        op start interval=0s timeout=300s on-fail=restart \
        op monitor interval=10s timeout=60s on-fail=restart \
        op stop interval=0s timeout=300s on-fail=block
primitive resource2 Dummy \
        op start interval=0s timeout=300s on-fail=restart \
        op monitor interval=10s timeout=60s on-fail=restart \
        op stop interval=0s timeout=300s on-fail=block
primitive resource3 Dummy \
        op start interval=0s timeout=300s on-fail=restart \
        op monitor interval=10s timeout=60s on-fail=restart \
        op stop interval=0s timeout=300s on-fail=block
### Group Configuration ###
group grp resource1 resource2
### Clone Configuration ###
clone clnResource resource3
### Resource Location ###
location rsc_location-1 grp \
        rule 300: #uname eq vm01.localdomain \
        rule 200: #uname eq vm02.localdomain \
        rule 100: #uname eq vm03.localdomain
### Resource Colocation ###
colocation rsc_colocation-1 inf: grp clnResource
### Resource Order ###
order rsc_order-1 0: clnResource grp symmetrical=false
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.21-1.el7-f14e36f \
        cluster-infrastructure=corosync \
        stonith-enabled=false
### Resource Defaults ###
rsc_defaults rsc-options: \
        resource-stickiness=INFINITY \
        migration-threshold=1
[root@vm01 ~]#

try failover

Delete the state file to cause a failure of the cluster resource, as follows. (The Dummy agent's monitor reports "not running" when its state file is missing.)

[root@vm01 ~]# rm -f /var/run/resource-agents/Dummy-resource1.state
[root@vm01 ~]#

We can find the failed action.

[root@vm01 ~]# crm_mon -f1A
Stack: corosync
Current DC: vm03.localdomain (version 1.1.21-1.el7-f14e36f) - partition with quorum
Last updated: Wed Apr 29 17:27:35 2020
Last change: Wed Apr 29 17:10:27 2020 by root via cibadmin on vm01.localdomain

3 nodes configured
5 resources configured

Online: [ vm01.localdomain vm02.localdomain vm03.localdomain ]

Active resources:

 Resource Group: grp
     resource1  (ocf::heartbeat:Dummy): Started vm02.localdomain
     resource2  (ocf::heartbeat:Dummy): Started vm02.localdomain
 Clone Set: clnResource [resource3]
     Started: [ vm01.localdomain vm02.localdomain vm03.localdomain ]

Node Attributes:
* Node vm01.localdomain:
* Node vm02.localdomain:
* Node vm03.localdomain:

Migration Summary:
* Node vm03.localdomain:
* Node vm02.localdomain:
* Node vm01.localdomain:
   resource1: migration-threshold=1 fail-count=1 last-failure='Wed Apr 29 17:27:19 2020'

Failed Resource Actions:
* resource1_monitor_10000 on vm01.localdomain 'not running' (7): call=18, status=complete, exitreason='No process state file found',
    last-rc-change='Wed Apr 29 17:27:19 2020', queued=0ms, exec=0ms
[root@vm01 ~]#

The key point is the "Failed Resource Actions" section.

Migration Summary:
* Node vm03.localdomain:
* Node vm02.localdomain:
* Node vm01.localdomain:
   resource1: migration-threshold=1 fail-count=1 last-failure='Wed Apr 29 17:27:19 2020'

Failed Resource Actions:
* resource1_monitor_10000 on vm01.localdomain 'not running' (7): call=18, status=complete, exitreason='No process state file found',
    last-rc-change='Wed Apr 29 17:27:19 2020', queued=0ms, exec=0ms
[root@vm01 ~]#

Check the fail-count for resource1 -> but the value is 0.

[root@vm01 ~]# crm resource failcount resource1 show vm01.localdomain
scope=status  name=fail-count-resource1 value=0
[root@vm01 ~]#
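
This 0 is probably because recent Pacemaker stores fail counts per operation (e.g. fail-count-resource1#monitor_10000), while this query reads the legacy attribute name. Pacemaker's own tool can be asked directly (a sketch):

crm_failcount -G -r resource1 -N vm01.localdomain   # -G queries the current fail count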

Try to clean up -> OK.

[root@vm01 ~]# crm resource cleanup resource1 vm01.localdomain
Cleaned up resource1 on vm01.localdomain
.Cleaned up resource2 on vm01.localdomain
Waiting for 1 reply from the CRMd. OK
[root@vm01 ~]#

[root@vm01 ~]# crm_mon -f1A
Stack: corosync
Current DC: vm03.localdomain (version 1.1.21-1.el7-f14e36f) - partition with quorum
Last updated: Wed Apr 29 17:43:27 2020
Last change: Wed Apr 29 17:43:17 2020 by hacluster via crmd on vm01.localdomain

3 nodes configured
5 resources configured

Online: [ vm01.localdomain vm02.localdomain vm03.localdomain ]

Active resources:

 Resource Group: grp
     resource1  (ocf::heartbeat:Dummy): Started vm01.localdomain
     resource2  (ocf::heartbeat:Dummy): Started vm01.localdomain
 Clone Set: clnResource [resource3]
     Started: [ vm01.localdomain vm02.localdomain vm03.localdomain ]

Node Attributes:
* Node vm01.localdomain:
* Node vm02.localdomain:
* Node vm03.localdomain:

Migration Summary:
* Node vm03.localdomain:
* Node vm02.localdomain:
* Node vm01.localdomain:
[root@vm01 ~]#

That's all.

Create a new VM on CentOS 7.x (KVM)

create hashed password

1) Execute the command below and enter the root password (twice, for confirmation).

[root@pm01 ~]# python -c 'import crypt,getpass; pw=getpass.getpass(); print(crypt.crypt(pw) if (pw==getpass.getpass("Re-Password: ")) else exit())'
Password: <<input password>>
Re-Password: <<re-input password>>
<<can see Hash password>>
[root@pm01 ~]#

kickstart config file

1) Create the kickstart config file.
2) Replace "< Hash root's Password >" with the actual hashed password (a sed sketch follows the file below).

[root@pm01 ~]# cat vm03.ks
#version=DEVEL

# Use CDROM installation media
cdrom

# Use text install
text
cmdline
skipx

# # Run the Setup Agent on first boot
# firstboot --enable
ignoredisk --only-use=vda

# Keyboard layouts
keyboard --vckeymap=jp --xlayouts='jp','us'
# System language
lang en_US.UTF-8

# Network information
network  --bootproto=dhcp --device=eth0 --onboot=on --noipv6 --activate
# network  --bootproto=static --device=eth0 --ip=192.168.122.102 --netmask=255.255.255.0 --gateway=192.168.122.1 --nameserver=192.168.122.1 --noipv6 --activate
# network  --bootproto=dhcp --device=eth0 --onboot=on --activate
network  --hostname=vm03.localdomain

# System authorization information
auth --enableshadow --passalgo=sha512
# Root password
rootpw --iscrypted < Hash root's Password >

# # System services
# services --disabled="chronyd"

# System timezone
timezone Asia/Tokyo --isUtc --nontp


# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=vda
autopart --type=lvm
# Partition clearing information
clearpart --all --initlabel --drives=vda

selinux --disabled

%packages
@^minimal
@core
kexec-tools

%end

%addon com_redhat_kdump --enable --reserve-mb='auto'

%end

reboot --eject
[root@pm01 ~]#
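
For step 2 above, the placeholder can be substituted non-interactively; a sketch assuming the hash generated earlier is pasted into the HASH variable:

HASH='$6$examplesalt$examplehashvalue'              # placeholder; paste the real hash here
sed -i "s|< Hash root's Password >|${HASH}|" vm03.ks
grep '^rootpw' vm03.ks                              # confirm the substitution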

execute script for virt-install

1) Create a script file that uses virt-install.

[root@pm01 ~]# cat create_vm03.sh
#!/bin/bash

name=vm03
cpu=1
memory=4096
disk=30

qemu-img create -f qcow2 /var/lib/libvirt/images/${name}.qcow2 ${disk}G

# virt-install --connect=qemu:///system \
virt-install \
 --name=${name} \
 --disk /var/lib/libvirt/images/${name}.qcow2,format=qcow2,bus=virtio \
 --network network=default,model=virtio \
 --initrd-inject=./${name}.ks \
 --extra-args="ks=file:/${name}.ks biosdevname=0 net.ifnames=0 console=tty0 console=ttyS0,115200n8" \
 --vcpus=${cpu} \
 --ram=${memory} \
 --accelerate \
 --hvm \
 --virt-type kvm \
 --location='/var/lib/libvirt/images/CentOS-7-x86_64-Minimal-1908.iso' \
 --nographics \
 --os-type=linux \
 --os-variant=centos7.0 \
 --arch=x86_64

[root@pm01 ~]#

2) Start creating the new VM with the command below.

[root@pm01 ~]# sh create_vm03.sh
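
Because the script passes --nographics, the installer output goes to the serial console. If the session gets detached, it can be re-attached with virsh (a quick sketch):

virsh console vm03   # re-attach to the serial console (exit with Ctrl+])
virsh list --all     # check the VM state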

Try to use Karate Netty as a REST API mock

check help

The command is java -jar .\karate-0.9.5.jar -h

PS C:\desk\karate> java -jar .\karate-0.9.5.jar -h
00:27:14.852 [main] INFO  com.intuit.karate.Main - Karate version: 0.9.5
Usage: <main class> [-Chsw] [-d[=<debugPort>]] [-c=<cert>] [-e=<env>]
                    [-i=<importFile>] [-j=<jobServerUrl>] [-k=<key>]
                    [-m=<mock>] [-n=<name>] [-o=<output>] [-p=<port>]
                    [-T=<threads>] [-t=<tags>]... [<tests>...]
      [<tests>...]          one or more tests (features) or search-paths to run
  -c, --cert=<cert>         ssl certificate (default: cert.pem)
  -C, --clean               clean output directory
  -d, --debug[=<debugPort>] debug mode (optional port else dynamically chosen)
  -e, --env=<env>           value of 'karate.env'
  -h, --help                display this help message
  -i, --import=<importFile> import and convert a file
  -j, --jobserver=<jobServerUrl>
                            job server url
  -k, --key=<key>           ssl private key (default: key.pem)
  -m, --mock=<mock>         mock server file
  -n, --name=<name>         scenario name
  -o, --output=<output>     directory where logs and reports are output
                              (default 'target')
  -p, --port=<port>         mock server port (required for --mock)
  -s, --ssl                 use ssl / https, will use 'cert.pem' and 'key.pem'
                              if they exist in the working directory, or
                              generate them
  -t, --tags=<tags>         cucumber tags - e.g. '@smoke,~@ignore'
  -T, --threads=<threads>   number of threads when running tests
  -w, --watch               watch (and hot-reload) mock server file for changes
PS C:\desk\karate>

try the API Test-Doubles (Karate Netty) demo

1) Check https://intuit.github.io/karate/karate-netty/
2) Get demo-mock.feature from https://github.com/intuit/karate/blob/master/karate-demo/src/test/java/mock/proxy/demo-mock.feature
3) Start the test double as below.
The command is java -jar .\karate-0.9.5.jar -m .\demo-mock.feature -p 18080

PS C:\desk\karate> java -jar .\karate-0.9.5.jar -m .\demo-mock.feature -p 18080
01:04:12.892 [main] INFO  com.intuit.karate.Main - Karate version: 0.9.5
01:04:12.992 [main] ERROR com.intuit.karate.FileUtils - file system creation failed for URI: jar:file:/./karate-0.9.5.jar!/ - C:\.\karate-0.9.5.jar
Warning: Nashorn engine is planned to be removed from a future JDK release
01:04:13.493 [main] INFO  com.intuit.karate - backend initialized
01:04:13.881 [main] INFO  c.intuit.karate.netty.FeatureServer - server started - http://127.0.0.1:18080

4) Send a request to "/greeting".

(Screenshot: sending the /greeting request from Postman)
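
The same request can be sent without Postman; a curl sketch (the response body depends on the demo feature file):

curl -s "http://127.0.0.1:18080/greeting"             # matches pathMatches('/greeting')
curl -s "http://127.0.0.1:18080/greeting?name=Karate" # matches the scenario with paramValue('name')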

5) We can find the log below in the console.

01:25:26.555 [nioEventLoopGroup-3-2] DEBUG com.intuit.karate - handling method: GET, uri: /greeting
01:25:26.600 [nioEventLoopGroup-3-2] DEBUG com.intuit.karate - scenario skipped: pathMatches('/greeting') && paramValue('name') != null
01:25:26.608 [nioEventLoopGroup-3-2] DEBUG com.intuit.karate - scenario matched: pathMatches('/greeting')