Preparing the infrastructure for a Software Factory is what lets the whole system run efficiently and reliably. DevOps infrastructure covers the full toolchain, from source control and CI/CD to artifact storage, code quality, and observability, and all of these tools have to run on some kind of compute, such as a separate server or a Kubernetes cluster.

So the first decision when designing a Software Factory is: "Where do we put the DevOps tools?" The location you choose directly affects resource isolation, security, and operational complexity. This article compares two approaches:
| Aspect | Dedicated DevOps (separate machine) | In-Cluster DevOps |
|---|---|---|
| Management | Docker Compose | Kubernetes |
| Resources | Clearly isolated | Shared with Production |
| Security | Smaller blast radius | Requires strict policies |
| Complexity | Low | High |
Advantage of the dedicated approach: if the main Production Kubernetes cluster goes down, the DevOps systems such as CI/CD keep running, so the team can still build and ship a fix. This is what reducing the blast radius means: a problem in Production does not hit DevOps, and a problem in DevOps does not hit Production.
The corresponding risk of the in-cluster approach is resource starvation. For example, if a CI/CD build job eats 80% of the CPU, Production pods can slow down immediately, and without correctly configured limits this can snowball into a domino effect.
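If you do go in-cluster, the standard guard against this is explicit resource requests and limits on every CI workload. A minimal sketch, assuming CI runs in its own namespace (all names here are illustrative):

```yaml
# Illustrative only: cap a CI build pod so it cannot starve Production.
apiVersion: v1
kind: Pod
metadata:
  name: ci-build          # hypothetical pod name
  namespace: ci           # keep CI workloads in their own namespace
spec:
  containers:
    - name: build
      image: docker.io/library/maven:3-eclipse-temurin-17
      resources:
        requests:         # what the scheduler reserves for this pod
          cpu: "1"
          memory: 2Gi
        limits:           # hard ceiling enforced at runtime
          cpu: "2"
          memory: 4Gi
```

With limits in place, a runaway build is throttled or OOM-killed instead of dragging Production pods down with it.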
Using Docker Compose on a separate server has a clear advantage: there is no Kubernetes machinery to operate. It fits teams that want a DevOps stack that is stable and easy to maintain.
The in-cluster approach leverages Kubernetes capabilities, but it demands real Kubernetes knowledge, so it suits teams that already have that experience, typically larger organizations. Kubernetes can scale build agents on demand, and the long-term advantage is cost: no build server has to stay powered on around the clock.
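That saving comes from running build agents only while a build is active. A sketch of the idea, using a Kubernetes Job as a throwaway build agent (the job name is illustrative; `jenkins/inbound-agent` is the stock Jenkins agent image):

```yaml
# Illustrative sketch: an ephemeral build agent as a Kubernetes Job.
# The pod exists only for the duration of the build and is then
# garbage-collected, so no build server runs 24/7.
apiVersion: batch/v1
kind: Job
metadata:
  name: build-1234                # hypothetical job name, one per build
spec:
  ttlSecondsAfterFinished: 300    # clean the pod up shortly after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: agent
          image: jenkins/inbound-agent
          resources:
            limits:               # same starvation guard as above
              cpu: "2"
              memory: 4Gi
```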
For small to medium teams, the most suitable approach is the Dedicated DevOps Stack.

Cloud VPS sizing:
- vCPU: 16
- RAM: 48 GB
| Service | RAM |
|---|---|
| GitLab | 8–12 GB |
| SonarQube + Nexus | 8 GB |
| SigNoz | 4–6 GB |
| Jenkins | 4–8 GB |
Advantages:
- You can set a Memory Limit and a CPU Limit per container, preventing the containers from starving one another.
- The whole toolchain lives in a single VPS: GitLab, Jenkins, Nexus, SonarQube, and SigNoz.
DNS setup: create an A record for each subdomain pointing at the VPS's public IP:
| Domain | Service |
|---|---|
| git.example.com | GitLab |
| ci.example.com | Jenkins |
| sonar.example.com | SonarQube |
| nexus.example.com | Nexus |
| monitor.example.com | SigNoz |
| collector.example.com | OTLP Collector |
And, if used, for the Kubernetes-facing tools:

| Domain | Service |
|---|---|
| argocd.example.com | ArgoCD |
| rancher.example.com | Rancher |
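Each of those domains is then routed to its container by the reverse proxy. The stack below uses Traefik labels for this; the generic pattern looks roughly like the following (service name, host, and port are placeholders):

```yaml
# Illustrative pattern: expose one container through Traefik via labels.
services:
  myservice:                  # placeholder service name
    labels:
      - traefik.enable=true
      - traefik.http.routers.myservice.rule=Host(`myservice.example.com`)
      - traefik.http.routers.myservice.entrypoints=https
      - traefik.http.routers.myservice.tls=true
      - traefik.http.services.myservice.loadbalancer.server.port=8080
```

You will see this exact shape in the Nexus and otel-collector services further down, with real hostnames and ports.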
Success criteria: the DevOps stack is fully installed, and every service is reachable through its domain, for example git.example.com and ci.example.com. If you do not own a domain, Magic DNS such as sslip.io is an alternative.
Note: Magic DNS (e.g. sslip.io) does not send any data out of your intranet. Only a DNS query goes to sslip.io, to resolve the domain name to an IP. That means you do not have to buy a domain or configure DNS A records, and it can be combined with Traefik to issue SSL certificates automatically (giving you HTTPS).
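To make the mechanism concrete: the IP address is embedded in the sslip.io hostname itself, which is why no A record is needed. A small demonstration of the naming scheme (203.0.113.10 is a documentation address, not a real VPS):

```shell
# sslip.io answers <anything>.<ip>.sslip.io with <ip> itself.
# The IP is literally part of the hostname:
host="git.203.0.113.10.sslip.io"
ip=$(echo "$host" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}')
echo "$ip"   # the address the DNS answer will contain
```

So pointing `git.<your-vps-ip>.sslip.io` at Traefik works with zero DNS setup.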
services:
gitlab:
image: 'gitlab/gitlab-ce:latest'
environment:
- SERVICE_URL_GITLAB_80
- 'TZ=${TZ:-UTC}'
- 'GITLAB_TIMEZONE=${GITLAB_TIMEZONE:-UTC}'
- GITLAB_ROOT_PASSWORD=$SERVICE_PASSWORD_GITLAB
- EXTERNAL_URL=$SERVICE_URL_GITLAB
- GITLAB_HOST=$SERVICE_URL_GITLAB
- 'GITLAB_SMTP_ENABLE=${GITLAB_SMTP_ENABLE:-false}'
- GITLAB_SMTP_ADDRESS=$GITLAB_SMTP_ADDRESS
- 'GITLAB_SMTP_PORT=${GITLAB_SMTP_PORT:-587}'
- 'GITLAB_SMTP_USER_NAME=${GITLAB_SMTP_USER_NAME}'
- 'GITLAB_SMTP_PASSWORD=${GITLAB_SMTP_PASSWORD}'
- 'GITLAB_SMTP_DOMAIN=${GITLAB_SMTP_DOMAIN}'
- 'GITLAB_STARTTLS_AUTO=${GITLAB_STARTTLS_AUTO:-true}'
- 'GITLAB_SMTP_TLS=${GITLAB_SMTP_TLS:-false}'
- 'GITLAB_EMAIL_FROM=${GITLAB_EMAIL_FROM}'
- GITLAB_EMAIL_REPLY_TO=$GITLAB_EMAIL_REPLY_TO
- 'GITLAB_OMNIBUS_CONFIG=external_url "${SERVICE_URL_GITLAB}"; nginx["listen_https"] = false; nginx["listen_port"] = 80; gitlab_rails["gitlab_shell_ssh_port"] = 2222; gitlab_rails["smtp_enable"] = ${GITLAB_SMTP_ENABLE}; gitlab_rails["smtp_address"] = "${GITLAB_SMTP_ADDRESS}"; gitlab_rails["smtp_port"] = ${GITLAB_SMTP_PORT}; gitlab_rails["smtp_user_name"] = "${GITLAB_SMTP_USER_NAME}"; gitlab_rails["smtp_password"] = "${GITLAB_SMTP_PASSWORD}"; gitlab_rails["smtp_domain"] = "${GITLAB_SMTP_DOMAIN}"; gitlab_rails["smtp_authentication"] = "login"; gitlab_rails["smtp_enable_starttls_auto"] = ${GITLAB_STARTTLS_AUTO}; gitlab_rails["smtp_tls"] = ${GITLAB_SMTP_TLS}; gitlab_rails["gitlab_email_from"] = "${GITLAB_EMAIL_FROM}"; gitlab_rails["gitlab_email_reply_to"] = "${GITLAB_EMAIL_REPLY_TO}";'
ports:
- '2222:22'
volumes:
- 'gitlab-config:/etc/gitlab'
- 'gitlab-logs:/var/log/gitlab'
- 'gitlab-data:/var/opt/gitlab'
shm_size: 256m
deploy:
resources:
limits:
          memory: 12G # cap memory at roughly 12 GB
          cpus: '4.0' # roughly 4 of the 16 cores, leaving headroom for the other services
    logging: # keep GitLab's log files from filling the disk
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
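GitLab takes several minutes to become ready after `docker compose up`. A small helper, written for this article (the function is illustrative; `/-/readiness` is GitLab's readiness endpoint), that blocks until a service responds:

```shell
# Poll an HTTP endpoint until it returns success, or give up.
# Usage: wait_healthy <url> [attempts] [delay_seconds]
wait_healthy() {
  url=$1; tries=${2:-30}; delay=${3:-10}
  i=1
  while [ "$i" -le "$tries" ]; do
    # -f makes curl fail on HTTP errors, so a 502 from Traefik counts as "not ready"
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}
# example:
#   wait_healthy https://git.example.com/-/readiness && echo "GitLab is up"
```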
services:
jenkins:
image: 'jenkins/jenkins:latest'
environment:
- SERVICE_URL_JENKINS_8080
- JAVA_OPTS=-Xmx3g
- JENKINS_JAVA_OPTIONS=-Dotel.traces.exporter=otlp -Dotel.metrics.exporter=none -Dotel.logs.exporter=none -Dotel.exporter.otlp.protocol=http/protobuf -Dotel.exporter.otlp.endpoint=https://collector.panmodel.com -Dotel.exporter.otlp.traces.endpoint=https://collector.panmodel.com/v1/traces -Dotel.exporter.otlp.traces.protocol=http/protobuf -Dotel.exporter.otlp.metrics.endpoint=https://collector.panmodel.com/v1/metrics -Dotel.exporter.otlp.metrics.protocol=http/protobuf -Dotel.exporter.otlp.logs.endpoint=https://collector.panmodel.com/v1/logs -Dotel.exporter.otlp.logs.protocol=http/protobuf
volumes:
- 'jenkins-home:/var/jenkins_home'
- '/var/run/docker.sock:/var/run/docker.sock'
- '/usr/bin/docker:/usr/bin/docker'
deploy:
resources:
limits:
memory: 4G
cpus: '2.0'
logging:
driver: json-file
options:
max-size: 10m
max-file: '3'
healthcheck:
test:
- CMD
- curl
- '-f'
- 'http://localhost:8080/login'
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
services:
nexus:
image: sonatype/nexus3
platform: linux/amd64
environment:
- NEXUS_SECURITY_RANDOMPASSWORD=false
- 'INSTALL4J_ADD_VM_PARAMS=-Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m -Djava.util.prefs.userRoot=/nexus-data/javaprefs'
volumes:
- 'nexus_data:/nexus-data'
networks:
- cool
labels:
- traefik.enable=true
- traefik.docker.network=cool
- traefik.http.routers.nexus.rule=Host(`nexus.panmodel.com`)
- traefik.http.routers.nexus.entrypoints=https
- traefik.http.routers.nexus.tls=true
- traefik.http.routers.nexus.service=nexus-n0ccw0k8swwsossg8wsok0cs-ui
- traefik.http.services.nexus-n0ccw0k8swwsossg8wsok0cs-ui.loadbalancer.server.port=8081
- traefik.http.routers.nexus-registry.rule=Host(`registry.panmodel.com`)
- traefik.http.routers.nexus-registry.entrypoints=https
- traefik.http.routers.nexus-registry.tls=true
- traefik.http.routers.nexus-registry.service=nexus-n0ccw0k8swwsossg8wsok0cs-reg
- traefik.http.services.nexus-n0ccw0k8swwsossg8wsok0cs-reg.loadbalancer.server.port=5000
deploy:
resources:
limits:
memory: 4G
cpus: '4.0'
logging:
driver: json-file
options:
max-size: 10m
max-file: '3'
networks:
cool:
external: true
volumes:
nexus_data: null
version: '3.8'
services:
sonarqube:
image: 'sonarqube:community'
container_name: sonarqube
depends_on:
db:
condition: service_healthy
environment:
- SONAR_JDBC_URL=jdbc:postgresql://db:5432/sonar
- SONAR_JDBC_USERNAME=sonar
- SONAR_JDBC_PASSWORD=sonar
      # adjust to match the subdomain you configured
- SONAR_WEB_BASE_URL=https://sonar.panmodel.com
volumes:
- 'sonarqube_data:/opt/sonarqube/data'
- 'sonarqube_extensions:/opt/sonarqube/extensions'
- 'sonarqube_logs:/opt/sonarqube/logs'
deploy:
resources:
limits:
          memory: 4G # RAM cap per the sizing plan
cpus: '2.0'
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
db:
image: 'postgres:15'
container_name: postgresql
healthcheck:
test: ["CMD-SHELL", "pg_isready -U sonar -d sonar"]
interval: 10s
timeout: 5s
retries: 5
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
- POSTGRES_DB=sonar
volumes:
- 'postgresql_data:/var/lib/postgresql/data'
deploy:
resources:
limits:
          memory: 1G # give the database 1 GB of RAM
cpus: '1.0'
volumes:
sonarqube_data:
sonarqube_extensions:
sonarqube_logs:
postgresql_data:
The default SonarQube login is admin / admin.
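Change that default right away. One option, assuming the instance is reachable on the subdomain configured earlier, is SonarQube's Web API endpoint `api/users/change_password`; the helper below is an illustrative sketch:

```shell
# Illustrative helper: rotate the default SonarQube admin password
# through the Web API. Arguments: base URL, new password.
change_sonar_password() {
  base=$1; newpass=$2
  curl -fsS -u admin:admin -X POST \
    "$base/api/users/change_password" \
    -d "login=admin" \
    -d "previousPassword=admin" \
    -d "password=$newpass"
}
# example:
#   change_sonar_password https://sonar.example.com 'S0me-Strong-Pass'
```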
services:
init-clickhouse:
image: 'clickhouse/clickhouse-server:25.5.6-alpine'
command:
- bash
- '-c'
- "version=\"v0.0.1\"\nnode_os=$$(uname -s | tr '[:upper:]' '[:lower:]')\nnode_arch=$$(uname -m | sed s/aarch64/arm64/ | sed s/x86_64/amd64/)\necho \"Fetching histogram-binary for $${node_os}/$${node_arch}\"\ncd /tmp\nwget -O histogram-quantile.tar.gz \"https://github.com/SigNoz/signoz/releases/download/histogram-quantile%2F$${version}/histogram-quantile_$${node_os}_$${node_arch}.tar.gz\"\ntar -xvzf histogram-quantile.tar.gz\nmkdir -p /var/lib/clickhouse/user_scripts/histogramQuantile\nmv histogram-quantile /var/lib/clickhouse/user_scripts/histogramQuantile\n"
restart: on-failure
exclude_from_hc: true
logging:
options:
max-size: 50m
max-file: '3'
zookeeper:
image: 'signoz/zookeeper:3.9.3'
user: root
healthcheck:
test:
- CMD-SHELL
- 'curl -s -m 2 http://localhost:8080/commands/ruok | grep error | grep null'
interval: 30s
timeout: 5s
retries: 3
logging:
options:
max-size: 50m
max-file: '3'
volumes:
- 'zookeeper:/bitnami/zookeeper'
environment:
- 'ALLOW_ANONYMOUS_LOGIN=${ZOO_ALLOW_ANONYMOUS_LOGIN:-yes}'
- 'ZOO_AUTOPURGE_INTERVAL=${ZOO_AUTOPURGE_INTERVAL:-1}'
- 'ZOO_ENABLE_PROMETHEUS_METRICS=${ZOO_ENABLE_PROMETHEUS_METRICS:-yes}'
- 'ZOO_PROMETHEUS_METRICS_PORT_NUMBER=${ZOO_PROMETHEUS_METRICS_PORT_NUMBER:-9141}'
clickhouse:
image: 'clickhouse/clickhouse-server:25.5.6-alpine'
tty: true
depends_on:
init-clickhouse:
condition: service_completed_successfully
zookeeper:
condition: service_healthy
healthcheck:
test:
- CMD
- wget
- '--spider'
- '-q'
- '127.0.0.1:8123/ping'
interval: 10s
timeout: 5s
retries: 5
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
logging:
options:
max-size: 50m
max-file: '3'
environment:
- CLICKHOUSE_SKIP_USER_SETUP=1
volumes:
-
type: volume
source: clickhouse
target: /var/lib/clickhouse/
-
type: bind
source: ./clickhouse/custom-function.xml
target: /etc/clickhouse-server/custom-function.xml
-
type: bind
source: ./clickhouse/cluster.xml
target: /etc/clickhouse-server/config.d/cluster.xml
-
type: bind
source: ./clickhouse/users.xml
target: /etc/clickhouse-server/users.xml
-
type: bind
source: ./clickhouse/config.xml
target: /etc/clickhouse-server/config.xml
signoz:
image: 'signoz/signoz:v0.97.1'
depends_on:
clickhouse:
condition: service_healthy
schema-migrator-sync:
condition: service_completed_successfully
logging:
options:
max-size: 50m
max-file: '3'
command:
- '--config=/root/config/prometheus.yml'
volumes:
-
type: bind
source: ./prometheus.yml
target: /root/config/prometheus.yml
-
type: volume
source: sqlite
target: /var/lib/signoz/
environment:
- SERVICE_URL_SIGNOZ_8080
- 'SIGNOZ_JWT_SECRET=${SERVICE_REALBASE64_JWTSECRET}'
- 'SIGNOZ_TELEMETRYSTORE_CLICKHOUSE_DSN=tcp://clickhouse:9000'
- SIGNOZ_SQLSTORE_SQLITE_PATH=/var/lib/signoz/signoz.db
- DASHBOARDS_PATH=/root/config/dashboards
- STORAGE=clickhouse
- GODEBUG=netdns=go
- DEPLOYMENT_TYPE=docker-standalone-amd
- 'SIGNOZ_STATSREPORTER_ENABLED=${SIGNOZ_STATSREPORTER_ENABLED:-true}'
- 'SIGNOZ_EMAILING_ENABLED=${SIGNOZ_EMAILING_ENABLED:-false}'
- 'SIGNOZ_EMAILING_SMTP_ADDRESS=${SIGNOZ_EMAILING_SMTP_ADDRESS}'
- 'SIGNOZ_EMAILING_SMTP_FROM=${SIGNOZ_EMAILING_SMTP_FROM}'
- 'SIGNOZ_EMAILING_SMTP_AUTH_USERNAME=${SIGNOZ_EMAILING_SMTP_AUTH_USERNAME}'
- 'SIGNOZ_EMAILING_SMTP_AUTH_PASSWORD=${SIGNOZ_EMAILING_SMTP_AUTH_PASSWORD}'
- SIGNOZ_ALERTMANAGER_PROVIDER=signoz
- 'SIGNOZ_ALERTMANAGER_SIGNOZ_GLOBAL_SMTP__AUTH__PASSWORD=${SIGNOZ_ALERTMANAGER_SIGNOZ_GLOBAL_SMTP__AUTH__PASSWORD}'
- 'SIGNOZ_ALERTMANAGER_SIGNOZ_GLOBAL_SMTP__AUTH__USERNAME=${SIGNOZ_ALERTMANAGER_SIGNOZ_GLOBAL_SMTP__AUTH__USERNAME}'
- 'SIGNOZ_ALERTMANAGER_SIGNOZ_GLOBAL_SMTP__FROM=${SIGNOZ_ALERTMANAGER_SIGNOZ_GLOBAL_SMTP__FROM}'
- 'SIGNOZ_ALERTMANAGER_SIGNOZ_GLOBAL_SMTP__SMARTHOST=${SIGNOZ_ALERTMANAGER_SIGNOZ_GLOBAL_SMTP__SMARTHOST}'
- DOT_METRICS_ENABLED=true
healthcheck:
test:
- CMD
- wget
- '--spider'
- '-q'
- 'localhost:8080/api/v1/health'
interval: 30s
timeout: 5s
retries: 3
otel-collector:
image: 'signoz/signoz-otel-collector:v0.129.7'
depends_on:
clickhouse:
condition: service_healthy
schema-migrator-sync:
condition: service_completed_successfully
signoz:
condition: service_healthy
logging:
options:
max-size: 50m
max-file: '3'
command:
- '--config=/etc/otel-collector-config.yaml'
- '--manager-config=/etc/manager-config.yaml'
- '--copy-path=/var/tmp/collector-config.yaml'
- '--feature-gates=-pkg.translator.prometheus.NormalizeName'
volumes:
-
type: bind
source: ./otel-collector-config.yaml
target: /etc/otel-collector-config.yaml
-
type: bind
source: ./otel-collector-opamp-config.yaml
target: /etc/manager-config.yaml
environment:
- 'SERVICE_URL_OTELCOLLECTORHTTP=https://collector.panmodel.com'
- 'OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux'
- LOW_CARDINAL_EXCEPTION_GROUPING=false
labels:
- traefik.enable=true
- traefik.docker.network=cool
- traefik.http.routers.otel.rule=Host(`collector.panmodel.com`)
- traefik.http.routers.otel.entrypoints=https
- traefik.http.routers.otel.tls=true
- traefik.http.routers.otel.service=otel-svc
- traefik.http.services.otel-svc.loadbalancer.server.port=4318
healthcheck:
test: 'bash -c "exec 6<> /dev/tcp/localhost/13133"'
interval: 30s
timeout: 5s
retries: 3
schema-migrator-sync:
image: 'signoz/signoz-schema-migrator:v0.129.7'
command:
- sync
- '--dsn=tcp://clickhouse:9000'
- '--up='
depends_on:
clickhouse:
condition: service_healthy
restart: on-failure
exclude_from_hc: true
logging:
options:
max-size: 50m
max-file: '3'
schema-migrator-async:
image: 'signoz/signoz-schema-migrator:v0.129.7'
depends_on:
clickhouse:
condition: service_healthy
schema-migrator-sync:
condition: service_completed_successfully
restart: on-failure
exclude_from_hc: true
logging:
options:
max-size: 50m
max-file: '3'
command:
- async
- '--dsn=tcp://clickhouse:9000'
- '--up='
<clickhouse>
<listen_host>::</listen_host>
<listen_host>0.0.0.0</listen_host>
<http_port>8123</http_port>
<tcp_port>9000</tcp_port>
<user_directories>
<users_xml>
<path>users.xml</path>
</users_xml>
</user_directories>
<logger>
<level>information</level>
<console>1</console>
</logger>
<zookeeper>
<node>
<host>zookeeper</host>
<port>2181</port>
</node>
</zookeeper>
<distributed_ddl>
<path>/clickhouse/task_queue/ddl</path>
</distributed_ddl>
</clickhouse>
<functions>
<function>
<type>executable</type>
<name>histogramQuantile</name>
<return_type>Float64</return_type>
<argument>
<type>Array(Float64)</type>
<name>buckets</name>
</argument>
<argument>
<type>Array(Float64)</type>
<name>counts</name>
</argument>
<argument>
<type>Float64</type>
<name>quantile</name>
</argument>
<format>CSV</format>
<command>./histogramQuantile</command>
</function>
</functions>
<?xml version="1.0"?>
<clickhouse>
<!-- ZooKeeper is used to store metadata about replicas, when using Replicated tables.
Optional. If you don't use replicated tables, you could omit that.
See https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/replication/
-->
<zookeeper>
<node index="1">
<host>zookeeper</host>
<port>2181</port>
</node>
</zookeeper>
<!-- Configuration of clusters that could be used in Distributed tables.
https://clickhouse.com/docs/en/operations/table_engines/distributed/
-->
<remote_servers>
<cluster>
<!-- Inter-server per-cluster secret for Distributed queries
default: no secret (no authentication will be performed)
If set, then Distributed queries will be validated on shards, so at least:
- such cluster should exist on the shard,
- such cluster should have the same secret.
And also (and which is more important), the initial_user will
be used as current user for the query.
Right now the protocol is pretty simple and it only takes into account:
- cluster name
- query
Also it will be nice if the following will be implemented:
- source hostname (see interserver_http_host), but then it will depends from DNS,
it can use IP address instead, but then the you need to get correct on the initiator node.
- target hostname / ip address (same notes as for source hostname)
- time-based security tokens
-->
<!-- <secret></secret> -->
<shard>
<!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
<!-- <internal_replication>false</internal_replication> -->
<!-- Optional. Shard weight when writing data. Default: 1. -->
<!-- <weight>1</weight> -->
<replica>
<host>clickhouse</host>
<port>9000</port>
<!-- Optional. Priority of the replica for load_balancing. Default: 1 (less value has more priority). -->
<!-- <priority>1</priority> -->
</replica>
</shard>
<!-- <shard>
<replica>
<host>clickhouse-2</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>clickhouse-3</host>
<port>9000</port>
</replica>
</shard> -->
</cluster>
</remote_servers>
</clickhouse>
<?xml version="1.0"?>
<clickhouse>
<!-- See also the files in users.d directory where the settings can be overridden. -->
<!-- Profiles of settings. -->
<profiles>
<!-- Default settings. -->
<default>
<!-- Maximum memory usage for processing single query, in bytes. -->
<max_memory_usage>10000000000</max_memory_usage>
<!-- How to choose between replicas during distributed query processing.
random - choose random replica from set of replicas with minimum number of errors
nearest_hostname - from set of replicas with minimum number of errors, choose replica
with minimum number of different symbols between replica's hostname and local hostname
(Hamming distance).
in_order - first live replica is chosen in specified order.
first_or_random - if first replica one has higher number of errors, pick a random one from replicas with minimum number of errors.
-->
<load_balancing>random</load_balancing>
</default>
<!-- Profile that allows only read queries. -->
<readonly>
<readonly>1</readonly>
</readonly>
</profiles>
<!-- Users and ACL. -->
<users>
<!-- If user name was not specified, 'default' user is used. -->
<default>
<!-- See also the files in users.d directory where the password can be overridden.
Password could be specified in plaintext or in SHA256 (in hex format).
If you want to specify password in plaintext (not recommended), place it in 'password' element.
Example: <password>qwerty</password>.
Password could be empty.
If you want to specify SHA256, place it in 'password_sha256_hex' element.
Example: <password_sha256_hex>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>
Restrictions of SHA256: impossibility to connect to ClickHouse using MySQL JS client (as of July 2019).
If you want to specify double SHA1, place it in 'password_double_sha1_hex' element.
Example: <password_double_sha1_hex>e395796d6546b1b65db9d665cd43f0e858dd4303</password_double_sha1_hex>
If you want to specify a previously defined LDAP server (see 'ldap_servers' in the main config) for authentication,
place its name in 'server' element inside 'ldap' element.
Example: <ldap><server>my_ldap_server</server></ldap>
If you want to authenticate the user via Kerberos (assuming Kerberos is enabled, see 'kerberos' in the main config),
place 'kerberos' element instead of 'password' (and similar) elements.
The name part of the canonical principal name of the initiator must match the user name for authentication to succeed.
You can also place 'realm' element inside 'kerberos' element to further restrict authentication to only those requests
whose initiator's realm matches it.
Example: <kerberos />
Example: <kerberos><realm>EXAMPLE.COM</realm></kerberos>
How to generate decent password:
Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
In first line will be password and in second - corresponding SHA256.
How to generate double SHA1:
Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
In first line will be password and in second - corresponding double SHA1.
-->
<password></password>
<!-- List of networks with open access.
To open access from everywhere, specify:
<ip>::/0</ip>
To open access only from localhost, specify:
<ip>::1</ip>
<ip>127.0.0.1</ip>
Each element of list has one of the following forms:
<ip> IP-address or network mask. Examples: 213.180.204.3 or 10.0.0.1/8 or 10.0.0.1/255.255.255.0
2a02:6b8::3 or 2a02:6b8::3/64 or 2a02:6b8::3/ffff:ffff:ffff:ffff::.
<host> Hostname. Example: server01.clickhouse.com.
To check access, DNS query is performed, and all received addresses compared to peer address.
<host_regexp> Regular expression for host names. Example, ^server\d\d-\d\d-\d\.clickhouse\.com$
To check access, DNS PTR query is performed for peer address and then regexp is applied.
Then, for result of PTR query, another DNS query is performed and all received addresses compared to peer address.
Strongly recommended that regexp is ends with $
All results of DNS requests are cached till server restart.
-->
<networks>
<ip>::/0</ip>
</networks>
<!-- Settings profile for user. -->
<profile>default</profile>
<!-- Quota for user. -->
<quota>default</quota>
<!-- User can create other users and grant rights to them. -->
<!-- <access_management>1</access_management> -->
</default>
</users>
<!-- Quotas. -->
<quotas>
<!-- Name of quota. -->
<default>
<!-- Limits for time interval. You could specify many intervals with different limits. -->
<interval>
<!-- Length of interval. -->
<duration>3600</duration>
<!-- No limits. Just calculate resource usage for time interval. -->
<queries>0</queries>
<errors>0</errors>
<result_rows>0</result_rows>
<read_rows>0</read_rows>
<execution_time>0</execution_time>
</interval>
</default>
</quotas>
</clickhouse>
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
prometheus:
config:
global:
scrape_interval: 60s
scrape_configs:
- job_name: otel-collector
static_configs:
- targets:
- localhost:8888
labels:
job_name: otel-collector
processors:
batch:
send_batch_size: 256
send_batch_max_size: 512
timeout: 2s
resourcedetection:
detectors: [env, system]
timeout: 2s
signozspanmetrics/delta:
metrics_exporter: signozclickhousemetrics
metrics_flush_interval: 60s
latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
dimensions_cache_size: 100000
aggregation_temporality: AGGREGATION_TEMPORALITY_DELTA
enable_exp_histogram: true
dimensions:
- name: service.namespace
default: default
- name: deployment.environment
default: default
- name: signoz.collector.id
- name: service.version
- name: browser.platform
- name: browser.mobile
- name: k8s.cluster.name
- name: k8s.node.name
- name: k8s.namespace.name
- name: host.name
- name: host.type
- name: container.name
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: 0.0.0.0:1777
exporters:
clickhousetraces:
datasource: tcp://clickhouse:9000/signoz_traces
low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
use_new_schema: true
signozclickhousemetrics:
dsn: tcp://clickhouse:9000/signoz_metrics
clickhouselogsexporter:
dsn: tcp://clickhouse:9000/signoz_logs
timeout: 10s
use_new_schema: true
service:
telemetry:
logs:
encoding: json
extensions:
- health_check
- pprof
pipelines:
traces:
receivers: [otlp]
processors: [signozspanmetrics/delta, batch]
exporters: [clickhousetraces]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [signozclickhousemetrics]
metrics/prometheus:
receivers: [prometheus]
processors: [batch]
exporters: [signozclickhousemetrics]
logs:
receivers: [otlp]
processors: [batch]
exporters: [clickhouselogsexporter]
server_endpoint: ws://signoz:4320/v1/opamp
docker run --rm ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest \
traces --service "test-over-https" \
--duration 5s \
--rate 5 \
--otlp-endpoint collector.example.com:443 \
--otlp-http
If a service named test-over-https appears in SigNoz, the telemetry pipeline is working end to end.