Jun 21 06:08:53.179709 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025 Jun 21 06:08:53.179760 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 06:08:53.179779 kernel: BIOS-provided physical RAM map: Jun 21 06:08:53.179793 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jun 21 06:08:53.179806 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jun 21 06:08:53.179820 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jun 21 06:08:53.179849 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jun 21 06:08:53.179864 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jun 21 06:08:53.179879 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd32afff] usable Jun 21 06:08:53.179893 kernel: BIOS-e820: [mem 0x00000000bd32b000-0x00000000bd332fff] ACPI data Jun 21 06:08:53.179908 kernel: BIOS-e820: [mem 0x00000000bd333000-0x00000000bf8ecfff] usable Jun 21 06:08:53.179922 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jun 21 06:08:53.179937 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jun 21 06:08:53.179952 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jun 21 06:08:53.179974 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jun 21 06:08:53.179990 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jun 21 06:08:53.180006 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jun 21 06:08:53.180022 kernel: NX (Execute Disable) protection: active Jun 21 06:08:53.180038 kernel: APIC: Static calls initialized Jun 21 06:08:53.180053 kernel: efi: EFI v2.7 by EDK II Jun 21 06:08:53.180070 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32b018 Jun 21 06:08:53.180086 kernel: random: crng init done Jun 21 06:08:53.180105 kernel: secureboot: Secure boot disabled Jun 21 06:08:53.180121 kernel: SMBIOS 2.4 present. 
Jun 21 06:08:53.180137 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 Jun 21 06:08:53.180153 kernel: DMI: Memory slots populated: 1/1 Jun 21 06:08:53.180168 kernel: Hypervisor detected: KVM Jun 21 06:08:53.180184 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 21 06:08:53.180200 kernel: kvm-clock: using sched offset of 15309823685 cycles Jun 21 06:08:53.180217 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 21 06:08:53.180234 kernel: tsc: Detected 2299.998 MHz processor Jun 21 06:08:53.180250 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 21 06:08:53.180271 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 21 06:08:53.180287 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jun 21 06:08:53.180303 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jun 21 06:08:53.180320 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 21 06:08:53.180336 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jun 21 06:08:53.180351 kernel: Using GB pages for direct mapping Jun 21 06:08:53.180367 kernel: ACPI: Early table checksum verification disabled Jun 21 06:08:53.180384 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jun 21 06:08:53.180410 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jun 21 06:08:53.180428 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jun 21 06:08:53.180445 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jun 21 06:08:53.180462 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jun 21 06:08:53.180480 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212) Jun 21 06:08:53.180497 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jun 21 06:08:53.180518 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jun 21 06:08:53.180535 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jun 21 06:08:53.180552 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jun 21 06:08:53.180569 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jun 21 06:08:53.180586 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jun 21 06:08:53.180604 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jun 21 06:08:53.180645 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jun 21 06:08:53.180661 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jun 21 06:08:53.180676 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jun 21 06:08:53.180696 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jun 21 06:08:53.180714 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jun 21 06:08:53.180730 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jun 21 06:08:53.180744 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jun 21 06:08:53.180758 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 21 06:08:53.180774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jun 21 06:08:53.180791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jun 21 06:08:53.180809 
kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Jun 21 06:08:53.180839 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Jun 21 06:08:53.180861 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff] Jun 21 06:08:53.180879 kernel: Zone ranges: Jun 21 06:08:53.180897 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 21 06:08:53.180915 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 21 06:08:53.180933 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jun 21 06:08:53.180950 kernel: Device empty Jun 21 06:08:53.180968 kernel: Movable zone start for each node Jun 21 06:08:53.180985 kernel: Early memory node ranges Jun 21 06:08:53.181003 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jun 21 06:08:53.181025 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jun 21 06:08:53.181042 kernel: node 0: [mem 0x0000000000100000-0x00000000bd32afff] Jun 21 06:08:53.181060 kernel: node 0: [mem 0x00000000bd333000-0x00000000bf8ecfff] Jun 21 06:08:53.181078 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jun 21 06:08:53.181096 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jun 21 06:08:53.181113 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jun 21 06:08:53.181132 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 21 06:08:53.181149 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jun 21 06:08:53.181167 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jun 21 06:08:53.181183 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Jun 21 06:08:53.181205 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jun 21 06:08:53.181223 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jun 21 06:08:53.181240 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 21 06:08:53.181258 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 21 06:08:53.181274 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 21 06:08:53.181290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 21 06:08:53.181308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 21 06:08:53.181325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 21 06:08:53.181343 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 21 06:08:53.181364 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 21 06:08:53.181382 kernel: CPU topo: Max. logical packages: 1 Jun 21 06:08:53.181399 kernel: CPU topo: Max. logical dies: 1 Jun 21 06:08:53.181417 kernel: CPU topo: Max. dies per package: 1 Jun 21 06:08:53.181434 kernel: CPU topo: Max. threads per core: 2 Jun 21 06:08:53.181452 kernel: CPU topo: Num. cores per package: 1 Jun 21 06:08:53.181470 kernel: CPU topo: Num. 
threads per package: 2 Jun 21 06:08:53.181488 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 21 06:08:53.181505 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jun 21 06:08:53.181527 kernel: Booting paravirtualized kernel on KVM Jun 21 06:08:53.181545 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 21 06:08:53.181562 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 21 06:08:53.181579 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 21 06:08:53.181596 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 21 06:08:53.181630 kernel: pcpu-alloc: [0] 0 1 Jun 21 06:08:53.181646 kernel: kvm-guest: PV spinlocks enabled Jun 21 06:08:53.181669 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 21 06:08:53.181686 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 06:08:53.181710 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 21 06:08:53.181727 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 21 06:08:53.181744 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 21 06:08:53.181760 kernel: Fallback order for Node 0: 0 Jun 21 06:08:53.181775 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138 Jun 21 06:08:53.181792 kernel: Policy zone: Normal Jun 21 06:08:53.181809 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 21 06:08:53.181834 kernel: software IO TLB: area num 2. Jun 21 06:08:53.181870 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 21 06:08:53.181889 kernel: Kernel/User page tables isolation: enabled Jun 21 06:08:53.181908 kernel: ftrace: allocating 40093 entries in 157 pages Jun 21 06:08:53.181930 kernel: ftrace: allocated 157 pages with 5 groups Jun 21 06:08:53.181949 kernel: Dynamic Preempt: voluntary Jun 21 06:08:53.181968 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 21 06:08:53.181987 kernel: rcu: RCU event tracing is enabled. Jun 21 06:08:53.182006 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 21 06:08:53.182026 kernel: Trampoline variant of Tasks RCU enabled. Jun 21 06:08:53.182049 kernel: Rude variant of Tasks RCU enabled. Jun 21 06:08:53.182068 kernel: Tracing variant of Tasks RCU enabled. Jun 21 06:08:53.182087 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 21 06:08:53.182106 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 21 06:08:53.182126 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 06:08:53.182145 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 06:08:53.182164 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 06:08:53.182182 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 21 06:08:53.182205 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jun 21 06:08:53.182225 kernel: Console: colour dummy device 80x25 Jun 21 06:08:53.182244 kernel: printk: legacy console [ttyS0] enabled Jun 21 06:08:53.182262 kernel: ACPI: Core revision 20240827 Jun 21 06:08:53.182281 kernel: APIC: Switch to symmetric I/O mode setup Jun 21 06:08:53.182300 kernel: x2apic enabled Jun 21 06:08:53.182319 kernel: APIC: Switched APIC routing to: physical x2apic Jun 21 06:08:53.182338 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jun 21 06:08:53.182358 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jun 21 06:08:53.182381 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jun 21 06:08:53.182399 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jun 21 06:08:53.182418 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jun 21 06:08:53.182437 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 21 06:08:53.182456 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jun 21 06:08:53.182474 kernel: Spectre V2 : Mitigation: IBRS Jun 21 06:08:53.182493 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 21 06:08:53.182512 kernel: RETBleed: Mitigation: IBRS Jun 21 06:08:53.182531 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 21 06:08:53.182553 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jun 21 06:08:53.182572 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 21 06:08:53.182590 kernel: MDS: Mitigation: Clear CPU buffers Jun 21 06:08:53.182609 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 21 06:08:53.182651 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 21 06:08:53.182668 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 21 06:08:53.182687 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 21 06:08:53.182705 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 21 06:08:53.182727 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 21 06:08:53.182746 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jun 21 06:08:53.182764 kernel: Freeing SMP alternatives memory: 32K Jun 21 06:08:53.182782 kernel: pid_max: default: 32768 minimum: 301 Jun 21 06:08:53.182801 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 21 06:08:53.182818 kernel: landlock: Up and running. Jun 21 06:08:53.182846 kernel: SELinux: Initializing. Jun 21 06:08:53.182865 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 21 06:08:53.182883 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 21 06:08:53.182905 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jun 21 06:08:53.182924 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jun 21 06:08:53.182942 kernel: signal: max sigframe size: 1776 Jun 21 06:08:53.182960 kernel: rcu: Hierarchical SRCU implementation. Jun 21 06:08:53.182979 kernel: rcu: Max phase no-delay instances is 400. 
Jun 21 06:08:53.182997 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 21 06:08:53.183016 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 21 06:08:53.183035 kernel: smp: Bringing up secondary CPUs ... Jun 21 06:08:53.183057 kernel: smpboot: x86: Booting SMP configuration: Jun 21 06:08:53.183080 kernel: .... node #0, CPUs: #1 Jun 21 06:08:53.183099 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jun 21 06:08:53.183119 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jun 21 06:08:53.183137 kernel: smp: Brought up 1 node, 2 CPUs Jun 21 06:08:53.183155 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jun 21 06:08:53.183174 kernel: Memory: 7564260K/7860552K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 290712K reserved, 0K cma-reserved) Jun 21 06:08:53.183192 kernel: devtmpfs: initialized Jun 21 06:08:53.183210 kernel: x86/mm: Memory block size: 128MB Jun 21 06:08:53.183229 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jun 21 06:08:53.183251 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 21 06:08:53.183269 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 21 06:08:53.183288 kernel: pinctrl core: initialized pinctrl subsystem Jun 21 06:08:53.183306 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 21 06:08:53.183325 kernel: audit: initializing netlink subsys (disabled) Jun 21 06:08:53.183343 kernel: audit: type=2000 audit(1750486127.986:1): state=initialized audit_enabled=0 res=1 Jun 21 06:08:53.183361 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 21 06:08:53.183379 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 21 06:08:53.183401 kernel: cpuidle: using governor menu Jun 21 06:08:53.183419 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 21 06:08:53.183438 kernel: dca service started, version 1.12.1 Jun 21 06:08:53.183456 kernel: PCI: Using configuration type 1 for base access Jun 21 06:08:53.183475 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 21 06:08:53.183493 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 21 06:08:53.183512 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 21 06:08:53.183530 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 21 06:08:53.183549 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 21 06:08:53.183571 kernel: ACPI: Added _OSI(Module Device) Jun 21 06:08:53.183589 kernel: ACPI: Added _OSI(Processor Device) Jun 21 06:08:53.183607 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 21 06:08:53.186674 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jun 21 06:08:53.186696 kernel: ACPI: Interpreter enabled Jun 21 06:08:53.186715 kernel: ACPI: PM: (supports S0 S3 S5) Jun 21 06:08:53.186733 kernel: ACPI: Using IOAPIC for interrupt routing Jun 21 06:08:53.186752 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 21 06:08:53.186771 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 21 06:08:53.186796 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jun 21 06:08:53.186815 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 21 06:08:53.187080 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 21 06:08:53.187269 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 21 06:08:53.187451 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 21 06:08:53.187474 kernel: PCI host bridge to bus 0000:00 Jun 21 06:08:53.189718 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 21 06:08:53.189940 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 21 06:08:53.190119 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 21 06:08:53.190291 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jun 21 06:08:53.190456 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 21 06:08:53.190714 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jun 21 06:08:53.190932 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jun 21 06:08:53.191126 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jun 21 06:08:53.191317 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 21 06:08:53.191510 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Jun 21 06:08:53.193759 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jun 21 06:08:53.193971 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Jun 21 06:08:53.194174 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jun 21 06:08:53.194362 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Jun 21 06:08:53.194553 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Jun 21 06:08:53.194784 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jun 21 06:08:53.194981 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Jun 21 06:08:53.195164 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Jun 21 06:08:53.195187 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 21 06:08:53.195207 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 21 06:08:53.195225 kernel: ACPI: PCI: 
Interrupt link LNKC configured for IRQ 11 Jun 21 06:08:53.195249 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 21 06:08:53.195268 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 21 06:08:53.195287 kernel: iommu: Default domain type: Translated Jun 21 06:08:53.195306 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 21 06:08:53.195324 kernel: efivars: Registered efivars operations Jun 21 06:08:53.195344 kernel: PCI: Using ACPI for IRQ routing Jun 21 06:08:53.195363 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 21 06:08:53.195381 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jun 21 06:08:53.195400 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jun 21 06:08:53.195422 kernel: e820: reserve RAM buffer [mem 0xbd32b000-0xbfffffff] Jun 21 06:08:53.195439 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jun 21 06:08:53.195457 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jun 21 06:08:53.195476 kernel: vgaarb: loaded Jun 21 06:08:53.195495 kernel: clocksource: Switched to clocksource kvm-clock Jun 21 06:08:53.195513 kernel: VFS: Disk quotas dquot_6.6.0 Jun 21 06:08:53.195532 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 21 06:08:53.195550 kernel: pnp: PnP ACPI init Jun 21 06:08:53.195569 kernel: pnp: PnP ACPI: found 7 devices Jun 21 06:08:53.195593 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 21 06:08:53.196651 kernel: NET: Registered PF_INET protocol family Jun 21 06:08:53.196681 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 21 06:08:53.196701 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 21 06:08:53.196720 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 21 06:08:53.196739 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 21 06:08:53.196760 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 21 06:08:53.196779 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 21 06:08:53.196808 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 21 06:08:53.196833 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 21 06:08:53.196851 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 21 06:08:53.196869 kernel: NET: Registered PF_XDP protocol family Jun 21 06:08:53.197066 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 21 06:08:53.197244 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 21 06:08:53.197417 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 21 06:08:53.197584 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jun 21 06:08:53.197809 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 21 06:08:53.197845 kernel: PCI: CLS 0 bytes, default 64 Jun 21 06:08:53.197866 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 21 06:08:53.197887 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jun 21 06:08:53.197907 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 21 06:08:53.197928 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jun 21 06:08:53.197948 kernel: clocksource: Switched to clocksource tsc Jun 21 06:08:53.197967 
kernel: Initialise system trusted keyrings Jun 21 06:08:53.197988 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 21 06:08:53.198012 kernel: Key type asymmetric registered Jun 21 06:08:53.198032 kernel: Asymmetric key parser 'x509' registered Jun 21 06:08:53.198052 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 21 06:08:53.198072 kernel: io scheduler mq-deadline registered Jun 21 06:08:53.198091 kernel: io scheduler kyber registered Jun 21 06:08:53.198112 kernel: io scheduler bfq registered Jun 21 06:08:53.198131 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 21 06:08:53.198153 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 21 06:08:53.198378 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jun 21 06:08:53.198408 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jun 21 06:08:53.203246 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jun 21 06:08:53.203286 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 21 06:08:53.203509 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jun 21 06:08:53.203533 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 21 06:08:53.203552 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 21 06:08:53.203572 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 21 06:08:53.203591 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jun 21 06:08:53.203628 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jun 21 06:08:53.203851 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jun 21 06:08:53.203877 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 21 06:08:53.203895 kernel: i8042: Warning: Keylock active Jun 21 06:08:53.203912 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 21 06:08:53.203929 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 21 06:08:53.204122 kernel: rtc_cmos 00:00: RTC can wake from S4 Jun 21 06:08:53.204293 kernel: rtc_cmos 00:00: registered as rtc0 Jun 21 06:08:53.204470 kernel: rtc_cmos 00:00: setting system clock to 2025-06-21T06:08:52 UTC (1750486132) Jun 21 06:08:53.204778 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jun 21 06:08:53.204817 kernel: intel_pstate: CPU model not supported Jun 21 06:08:53.204838 kernel: pstore: Using crash dump compression: deflate Jun 21 06:08:53.204858 kernel: pstore: Registered efi_pstore as persistent store backend Jun 21 06:08:53.204878 kernel: NET: Registered PF_INET6 protocol family Jun 21 06:08:53.204898 kernel: Segment Routing with IPv6 Jun 21 06:08:53.204918 kernel: In-situ OAM (IOAM) with IPv6 Jun 21 06:08:53.204938 kernel: NET: Registered PF_PACKET protocol family Jun 21 06:08:53.204964 kernel: Key type dns_resolver registered Jun 21 06:08:53.204980 kernel: IPI shorthand broadcast: enabled Jun 21 06:08:53.204997 kernel: sched_clock: Marking stable (4062004066, 335792742)->(4818438739, -420641931) Jun 21 06:08:53.205014 kernel: registered taskstats version 1 Jun 21 06:08:53.205031 kernel: Loading compiled-in X.509 certificates Jun 21 06:08:53.205046 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7' Jun 21 06:08:53.205064 kernel: Demotion targets for Node 0: null Jun 21 06:08:53.205081 kernel: Key type .fscrypt registered Jun 21 06:08:53.205098 kernel: Key type fscrypt-provisioning registered Jun 21 
06:08:53.205122 kernel: ima: Allocated hash algorithm: sha1 Jun 21 06:08:53.205141 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jun 21 06:08:53.205156 kernel: ima: No architecture policies found Jun 21 06:08:53.205172 kernel: clk: Disabling unused clocks Jun 21 06:08:53.205189 kernel: Warning: unable to open an initial console. Jun 21 06:08:53.205208 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 21 06:08:53.205228 kernel: Write protecting the kernel read-only data: 24576k Jun 21 06:08:53.205246 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 21 06:08:53.205269 kernel: Run /init as init process Jun 21 06:08:53.205288 kernel: with arguments: Jun 21 06:08:53.205307 kernel: /init Jun 21 06:08:53.205323 kernel: with environment: Jun 21 06:08:53.205604 kernel: HOME=/ Jun 21 06:08:53.205638 kernel: TERM=linux Jun 21 06:08:53.205656 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 21 06:08:53.205807 systemd[1]: Successfully made /usr/ read-only. Jun 21 06:08:53.205833 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 06:08:53.205864 systemd[1]: Detected virtualization google. Jun 21 06:08:53.205889 systemd[1]: Detected architecture x86-64. Jun 21 06:08:53.205914 systemd[1]: Running in initrd. Jun 21 06:08:53.206074 systemd[1]: No hostname configured, using default hostname. Jun 21 06:08:53.206093 systemd[1]: Hostname set to . Jun 21 06:08:53.206111 systemd[1]: Initializing machine ID from random generator. Jun 21 06:08:53.206130 systemd[1]: Queued start job for default target initrd.target. Jun 21 06:08:53.206154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 06:08:53.206309 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 06:08:53.206333 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 21 06:08:53.206353 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 06:08:53.206372 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 21 06:08:53.206517 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 21 06:08:53.206538 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 21 06:08:53.206558 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 21 06:08:53.206577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 06:08:53.206597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 06:08:53.206752 systemd[1]: Reached target paths.target - Path Units. Jun 21 06:08:53.206772 systemd[1]: Reached target slices.target - Slice Units. Jun 21 06:08:53.206798 systemd[1]: Reached target swap.target - Swaps. Jun 21 06:08:53.206821 systemd[1]: Reached target timers.target - Timer Units. Jun 21 06:08:53.206841 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jun 21 06:08:53.206967 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 06:08:53.206987 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 21 06:08:53.207006 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 21 06:08:53.207026 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 06:08:53.207045 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 06:08:53.207065 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 06:08:53.207088 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 06:08:53.207107 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 21 06:08:53.207127 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 06:08:53.207146 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 21 06:08:53.207169 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 21 06:08:53.207189 systemd[1]: Starting systemd-fsck-usr.service... Jun 21 06:08:53.207208 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 06:08:53.207228 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 06:08:53.207247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 06:08:53.207270 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 21 06:08:53.207291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 06:08:53.207311 systemd[1]: Finished systemd-fsck-usr.service. Jun 21 06:08:53.207331 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 06:08:53.207391 systemd-journald[207]: Collecting audit messages is disabled. Jun 21 06:08:53.207433 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 06:08:53.207453 systemd-journald[207]: Journal started Jun 21 06:08:53.207496 systemd-journald[207]: Runtime Journal (/run/log/journal/95c7ae6aac2941d6952a46f244592e5a) is 8M, max 148.9M, 140.9M free. Jun 21 06:08:53.168113 systemd-modules-load[208]: Inserted module 'overlay' Jun 21 06:08:53.214738 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 06:08:53.212233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:08:53.224639 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 21 06:08:53.230081 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 21 06:08:53.232640 kernel: Bridge firewalling registered Jun 21 06:08:53.233444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 06:08:53.233719 systemd-modules-load[208]: Inserted module 'br_netfilter' Jun 21 06:08:53.248797 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 06:08:53.253669 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 06:08:53.260807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 21 06:08:53.262934 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 06:08:53.275819 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 06:08:53.281761 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 21 06:08:53.289699 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 21 06:08:53.291058 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 06:08:53.301044 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 06:08:53.306947 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 06:08:53.333149 dracut-cmdline[241]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 06:08:53.380443 systemd-resolved[245]: Positive Trust Anchors: Jun 21 06:08:53.380866 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 06:08:53.380943 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 06:08:53.385788 systemd-resolved[245]: Defaulting to hostname 'linux'. Jun 21 06:08:53.387451 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 06:08:53.405861 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 06:08:53.456663 kernel: SCSI subsystem initialized Jun 21 06:08:53.468662 kernel: Loading iSCSI transport class v2.0-870. Jun 21 06:08:53.480653 kernel: iscsi: registered transport (tcp) Jun 21 06:08:53.506002 kernel: iscsi: registered transport (qla4xxx) Jun 21 06:08:53.506080 kernel: QLogic iSCSI HBA Driver Jun 21 06:08:53.529335 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 06:08:53.560214 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 06:08:53.566548 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 06:08:53.626025 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 06:08:53.628441 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 21 06:08:53.689653 kernel: raid6: avx2x4 gen() 17961 MB/s Jun 21 06:08:53.706647 kernel: raid6: avx2x2 gen() 18187 MB/s Jun 21 06:08:53.724792 kernel: raid6: avx2x1 gen() 13905 MB/s Jun 21 06:08:53.724878 kernel: raid6: using algorithm avx2x2 gen() 18187 MB/s Jun 21 06:08:53.743100 kernel: raid6: .... 
xor() 18377 MB/s, rmw enabled Jun 21 06:08:53.743179 kernel: raid6: using avx2x2 recovery algorithm Jun 21 06:08:53.765652 kernel: xor: automatically using best checksumming function avx Jun 21 06:08:53.952659 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 06:08:53.961004 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 06:08:53.963287 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 06:08:53.995365 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jun 21 06:08:54.004419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 06:08:54.010194 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 06:08:54.044032 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Jun 21 06:08:54.075909 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 06:08:54.082489 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 06:08:54.185718 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 06:08:54.193726 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 06:08:54.301642 kernel: cryptd: max_cpu_qlen set to 1000 Jun 21 06:08:54.312639 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 21 06:08:54.314644 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Jun 21 06:08:54.338641 kernel: AES CTR mode by8 optimization enabled Jun 21 06:08:54.410829 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 06:08:54.411032 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:08:54.427774 kernel: scsi host0: Virtio SCSI HBA Jun 21 06:08:54.414839 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 06:08:54.436057 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 06:08:54.446756 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jun 21 06:08:54.449582 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 06:08:54.476243 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jun 21 06:08:54.476562 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jun 21 06:08:54.477573 kernel: sd 0:0:1:0: [sda] Write Protect is off Jun 21 06:08:54.477851 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jun 21 06:08:54.479997 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jun 21 06:08:54.483752 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:08:54.491947 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 21 06:08:54.492001 kernel: GPT:17805311 != 25165823 Jun 21 06:08:54.492026 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 21 06:08:54.492776 kernel: GPT:17805311 != 25165823 Jun 21 06:08:54.494039 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 21 06:08:54.494070 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 21 06:08:54.496027 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jun 21 06:08:54.587893 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jun 21 06:08:54.588678 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jun 21 06:08:54.615285 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jun 21 06:08:54.629148 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jun 21 06:08:54.643276 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jun 21 06:08:54.643533 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jun 21 06:08:54.649066 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 06:08:54.653933 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 06:08:54.658926 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 06:08:54.664120 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 06:08:54.670996 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 06:08:54.692805 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 06:08:54.705642 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 21 06:08:54.706026 disk-uuid[608]: Primary Header is updated. Jun 21 06:08:54.706026 disk-uuid[608]: Secondary Entries is updated. Jun 21 06:08:54.706026 disk-uuid[608]: Secondary Header is updated. Jun 21 06:08:55.744889 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 21 06:08:55.744970 disk-uuid[616]: The operation has completed successfully. Jun 21 06:08:55.820916 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 06:08:55.821110 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 06:08:55.873794 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 06:08:55.895794 sh[630]: Success Jun 21 06:08:55.919429 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 06:08:55.919508 kernel: device-mapper: uevent: version 1.0.3 Jun 21 06:08:55.920316 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 06:08:55.932655 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jun 21 06:08:56.022953 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 06:08:56.027747 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 21 06:08:56.048502 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 06:08:56.072039 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 06:08:56.072112 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (642) Jun 21 06:08:56.075804 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9 Jun 21 06:08:56.075870 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 21 06:08:56.075896 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 06:08:56.097204 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 06:08:56.098093 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 06:08:56.101188 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 06:08:56.102680 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jun 21 06:08:56.114835 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 06:08:56.161687 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (677) Jun 21 06:08:56.163653 kernel: BTRFS info (device sda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 06:08:56.163717 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 06:08:56.165740 kernel: BTRFS info (device sda6): using free-space-tree Jun 21 06:08:56.176635 kernel: BTRFS info (device sda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 06:08:56.177272 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 06:08:56.181346 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 21 06:08:56.269345 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 06:08:56.280577 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 06:08:56.412833 systemd-networkd[812]: lo: Link UP Jun 21 06:08:56.412851 systemd-networkd[812]: lo: Gained carrier Jun 21 06:08:56.416303 systemd-networkd[812]: Enumeration completed Jun 21 06:08:56.416449 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 06:08:56.417136 systemd[1]: Reached target network.target - Network. Jun 21 06:08:56.418689 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 06:08:56.418697 systemd-networkd[812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 06:08:56.421454 systemd-networkd[812]: eth0: Link UP Jun 21 06:08:56.436315 ignition[734]: Ignition 2.21.0 Jun 21 06:08:56.421461 systemd-networkd[812]: eth0: Gained carrier Jun 21 06:08:56.436324 ignition[734]: Stage: fetch-offline Jun 21 06:08:56.421478 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 06:08:56.436363 ignition[734]: no configs at "/usr/lib/ignition/base.d" Jun 21 06:08:56.437171 systemd-networkd[812]: eth0: DHCPv4 address 10.128.0.41/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jun 21 06:08:56.436384 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 21 06:08:56.439851 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 06:08:56.436484 ignition[734]: parsed url from cmdline: "" Jun 21 06:08:56.444762 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 21 06:08:56.436490 ignition[734]: no config URL provided Jun 21 06:08:56.436497 ignition[734]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 06:08:56.436508 ignition[734]: no config at "/usr/lib/ignition/user.ign" Jun 21 06:08:56.436516 ignition[734]: failed to fetch config: resource requires networking Jun 21 06:08:56.436796 ignition[734]: Ignition finished successfully Jun 21 06:08:56.474734 ignition[821]: Ignition 2.21.0 Jun 21 06:08:56.485552 unknown[821]: fetched base config from "system" Jun 21 06:08:56.474742 ignition[821]: Stage: fetch Jun 21 06:08:56.485563 unknown[821]: fetched base config from "system" Jun 21 06:08:56.474949 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jun 21 06:08:56.485571 unknown[821]: fetched user config from "gcp" Jun 21 06:08:56.474972 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 21 06:08:56.488962 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 21 06:08:56.475220 ignition[821]: parsed url from cmdline: "" Jun 21 06:08:56.494311 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 06:08:56.475229 ignition[821]: no config URL provided Jun 21 06:08:56.475239 ignition[821]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 06:08:56.475256 ignition[821]: no config at "/usr/lib/ignition/user.ign" Jun 21 06:08:56.475317 ignition[821]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jun 21 06:08:56.480195 ignition[821]: GET result: OK Jun 21 06:08:56.480253 ignition[821]: parsing config with SHA512: ceb22b0145db3f6cab471e5b9151b8d89b481336b7997bbdb6be1c79ca68099609efe3d695c03db0ccb7532bc8cbab9c9ab5c433bce10fbc386093c3c3beed50 Jun 21 06:08:56.485915 ignition[821]: fetch: fetch complete Jun 21 06:08:56.485921 ignition[821]: fetch: fetch passed Jun 21 06:08:56.485972 ignition[821]: Ignition finished successfully Jun 21 06:08:56.540702 ignition[828]: Ignition 2.21.0 Jun 21 06:08:56.540712 ignition[828]: Stage: kargs Jun 21 06:08:56.545400 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 06:08:56.540897 ignition[828]: no configs at "/usr/lib/ignition/base.d" Jun 21 06:08:56.548498 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 21 06:08:56.540909 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 21 06:08:56.543133 ignition[828]: kargs: kargs passed Jun 21 06:08:56.543250 ignition[828]: Ignition finished successfully Jun 21 06:08:56.586474 ignition[835]: Ignition 2.21.0 Jun 21 06:08:56.586492 ignition[835]: Stage: disks Jun 21 06:08:56.587287 ignition[835]: no configs at "/usr/lib/ignition/base.d" Jun 21 06:08:56.591365 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 06:08:56.587311 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 21 06:08:56.595157 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 06:08:56.589603 ignition[835]: disks: disks passed Jun 21 06:08:56.600765 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 06:08:56.589706 ignition[835]: Ignition finished successfully Jun 21 06:08:56.604744 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 06:08:56.608713 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 06:08:56.612750 systemd[1]: Reached target basic.target - Basic System. 
Jun 21 06:08:56.618340 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 21 06:08:56.658576 systemd-fsck[844]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jun 21 06:08:56.668177 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 06:08:56.673724 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 06:08:56.856662 kernel: EXT4-fs (sda9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none. Jun 21 06:08:56.857728 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 06:08:56.861443 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 06:08:56.867253 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 06:08:56.882733 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 06:08:56.884781 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 21 06:08:56.884880 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 06:08:56.884933 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 06:08:56.899647 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (852) Jun 21 06:08:56.904288 kernel: BTRFS info (device sda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 06:08:56.904341 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 06:08:56.904368 kernel: BTRFS info (device sda6): using free-space-tree Jun 21 06:08:56.905806 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 06:08:56.910778 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 21 06:08:56.920568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 06:08:57.041997 initrd-setup-root[876]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 06:08:57.051168 initrd-setup-root[883]: cut: /sysroot/etc/group: No such file or directory Jun 21 06:08:57.057712 initrd-setup-root[890]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 06:08:57.063944 initrd-setup-root[897]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 06:08:57.216592 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 06:08:57.219433 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 06:08:57.234817 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 06:08:57.246907 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 06:08:57.251053 kernel: BTRFS info (device sda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 06:08:57.289798 ignition[964]: INFO : Ignition 2.21.0 Jun 21 06:08:57.289798 ignition[964]: INFO : Stage: mount Jun 21 06:08:57.296747 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 06:08:57.296747 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 21 06:08:57.296747 ignition[964]: INFO : mount: mount passed Jun 21 06:08:57.296747 ignition[964]: INFO : Ignition finished successfully Jun 21 06:08:57.290410 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 21 06:08:57.293386 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jun 21 06:08:57.300077 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 06:08:57.328597 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 06:08:57.359654 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (976) Jun 21 06:08:57.362293 kernel: BTRFS info (device sda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 06:08:57.362361 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 06:08:57.362388 kernel: BTRFS info (device sda6): using free-space-tree Jun 21 06:08:57.370358 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 06:08:57.407550 ignition[993]: INFO : Ignition 2.21.0 Jun 21 06:08:57.407550 ignition[993]: INFO : Stage: files Jun 21 06:08:57.413757 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 06:08:57.413757 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 21 06:08:57.413757 ignition[993]: DEBUG : files: compiled without relabeling support, skipping Jun 21 06:08:57.413757 ignition[993]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 06:08:57.413757 ignition[993]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 06:08:57.431802 ignition[993]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 06:08:57.431802 ignition[993]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 06:08:57.431802 ignition[993]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 06:08:57.431802 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 21 06:08:57.431802 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 21 06:08:57.416783 unknown[993]: wrote ssh authorized keys file for user: core Jun 21 06:08:57.574436 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 06:08:57.822558 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 21 06:08:57.827818 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 06:08:57.827818 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 21 06:08:57.980898 systemd-networkd[812]: eth0: Gained IPv6LL Jun 21 06:08:58.109014 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 21 06:08:58.253294 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 06:08:58.253294 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 06:08:58.261744 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jun 21 06:08:58.652940 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 21 06:08:59.004827 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 21 06:08:59.004827 ignition[993]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 21 06:08:59.012774 ignition[993]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 06:08:59.012774 ignition[993]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 06:08:59.012774 ignition[993]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 21 06:08:59.012774 ignition[993]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 21 06:08:59.012774 ignition[993]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 21 06:08:59.038770 ignition[993]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 21 06:08:59.038770 ignition[993]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 21 06:08:59.038770 ignition[993]: INFO : files: files passed Jun 21 06:08:59.038770 ignition[993]: INFO : Ignition finished successfully Jun 21 06:08:59.016105 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 21 06:08:59.023046 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
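The Ignition files stage logged above downloads the Helm tarball and the cilium CLI over HTTPS, drops several manifests under /home/core, links kubernetes.raw into /etc/extensions, and writes and enables prepare-helm.service. For illustration only, here is a minimal Python sketch of an Ignition v3 storage.files entry that would request the op(3) Helm write; the spec version and file mode are assumptions, since the config actually served to this instance does not appear in the log.

import json

# Hypothetical entry mirroring the op(3) write seen in the log; the URL and
# target path are copied from the log lines, everything else is assumed.
helm_entry = {
    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
    "mode": 0o644,  # serialized as decimal 420, the integer form Ignition expects
    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
}

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "storage": {"files": [helm_entry]},
}

print(json.dumps(config, indent=2))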
Jun 21 06:08:59.030121 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 21 06:08:59.073859 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 06:08:59.073859 initrd-setup-root-after-ignition[1022]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 21 06:08:59.049798 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 21 06:08:59.090768 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 06:08:59.049954 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 21 06:08:59.065052 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 06:08:59.070521 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 21 06:08:59.078281 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 21 06:08:59.136471 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 21 06:08:59.136685 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 21 06:08:59.142511 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 21 06:08:59.148817 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 21 06:08:59.152950 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 21 06:08:59.154586 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 21 06:08:59.193162 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 06:08:59.196251 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 21 06:08:59.241133 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 21 06:08:59.245911 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 06:08:59.246318 systemd[1]: Stopped target timers.target - Timer Units. Jun 21 06:08:59.252371 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 21 06:08:59.252718 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 06:08:59.261216 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 21 06:08:59.264206 systemd[1]: Stopped target basic.target - Basic System. Jun 21 06:08:59.269358 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 21 06:08:59.274170 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 06:08:59.279134 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 21 06:08:59.284083 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 21 06:08:59.289177 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 21 06:08:59.294199 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 06:08:59.298114 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 21 06:08:59.303130 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 21 06:08:59.307064 systemd[1]: Stopped target swap.target - Swaps. Jun 21 06:08:59.311098 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 21 06:08:59.311331 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jun 21 06:08:59.321078 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 21 06:08:59.328004 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 06:08:59.331076 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 21 06:08:59.331244 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 06:08:59.337236 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 21 06:08:59.337599 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 21 06:08:59.345022 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 21 06:08:59.345564 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 06:08:59.348120 systemd[1]: ignition-files.service: Deactivated successfully. Jun 21 06:08:59.348322 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 21 06:08:59.352540 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 21 06:08:59.363892 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 21 06:08:59.364267 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 06:08:59.376652 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 21 06:08:59.383800 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 21 06:08:59.384065 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 06:08:59.387126 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 21 06:08:59.387328 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 06:08:59.402753 ignition[1047]: INFO : Ignition 2.21.0 Jun 21 06:08:59.402753 ignition[1047]: INFO : Stage: umount Jun 21 06:08:59.402753 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 06:08:59.402753 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 21 06:08:59.402753 ignition[1047]: INFO : umount: umount passed Jun 21 06:08:59.409162 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 21 06:08:59.417291 ignition[1047]: INFO : Ignition finished successfully Jun 21 06:08:59.409349 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 21 06:08:59.421970 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 21 06:08:59.426862 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 21 06:08:59.426981 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 21 06:08:59.434442 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 21 06:08:59.434708 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 21 06:08:59.441920 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 21 06:08:59.442072 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 21 06:08:59.445831 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 21 06:08:59.445915 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 21 06:08:59.452984 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 21 06:08:59.453072 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 21 06:08:59.456030 systemd[1]: Stopped target network.target - Network. Jun 21 06:08:59.459963 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jun 21 06:08:59.460036 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 06:08:59.464004 systemd[1]: Stopped target paths.target - Path Units. Jun 21 06:08:59.467913 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 21 06:08:59.473711 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 06:08:59.474947 systemd[1]: Stopped target slices.target - Slice Units. Jun 21 06:08:59.478876 systemd[1]: Stopped target sockets.target - Socket Units. Jun 21 06:08:59.483991 systemd[1]: iscsid.socket: Deactivated successfully. Jun 21 06:08:59.484196 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 06:08:59.488064 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 21 06:08:59.488133 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 06:08:59.491951 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 21 06:08:59.492152 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 21 06:08:59.496959 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 21 06:08:59.497148 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 21 06:08:59.500935 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 21 06:08:59.501114 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 21 06:08:59.505349 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 21 06:08:59.514932 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 21 06:08:59.521517 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 21 06:08:59.521807 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 21 06:08:59.524742 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 21 06:08:59.525701 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 21 06:08:59.531995 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 21 06:08:59.532063 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 21 06:08:59.539298 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 21 06:08:59.544917 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 21 06:08:59.545006 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 06:08:59.554880 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 06:08:59.556936 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 21 06:08:59.557561 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 21 06:08:59.566167 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 21 06:08:59.567812 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 06:08:59.567981 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 06:08:59.571278 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 21 06:08:59.571481 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 21 06:08:59.577803 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 21 06:08:59.577890 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
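The entries above show the initrd tearing down its networking and Ignition units (systemd-networkd, systemd-resolved, the ignition-* services) before the root switch. As a loose, hypothetical helper rather than anything the boot actually runs, the sketch below asks systemd whether a few of those units are still active after boot; it assumes systemctl is on PATH and only reports the is-active state for unit names copied from the log.

import subprocess

UNITS = [
    "ignition-files.service",
    "ignition-setup.service",
    "systemd-networkd.service",
    "systemd-resolved.service",
]

for unit in UNITS:
    # systemctl prints "active", "inactive", "failed", or similar on stdout.
    result = subprocess.run(["systemctl", "is-active", unit],
                            capture_output=True, text=True)
    print(f"{unit}: {result.stdout.strip() or 'unknown'}")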
Jun 21 06:08:59.588862 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 06:08:59.588935 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 21 06:08:59.589329 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 21 06:08:59.589484 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 06:08:59.593158 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 21 06:08:59.593296 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 21 06:08:59.600288 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 21 06:08:59.600404 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 21 06:08:59.602011 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 21 06:08:59.602061 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 06:08:59.608735 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 21 06:08:59.608817 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 21 06:08:59.616720 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 21 06:08:59.616815 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 21 06:08:59.623780 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 21 06:08:59.623883 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 06:08:59.630165 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 21 06:08:59.639741 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 21 06:08:59.639863 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 06:08:59.642973 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 21 06:08:59.643049 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 06:08:59.648903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 06:08:59.648984 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:08:59.739785 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Jun 21 06:08:59.655212 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 21 06:08:59.655445 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 21 06:08:59.655693 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 06:08:59.657222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 21 06:08:59.657327 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 21 06:08:59.661913 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 21 06:08:59.668972 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 21 06:08:59.694962 systemd[1]: Switching root. 
Jun 21 06:08:59.763699 systemd-journald[207]: Journal stopped Jun 21 06:09:01.907183 kernel: SELinux: policy capability network_peer_controls=1 Jun 21 06:09:01.907379 kernel: SELinux: policy capability open_perms=1 Jun 21 06:09:01.907406 kernel: SELinux: policy capability extended_socket_class=1 Jun 21 06:09:01.907425 kernel: SELinux: policy capability always_check_network=0 Jun 21 06:09:01.907445 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 21 06:09:01.907603 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 21 06:09:01.907653 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 21 06:09:01.907837 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 21 06:09:01.907861 kernel: SELinux: policy capability userspace_initial_context=0 Jun 21 06:09:01.907880 kernel: audit: type=1403 audit(1750486140.408:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 21 06:09:01.907905 systemd[1]: Successfully loaded SELinux policy in 48.536ms. Jun 21 06:09:01.908122 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.723ms. Jun 21 06:09:01.908154 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 06:09:01.908183 systemd[1]: Detected virtualization google. Jun 21 06:09:01.908207 systemd[1]: Detected architecture x86-64. Jun 21 06:09:01.908377 systemd[1]: Detected first boot. Jun 21 06:09:01.908403 systemd[1]: Initializing machine ID from random generator. Jun 21 06:09:01.908425 zram_generator::config[1090]: No configuration found. Jun 21 06:09:01.908454 kernel: Guest personality initialized and is inactive Jun 21 06:09:01.908476 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 21 06:09:01.908647 kernel: Initialized host personality Jun 21 06:09:01.908666 kernel: NET: Registered PF_VSOCK protocol family Jun 21 06:09:01.908693 systemd[1]: Populated /etc with preset unit settings. Jun 21 06:09:01.908834 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 21 06:09:01.908860 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 21 06:09:01.908890 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 21 06:09:01.908913 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 21 06:09:01.908936 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 21 06:09:01.908959 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 21 06:09:01.908983 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 21 06:09:01.909006 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 21 06:09:01.909029 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 21 06:09:01.909058 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 21 06:09:01.909083 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 21 06:09:01.909107 systemd[1]: Created slice user.slice - User and Session Slice. Jun 21 06:09:01.909131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
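The "systemd 256.8 running in system mode" line above packs the build-time feature flags into one string, each token prefixed with '+' (compiled in) or '-' (compiled out). A small parsing sketch, using the feature string copied verbatim from that line:

features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB "
            "+ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

enabled = [tok[1:] for tok in features.split() if tok.startswith("+")]
disabled = [tok[1:] for tok in features.split() if tok.startswith("-")]
print(f"{len(enabled)} enabled:", ", ".join(enabled))
print(f"{len(disabled)} disabled:", ", ".join(disabled))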
Jun 21 06:09:01.909156 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 06:09:01.909179 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 21 06:09:01.909203 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 21 06:09:01.909226 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 21 06:09:01.909257 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 06:09:01.909286 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 21 06:09:01.909311 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 06:09:01.909334 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 06:09:01.909357 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 21 06:09:01.909382 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 21 06:09:01.909405 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 21 06:09:01.909429 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 21 06:09:01.909457 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 06:09:01.909481 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 06:09:01.909505 systemd[1]: Reached target slices.target - Slice Units. Jun 21 06:09:01.909529 systemd[1]: Reached target swap.target - Swaps. Jun 21 06:09:01.909553 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 21 06:09:01.909576 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 21 06:09:01.909601 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 21 06:09:01.909650 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 06:09:01.909672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 06:09:01.909701 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 06:09:01.909722 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 21 06:09:01.909744 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 21 06:09:01.909765 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 21 06:09:01.909790 systemd[1]: Mounting media.mount - External Media Directory... Jun 21 06:09:01.909812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:09:01.909834 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 21 06:09:01.909857 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 21 06:09:01.909879 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 21 06:09:01.909903 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 21 06:09:01.909925 systemd[1]: Reached target machines.target - Containers. Jun 21 06:09:01.909948 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jun 21 06:09:01.909977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 06:09:01.910002 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 06:09:01.910025 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 21 06:09:01.910049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 06:09:01.910071 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 06:09:01.910093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 06:09:01.910114 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 21 06:09:01.910135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 06:09:01.910157 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 21 06:09:01.910185 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 21 06:09:01.910206 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 21 06:09:01.910228 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 21 06:09:01.910250 systemd[1]: Stopped systemd-fsck-usr.service. Jun 21 06:09:01.910274 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 06:09:01.910296 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 06:09:01.910318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 06:09:01.910341 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 06:09:01.910368 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 21 06:09:01.910390 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 21 06:09:01.910412 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 06:09:01.910434 systemd[1]: verity-setup.service: Deactivated successfully. Jun 21 06:09:01.910455 systemd[1]: Stopped verity-setup.service. Jun 21 06:09:01.910478 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:09:01.910499 kernel: ACPI: bus type drm_connector registered Jun 21 06:09:01.910520 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 21 06:09:01.910546 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 21 06:09:01.910568 systemd[1]: Mounted media.mount - External Media Directory. Jun 21 06:09:01.910589 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 21 06:09:01.910632 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 21 06:09:01.910655 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 21 06:09:01.910685 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 06:09:01.910707 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 21 06:09:01.910730 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jun 21 06:09:01.910752 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 06:09:01.910777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 06:09:01.910799 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 06:09:01.910821 kernel: fuse: init (API version 7.41) Jun 21 06:09:01.910841 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 06:09:01.910864 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 06:09:01.910887 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 06:09:01.910908 kernel: loop: module loaded Jun 21 06:09:01.910928 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 21 06:09:01.910954 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 21 06:09:01.910976 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 21 06:09:01.910998 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 06:09:01.911020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 06:09:01.911042 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 06:09:01.911064 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 06:09:01.911127 systemd-journald[1161]: Collecting audit messages is disabled. Jun 21 06:09:01.911190 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 21 06:09:01.911217 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 21 06:09:01.911240 systemd-journald[1161]: Journal started Jun 21 06:09:01.911286 systemd-journald[1161]: Runtime Journal (/run/log/journal/c1ba5fc38c6a4d7db8545dc2519ba88c) is 8M, max 148.9M, 140.9M free. Jun 21 06:09:01.327793 systemd[1]: Queued start job for default target multi-user.target. Jun 21 06:09:01.353956 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 21 06:09:01.354679 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 21 06:09:01.917797 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 06:09:01.937844 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 06:09:01.945755 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 21 06:09:01.950500 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 21 06:09:01.953801 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 21 06:09:01.953998 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 06:09:01.958603 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 21 06:09:01.962809 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 21 06:09:01.969586 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 06:09:01.975058 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 21 06:09:01.979977 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 21 06:09:01.983772 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jun 21 06:09:01.985791 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 21 06:09:01.989733 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 06:09:02.001453 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 06:09:02.008240 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 06:09:02.016836 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 21 06:09:02.025549 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 06:09:02.031062 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 06:09:02.047718 kernel: loop0: detected capacity change from 0 to 113872 Jun 21 06:09:02.048950 systemd-journald[1161]: Time spent on flushing to /var/log/journal/c1ba5fc38c6a4d7db8545dc2519ba88c is 104.573ms for 956 entries. Jun 21 06:09:02.048950 systemd-journald[1161]: System Journal (/var/log/journal/c1ba5fc38c6a4d7db8545dc2519ba88c) is 8M, max 584.8M, 576.8M free. Jun 21 06:09:02.195174 systemd-journald[1161]: Received client request to flush runtime journal. Jun 21 06:09:02.062382 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 06:09:02.066286 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 06:09:02.078928 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 21 06:09:02.174697 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 06:09:02.200732 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 06:09:02.204742 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 06:09:02.205100 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 06:09:02.234726 kernel: loop1: detected capacity change from 0 to 221472 Jun 21 06:09:02.240418 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 06:09:02.256962 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 06:09:02.264870 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 06:09:02.330072 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Jun 21 06:09:02.330592 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Jun 21 06:09:02.335902 kernel: loop2: detected capacity change from 0 to 146240 Jun 21 06:09:02.348473 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 06:09:02.357315 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 06:09:02.408650 kernel: loop3: detected capacity change from 0 to 52072 Jun 21 06:09:02.502646 kernel: loop4: detected capacity change from 0 to 113872 Jun 21 06:09:02.541301 kernel: loop5: detected capacity change from 0 to 221472 Jun 21 06:09:02.582646 kernel: loop6: detected capacity change from 0 to 146240 Jun 21 06:09:02.629667 kernel: loop7: detected capacity change from 0 to 52072 Jun 21 06:09:02.661988 (sd-merge)[1236]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jun 21 06:09:02.663980 (sd-merge)[1236]: Merged extensions into '/usr'. Jun 21 06:09:02.673087 systemd[1]: Reload requested from client PID 1213 ('systemd-sysext') (unit systemd-sysext.service)... 
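The (sd-merge) lines above report systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-gce' extension images onto /usr and /opt, which triggers the service reload that follows. As a rough sketch only, the snippet below lists raw sysext images in a few of the directories systemd-sysext consults; on this host the kubernetes.raw symlink written by Ignition earlier would show up under /etc/extensions.

from pathlib import Path

# Some of the directories systemd-sysext consults for extension images;
# directories that do not exist are simply skipped.
SCAN_DIRS = [Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")]

for ext_dir in SCAN_DIRS:
    if not ext_dir.is_dir():
        continue
    for image in sorted(ext_dir.glob("*.raw")):
        # Resolve symlinks such as kubernetes.raw -> /opt/extensions/kubernetes/...
        print(f"{image} -> {image.resolve()}")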
Jun 21 06:09:02.673267 systemd[1]: Reloading... Jun 21 06:09:02.855819 zram_generator::config[1258]: No configuration found. Jun 21 06:09:03.085746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 06:09:03.203090 ldconfig[1208]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 21 06:09:03.307968 systemd[1]: Reloading finished in 633 ms. Jun 21 06:09:03.326224 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 06:09:03.331698 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 06:09:03.351829 systemd[1]: Starting ensure-sysext.service... Jun 21 06:09:03.355124 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 06:09:03.390768 systemd[1]: Reload requested from client PID 1302 ('systemctl') (unit ensure-sysext.service)... Jun 21 06:09:03.390800 systemd[1]: Reloading... Jun 21 06:09:03.401444 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 06:09:03.403002 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 06:09:03.403432 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 06:09:03.406064 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 06:09:03.410237 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 06:09:03.412515 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Jun 21 06:09:03.412748 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Jun 21 06:09:03.423817 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 06:09:03.424194 systemd-tmpfiles[1303]: Skipping /boot Jun 21 06:09:03.459929 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 06:09:03.459954 systemd-tmpfiles[1303]: Skipping /boot Jun 21 06:09:03.553655 zram_generator::config[1336]: No configuration found. Jun 21 06:09:03.690810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 06:09:03.808982 systemd[1]: Reloading finished in 417 ms. Jun 21 06:09:03.833025 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 06:09:03.857014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 06:09:03.871134 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 06:09:03.878589 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 06:09:03.886763 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 06:09:03.895968 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 06:09:03.904655 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 21 06:09:03.911001 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 06:09:03.921569 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:09:03.922815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 06:09:03.930027 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 06:09:03.937554 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 06:09:03.945756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 06:09:03.948877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 06:09:03.949091 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 06:09:03.949268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:09:03.959063 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 06:09:03.962580 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 06:09:03.963671 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 06:09:03.984654 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:09:03.985329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 06:09:03.997201 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 06:09:04.001074 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 06:09:04.001686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 06:09:04.002002 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:09:04.010692 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 06:09:04.029263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:09:04.029745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 06:09:04.039588 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 06:09:04.045199 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 21 06:09:04.047892 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 21 06:09:04.048520 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 06:09:04.049892 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 06:09:04.052876 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:09:04.069205 systemd[1]: Finished ensure-sysext.service. Jun 21 06:09:04.073414 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 06:09:04.074776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 06:09:04.089390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 06:09:04.089727 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 06:09:04.092980 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 06:09:04.101159 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 06:09:04.116472 systemd-udevd[1377]: Using default interface naming scheme 'v255'. Jun 21 06:09:04.119086 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 21 06:09:04.127422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 06:09:04.127961 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 06:09:04.138529 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 06:09:04.138849 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 06:09:04.148547 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 06:09:04.170033 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jun 21 06:09:04.178754 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 06:09:04.182349 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 06:09:04.189839 augenrules[1422]: No rules Jun 21 06:09:04.192307 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 06:09:04.192965 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 06:09:04.203201 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 06:09:04.217255 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 06:09:04.242648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 06:09:04.254115 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jun 21 06:09:04.265330 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 06:09:04.287960 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 06:09:04.330573 systemd-resolved[1375]: Positive Trust Anchors: Jun 21 06:09:04.331670 systemd-resolved[1375]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 06:09:04.331884 systemd-resolved[1375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 06:09:04.354257 systemd-resolved[1375]: Defaulting to hostname 'linux'. Jun 21 06:09:04.362883 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 06:09:04.373853 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 06:09:04.384800 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 06:09:04.394466 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 06:09:04.404855 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 06:09:04.415463 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 06:09:04.426320 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 06:09:04.434938 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 06:09:04.445793 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 06:09:04.456791 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 06:09:04.456852 systemd[1]: Reached target paths.target - Path Units. Jun 21 06:09:04.464807 systemd[1]: Reached target timers.target - Timer Units. Jun 21 06:09:04.474200 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 06:09:04.487016 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 06:09:04.505000 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 06:09:04.516068 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 06:09:04.526886 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 06:09:04.548567 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 06:09:04.558571 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 06:09:04.579522 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 06:09:04.599979 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Jun 21 06:09:04.601218 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 06:09:04.611479 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 06:09:04.621773 systemd[1]: Reached target basic.target - Basic System. Jun 21 06:09:04.631832 systemd[1]: Reached target tpm2.target - Trusted Platform Module. 
Jun 21 06:09:04.635257 systemd-networkd[1454]: lo: Link UP Jun 21 06:09:04.635662 systemd-networkd[1454]: lo: Gained carrier Jun 21 06:09:04.639097 systemd-networkd[1454]: Enumeration completed Jun 21 06:09:04.640822 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 06:09:04.640868 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 06:09:04.645317 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 06:09:04.645364 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 21 06:09:04.646581 systemd-networkd[1454]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 06:09:04.647763 systemd-networkd[1454]: eth0: Link UP Jun 21 06:09:04.648870 systemd-networkd[1454]: eth0: Gained carrier Jun 21 06:09:04.648908 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 06:09:04.657440 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 06:09:04.663709 systemd-networkd[1454]: eth0: DHCPv4 address 10.128.0.41/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jun 21 06:09:04.672512 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 06:09:04.685480 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 06:09:04.699975 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 21 06:09:04.708044 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 06:09:04.721303 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 21 06:09:04.733764 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 06:09:04.747810 systemd[1]: Started ntpd.service - Network Time Service. Jun 21 06:09:04.761837 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 06:09:04.777011 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 06:09:04.792952 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 06:09:04.810093 oslogin_cache_refresh[1486]: Refreshing passwd entry cache Jun 21 06:09:04.813201 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Refreshing passwd entry cache Jun 21 06:09:04.830923 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 06:09:04.835727 jq[1484]: false Jun 21 06:09:04.841195 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jun 21 06:09:04.842431 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 06:09:04.854972 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 06:09:04.856940 oslogin_cache_refresh[1486]: Failure getting users, quitting Jun 21 06:09:04.858245 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Failure getting users, quitting Jun 21 06:09:04.858245 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
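systemd-networkd above acquires 10.128.0.41/32 with gateway 10.128.0.1 from 169.254.169.254; the /32 prefix means the gateway does not sit inside the interface's own subnet, which is typical of GCE-style networking and relies on an on-link host route. A tiny check with the standard ipaddress module, using the values from the log:

import ipaddress

iface = ipaddress.ip_interface("10.128.0.41/32")
gateway = ipaddress.ip_address("10.128.0.1")

print("address:", iface.ip, "prefix length:", iface.network.prefixlen)
# With a /32, the gateway cannot fall inside the interface network, so the
# lease has to install an explicit on-link route to reach it.
print("gateway within interface network?", gateway in iface.network)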
Jun 21 06:09:04.858245 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Refreshing group entry cache Jun 21 06:09:04.856972 oslogin_cache_refresh[1486]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 06:09:04.857075 oslogin_cache_refresh[1486]: Refreshing group entry cache Jun 21 06:09:04.876571 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Failure getting groups, quitting Jun 21 06:09:04.876571 google_oslogin_nss_cache[1486]: oslogin_cache_refresh[1486]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 06:09:04.874880 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 06:09:04.865430 oslogin_cache_refresh[1486]: Failure getting groups, quitting Jun 21 06:09:04.865451 oslogin_cache_refresh[1486]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 06:09:04.885726 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 06:09:04.897539 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jun 21 06:09:04.898901 extend-filesystems[1485]: Found /dev/sda6 Jun 21 06:09:04.967058 kernel: ACPI: button: Power Button [PWRF] Jun 21 06:09:04.967123 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jun 21 06:09:04.967156 kernel: ACPI: button: Sleep Button [SLPF] Jun 21 06:09:04.967184 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 06:09:04.967249 coreos-metadata[1481]: Jun 21 06:09:04.963 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jun 21 06:09:04.903830 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 06:09:04.967825 extend-filesystems[1485]: Found /dev/sda9 Jun 21 06:09:04.967825 extend-filesystems[1485]: Checking size of /dev/sda9 Jun 21 06:09:04.920266 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 06:09:05.012901 coreos-metadata[1481]: Jun 21 06:09:04.970 INFO Fetch successful Jun 21 06:09:05.012901 coreos-metadata[1481]: Jun 21 06:09:04.970 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jun 21 06:09:05.012901 coreos-metadata[1481]: Jun 21 06:09:04.971 INFO Fetch successful Jun 21 06:09:05.012901 coreos-metadata[1481]: Jun 21 06:09:04.971 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jun 21 06:09:05.012901 coreos-metadata[1481]: Jun 21 06:09:04.984 INFO Fetch successful Jun 21 06:09:05.012901 coreos-metadata[1481]: Jun 21 06:09:04.984 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jun 21 06:09:05.012901 coreos-metadata[1481]: Jun 21 06:09:04.985 INFO Fetch successful Jun 21 06:09:05.013204 extend-filesystems[1485]: Resized partition /dev/sda9 Jun 21 06:09:04.920696 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 06:09:05.027152 update_engine[1502]: I20250621 06:09:05.026336 1502 main.cc:92] Flatcar Update Engine starting Jun 21 06:09:04.921202 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 06:09:04.922815 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 21 06:09:04.932070 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 06:09:04.933696 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
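The coreos-metadata fetches above hit the GCE metadata server directly. As a hedged sketch of the same requests (not the agent's actual code), the snippet below queries a few of the paths seen in the log; the GCE metadata service requires the Metadata-Flavor: Google header, and the error handling here is deliberately minimal.

import urllib.request

BASE = "http://169.254.169.254/computeMetadata/v1"

def fetch(path: str) -> str:
    # The metadata server rejects requests that lack this header.
    req = urllib.request.Request(f"{BASE}/{path}",
                                 headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

for path in ("instance/hostname",
             "instance/network-interfaces/0/ip",
             "instance/machine-type"):
    print(path, "=>", fetch(path))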
Jun 21 06:09:04.957382 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 06:09:04.958805 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 06:09:05.046648 jq[1505]: true Jun 21 06:09:05.046905 extend-filesystems[1523]: resize2fs 1.47.2 (1-Jan-2025) Jun 21 06:09:05.060942 jq[1526]: true Jun 21 06:09:05.095362 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jun 21 06:09:05.095455 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jun 21 06:09:05.112688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jun 21 06:09:05.125243 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jun 21 06:09:05.132529 systemd[1]: Reached target network.target - Network. Jun 21 06:09:05.145213 extend-filesystems[1523]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jun 21 06:09:05.145213 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 2 Jun 21 06:09:05.145213 extend-filesystems[1523]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jun 21 06:09:05.175159 extend-filesystems[1485]: Resized filesystem in /dev/sda9 Jun 21 06:09:05.148856 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 23:20:46 UTC 2025 (1): Starting Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: ---------------------------------------------------- Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: ntp-4 is maintained by Network Time Foundation, Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: corporation. Support and training for ntp-4 are Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: available at https://www.nwtime.org/support Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: ---------------------------------------------------- Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: proto: precision = 0.078 usec (-24) Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: basedate set to 2025-06-08 Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: gps base set to 2025-06-08 (week 2370) Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: Listen and drop on 0 v6wildcard [::]:123 Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: Listen normally on 2 lo 127.0.0.1:123 Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: Listen normally on 3 eth0 10.128.0.41:123 Jun 21 06:09:05.193960 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: Listen normally on 4 lo [::1]:123 Jun 21 06:09:05.147187 ntpd[1490]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 23:20:46 UTC 2025 (1): Starting Jun 21 06:09:05.188416 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
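extend-filesystems above grows the root filesystem on /dev/sda9 online, from 1617920 to 2538491 blocks of 4 KiB each. A quick back-of-the-envelope check of those figures:

old_blocks, new_blocks, block_size = 1_617_920, 2_538_491, 4096

def to_gib(blocks: int) -> float:
    # Convert a block count into GiB using the 4 KiB block size from the log.
    return blocks * block_size / 2**30

print(f"before: {to_gib(old_blocks):.2f} GiB")
print(f"after:  {to_gib(new_blocks):.2f} GiB")
print(f"growth: {to_gib(new_blocks - old_blocks):.2f} GiB")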
Jun 21 06:09:05.147222 ntpd[1490]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 21 06:09:05.208876 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: bind(21) AF_INET6 fe80::4001:aff:fe80:29%2#123 flags 0x11 failed: Cannot assign requested address Jun 21 06:09:05.208876 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:29%2#123 Jun 21 06:09:05.208876 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: failed to init interface for address fe80::4001:aff:fe80:29%2 Jun 21 06:09:05.208876 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: Listening on routing socket on fd #21 for interface updates Jun 21 06:09:05.147236 ntpd[1490]: ---------------------------------------------------- Jun 21 06:09:05.147250 ntpd[1490]: ntp-4 is maintained by Network Time Foundation, Jun 21 06:09:05.147264 ntpd[1490]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 21 06:09:05.147276 ntpd[1490]: corporation. Support and training for ntp-4 are Jun 21 06:09:05.147289 ntpd[1490]: available at https://www.nwtime.org/support Jun 21 06:09:05.147302 ntpd[1490]: ---------------------------------------------------- Jun 21 06:09:05.160097 ntpd[1490]: proto: precision = 0.078 usec (-24) Jun 21 06:09:05.169914 ntpd[1490]: basedate set to 2025-06-08 Jun 21 06:09:05.169944 ntpd[1490]: gps base set to 2025-06-08 (week 2370) Jun 21 06:09:05.189034 ntpd[1490]: Listen and drop on 0 v6wildcard [::]:123 Jun 21 06:09:05.189099 ntpd[1490]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 21 06:09:05.189326 ntpd[1490]: Listen normally on 2 lo 127.0.0.1:123 Jun 21 06:09:05.189391 ntpd[1490]: Listen normally on 3 eth0 10.128.0.41:123 Jun 21 06:09:05.189461 ntpd[1490]: Listen normally on 4 lo [::1]:123 Jun 21 06:09:05.199096 ntpd[1490]: bind(21) AF_INET6 fe80::4001:aff:fe80:29%2#123 flags 0x11 failed: Cannot assign requested address Jun 21 06:09:05.199143 ntpd[1490]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:29%2#123 Jun 21 06:09:05.199165 ntpd[1490]: failed to init interface for address fe80::4001:aff:fe80:29%2 Jun 21 06:09:05.199220 ntpd[1490]: Listening on routing socket on fd #21 for interface updates Jun 21 06:09:05.214483 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 06:09:05.226467 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 06:09:05.230403 ntpd[1490]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 21 06:09:05.232431 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 21 06:09:05.232431 ntpd[1490]: 21 Jun 06:09:05 ntpd[1490]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 21 06:09:05.230449 ntpd[1490]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 21 06:09:05.238825 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 06:09:05.239175 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 06:09:05.302072 tar[1513]: linux-amd64/helm Jun 21 06:09:05.331344 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 21 06:09:05.331806 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 21 06:09:05.346035 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jun 21 06:09:05.356212 bash[1571]: Updated "/home/core/.ssh/authorized_keys" Jun 21 06:09:05.357367 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 06:09:05.374948 systemd[1]: Starting sshkeys.service... Jun 21 06:09:05.401043 (ntainerd)[1569]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 06:09:05.431956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 06:09:05.490204 dbus-daemon[1482]: [system] SELinux support is enabled Jun 21 06:09:05.490484 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 06:09:05.506233 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 06:09:05.506300 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 06:09:05.517130 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 06:09:05.521265 dbus-daemon[1482]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1454 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 21 06:09:05.517174 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 06:09:05.530798 update_engine[1502]: I20250621 06:09:05.530517 1502 update_check_scheduler.cc:74] Next update check in 8m43s Jun 21 06:09:05.534838 systemd[1]: Started update-engine.service - Update Engine. Jun 21 06:09:05.540170 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 21 06:09:05.564401 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 21 06:09:05.575903 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 06:09:05.606603 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 21 06:09:05.617131 kernel: EDAC MC: Ver: 3.0.0 Jun 21 06:09:05.624066 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 21 06:09:05.635526 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 06:09:05.906157 coreos-metadata[1582]: Jun 21 06:09:05.905 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jun 21 06:09:05.909379 systemd-logind[1496]: Watching system buttons on /dev/input/event2 (Power Button) Jun 21 06:09:05.909423 systemd-logind[1496]: Watching system buttons on /dev/input/event3 (Sleep Button) Jun 21 06:09:05.909456 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 06:09:05.917777 systemd-networkd[1454]: eth0: Gained IPv6LL Jun 21 06:09:05.941774 coreos-metadata[1582]: Jun 21 06:09:05.929 INFO Fetch failed with 404: resource not found Jun 21 06:09:05.941774 coreos-metadata[1582]: Jun 21 06:09:05.932 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jun 21 06:09:05.932950 systemd-logind[1496]: New seat seat0. 
Jun 21 06:09:05.946752 coreos-metadata[1582]: Jun 21 06:09:05.946 INFO Fetch successful Jun 21 06:09:05.946752 coreos-metadata[1582]: Jun 21 06:09:05.946 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jun 21 06:09:05.958730 coreos-metadata[1582]: Jun 21 06:09:05.948 INFO Fetch failed with 404: resource not found Jun 21 06:09:05.958730 coreos-metadata[1582]: Jun 21 06:09:05.954 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jun 21 06:09:05.948480 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 06:09:05.963379 coreos-metadata[1582]: Jun 21 06:09:05.963 INFO Fetch failed with 404: resource not found Jun 21 06:09:05.963379 coreos-metadata[1582]: Jun 21 06:09:05.963 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jun 21 06:09:05.966384 coreos-metadata[1582]: Jun 21 06:09:05.965 INFO Fetch successful Jun 21 06:09:05.969697 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 06:09:05.978692 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 06:09:05.980719 unknown[1582]: wrote ssh authorized keys file for user: core Jun 21 06:09:05.997502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:09:06.006356 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 06:09:06.012853 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jun 21 06:09:06.114804 update-ssh-keys[1604]: Updated "/home/core/.ssh/authorized_keys" Jun 21 06:09:06.113700 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 21 06:09:06.120019 systemd[1]: Finished sshkeys.service. Jun 21 06:09:06.125237 init.sh[1603]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jun 21 06:09:06.125237 init.sh[1603]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jun 21 06:09:06.125237 init.sh[1603]: + /usr/bin/google_instance_setup Jun 21 06:09:06.166088 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 06:09:06.199215 locksmithd[1580]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 06:09:06.206845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:09:06.217730 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 21 06:09:06.360853 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 06:09:06.375004 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 06:09:06.387821 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 21 06:09:06.392569 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 21 06:09:06.405300 dbus-daemon[1482]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1579 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 21 06:09:06.421993 systemd[1]: Starting polkit.service - Authorization Manager... Jun 21 06:09:06.468221 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 06:09:06.468556 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 06:09:06.483024 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jun 21 06:09:06.548604 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 06:09:06.566813 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 06:09:06.579317 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 21 06:09:06.589866 systemd[1]: Reached target getty.target - Login Prompts. Jun 21 06:09:06.610217 containerd[1569]: time="2025-06-21T06:09:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 06:09:06.617640 containerd[1569]: time="2025-06-21T06:09:06.617164038Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 06:09:06.669642 containerd[1569]: time="2025-06-21T06:09:06.669379184Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.046µs" Jun 21 06:09:06.669642 containerd[1569]: time="2025-06-21T06:09:06.669430483Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 06:09:06.669642 containerd[1569]: time="2025-06-21T06:09:06.669471738Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 06:09:06.670751 containerd[1569]: time="2025-06-21T06:09:06.670405764Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 06:09:06.670751 containerd[1569]: time="2025-06-21T06:09:06.670461903Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 06:09:06.670751 containerd[1569]: time="2025-06-21T06:09:06.670504004Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 06:09:06.670751 containerd[1569]: time="2025-06-21T06:09:06.670596461Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 06:09:06.672642 containerd[1569]: time="2025-06-21T06:09:06.671646313Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 06:09:06.672642 containerd[1569]: time="2025-06-21T06:09:06.671993741Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 06:09:06.672642 containerd[1569]: time="2025-06-21T06:09:06.672018929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 06:09:06.672642 containerd[1569]: time="2025-06-21T06:09:06.672037862Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 06:09:06.672642 containerd[1569]: time="2025-06-21T06:09:06.672052232Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 06:09:06.672642 containerd[1569]: time="2025-06-21T06:09:06.672164288Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 06:09:06.672642 containerd[1569]: time="2025-06-21T06:09:06.672460156Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 06:09:06.672642 containerd[1569]: time="2025-06-21T06:09:06.672505231Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 06:09:06.672642 containerd[1569]: time="2025-06-21T06:09:06.672522413Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 06:09:06.675666 containerd[1569]: time="2025-06-21T06:09:06.674373938Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 06:09:06.676696 containerd[1569]: time="2025-06-21T06:09:06.676663835Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 06:09:06.677013 containerd[1569]: time="2025-06-21T06:09:06.676933535Z" level=info msg="metadata content store policy set" policy=shared Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.687890267Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688004039Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688032476Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688100199Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688120987Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688139898Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688159137Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688182544Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688211600Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688239388Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688258714Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688280930Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688466568Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 06:09:06.688658 containerd[1569]: time="2025-06-21T06:09:06.688497742Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 06:09:06.689319 containerd[1569]: time="2025-06-21T06:09:06.688520297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 06:09:06.689319 containerd[1569]: time="2025-06-21T06:09:06.688538425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 06:09:06.689319 containerd[1569]: time="2025-06-21T06:09:06.688557297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 06:09:06.689319 containerd[1569]: time="2025-06-21T06:09:06.688586291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 06:09:06.691386 containerd[1569]: time="2025-06-21T06:09:06.688605856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 06:09:06.691386 containerd[1569]: time="2025-06-21T06:09:06.689925521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 06:09:06.691386 containerd[1569]: time="2025-06-21T06:09:06.689955909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 06:09:06.691386 containerd[1569]: time="2025-06-21T06:09:06.689993761Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 06:09:06.691386 containerd[1569]: time="2025-06-21T06:09:06.690012911Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 06:09:06.691386 containerd[1569]: time="2025-06-21T06:09:06.690116330Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 06:09:06.691386 containerd[1569]: time="2025-06-21T06:09:06.690141116Z" level=info msg="Start snapshots syncer" Jun 21 06:09:06.691386 containerd[1569]: time="2025-06-21T06:09:06.690787173Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 06:09:06.692643 containerd[1569]: time="2025-06-21T06:09:06.691188546Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 06:09:06.692643 containerd[1569]: time="2025-06-21T06:09:06.691286776Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.695757540Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.695970411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.696052321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.696077215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.696098231Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.696121624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.696143110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.696187485Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.696247561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 06:09:06.698696 containerd[1569]: 
time="2025-06-21T06:09:06.696273501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.696306608Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.697730925Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.697773397Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 06:09:06.698696 containerd[1569]: time="2025-06-21T06:09:06.697792859Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 06:09:06.699315 containerd[1569]: time="2025-06-21T06:09:06.697825309Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 06:09:06.699315 containerd[1569]: time="2025-06-21T06:09:06.697840536Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 06:09:06.699315 containerd[1569]: time="2025-06-21T06:09:06.697858000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 06:09:06.699315 containerd[1569]: time="2025-06-21T06:09:06.697877081Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 06:09:06.699315 containerd[1569]: time="2025-06-21T06:09:06.697903357Z" level=info msg="runtime interface created" Jun 21 06:09:06.699315 containerd[1569]: time="2025-06-21T06:09:06.697914005Z" level=info msg="created NRI interface" Jun 21 06:09:06.699315 containerd[1569]: time="2025-06-21T06:09:06.697931141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 06:09:06.699315 containerd[1569]: time="2025-06-21T06:09:06.697952722Z" level=info msg="Connect containerd service" Jun 21 06:09:06.699315 containerd[1569]: time="2025-06-21T06:09:06.697998977Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 06:09:06.704457 containerd[1569]: time="2025-06-21T06:09:06.702846138Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 06:09:06.812268 polkitd[1630]: Started polkitd version 126 Jun 21 06:09:06.833304 polkitd[1630]: Loading rules from directory /etc/polkit-1/rules.d Jun 21 06:09:06.836203 polkitd[1630]: Loading rules from directory /run/polkit-1/rules.d Jun 21 06:09:06.838897 polkitd[1630]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 21 06:09:06.839705 polkitd[1630]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jun 21 06:09:06.839753 polkitd[1630]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 21 06:09:06.839819 polkitd[1630]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 21 06:09:06.842361 polkitd[1630]: 
Finished loading, compiling and executing 2 rules Jun 21 06:09:06.842853 systemd[1]: Started polkit.service - Authorization Manager. Jun 21 06:09:06.848383 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 21 06:09:06.853372 polkitd[1630]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 21 06:09:06.898662 systemd-hostnamed[1579]: Hostname set to (transient) Jun 21 06:09:06.902891 systemd-resolved[1375]: System hostname changed to 'ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal'. Jun 21 06:09:07.018394 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 06:09:07.033821 systemd[1]: Started sshd@0-10.128.0.41:22-147.75.109.163:51374.service - OpenSSH per-connection server daemon (147.75.109.163:51374). Jun 21 06:09:07.125552 containerd[1569]: time="2025-06-21T06:09:07.125492924Z" level=info msg="Start subscribing containerd event" Jun 21 06:09:07.128896 containerd[1569]: time="2025-06-21T06:09:07.126732530Z" level=info msg="Start recovering state" Jun 21 06:09:07.128896 containerd[1569]: time="2025-06-21T06:09:07.126919956Z" level=info msg="Start event monitor" Jun 21 06:09:07.128896 containerd[1569]: time="2025-06-21T06:09:07.126946435Z" level=info msg="Start cni network conf syncer for default" Jun 21 06:09:07.128896 containerd[1569]: time="2025-06-21T06:09:07.126959520Z" level=info msg="Start streaming server" Jun 21 06:09:07.128896 containerd[1569]: time="2025-06-21T06:09:07.126973109Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 06:09:07.128896 containerd[1569]: time="2025-06-21T06:09:07.126985518Z" level=info msg="runtime interface starting up..." Jun 21 06:09:07.128896 containerd[1569]: time="2025-06-21T06:09:07.126996418Z" level=info msg="starting plugins..." Jun 21 06:09:07.128896 containerd[1569]: time="2025-06-21T06:09:07.127017388Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 06:09:07.134121 containerd[1569]: time="2025-06-21T06:09:07.131319843Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 21 06:09:07.134121 containerd[1569]: time="2025-06-21T06:09:07.131478618Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 21 06:09:07.139207 tar[1513]: linux-amd64/LICENSE Jun 21 06:09:07.140478 tar[1513]: linux-amd64/README.md Jun 21 06:09:07.142951 containerd[1569]: time="2025-06-21T06:09:07.141401414Z" level=info msg="containerd successfully booted in 0.532048s" Jun 21 06:09:07.141553 systemd[1]: Started containerd.service - containerd container runtime. Jun 21 06:09:07.182653 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 21 06:09:07.383219 instance-setup[1610]: INFO Running google_set_multiqueue. Jun 21 06:09:07.406349 instance-setup[1610]: INFO Set channels for eth0 to 2. Jun 21 06:09:07.412159 instance-setup[1610]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jun 21 06:09:07.413792 instance-setup[1610]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jun 21 06:09:07.416296 instance-setup[1610]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jun 21 06:09:07.416358 instance-setup[1610]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jun 21 06:09:07.416851 instance-setup[1610]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. 
Jun 21 06:09:07.420905 instance-setup[1610]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jun 21 06:09:07.421448 instance-setup[1610]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jun 21 06:09:07.423610 instance-setup[1610]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jun 21 06:09:07.435769 instance-setup[1610]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jun 21 06:09:07.439777 instance-setup[1610]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jun 21 06:09:07.441717 instance-setup[1610]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jun 21 06:09:07.442239 instance-setup[1610]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jun 21 06:09:07.464550 init.sh[1603]: + /usr/bin/google_metadata_script_runner --script-type startup Jun 21 06:09:07.498463 sshd[1660]: Accepted publickey for core from 147.75.109.163 port 51374 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:09:07.500466 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:09:07.514222 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 06:09:07.527058 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 21 06:09:07.582697 systemd-logind[1496]: New session 1 of user core. Jun 21 06:09:07.595111 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 21 06:09:07.617350 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 21 06:09:07.653857 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 06:09:07.660552 systemd-logind[1496]: New session c1 of user core. Jun 21 06:09:07.689077 startup-script[1696]: INFO Starting startup scripts. Jun 21 06:09:07.694848 startup-script[1696]: INFO No startup scripts found in metadata. Jun 21 06:09:07.694939 startup-script[1696]: INFO Finished running startup scripts. Jun 21 06:09:07.726490 init.sh[1603]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jun 21 06:09:07.726490 init.sh[1603]: + daemon_pids=() Jun 21 06:09:07.726490 init.sh[1603]: + for d in accounts clock_skew network Jun 21 06:09:07.727153 init.sh[1603]: + daemon_pids+=($!) Jun 21 06:09:07.727153 init.sh[1603]: + for d in accounts clock_skew network Jun 21 06:09:07.727257 init.sh[1707]: + /usr/bin/google_accounts_daemon Jun 21 06:09:07.727573 init.sh[1603]: + daemon_pids+=($!) Jun 21 06:09:07.727573 init.sh[1603]: + for d in accounts clock_skew network Jun 21 06:09:07.727573 init.sh[1603]: + daemon_pids+=($!) Jun 21 06:09:07.727573 init.sh[1603]: + NOTIFY_SOCKET=/run/systemd/notify Jun 21 06:09:07.727573 init.sh[1603]: + /usr/bin/systemd-notify --ready Jun 21 06:09:07.728232 init.sh[1708]: + /usr/bin/google_clock_skew_daemon Jun 21 06:09:07.730833 init.sh[1709]: + /usr/bin/google_network_daemon Jun 21 06:09:07.746825 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jun 21 06:09:07.767448 init.sh[1603]: + wait -n 1707 1708 1709 Jun 21 06:09:08.074938 systemd[1701]: Queued start job for default target default.target. Jun 21 06:09:08.082181 systemd[1701]: Created slice app.slice - User Application Slice. Jun 21 06:09:08.082445 systemd[1701]: Reached target paths.target - Paths. Jun 21 06:09:08.082678 systemd[1701]: Reached target timers.target - Timers. 
Jun 21 06:09:08.086754 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 06:09:08.133987 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 06:09:08.134208 systemd[1701]: Reached target sockets.target - Sockets. Jun 21 06:09:08.134729 systemd[1701]: Reached target basic.target - Basic System. Jun 21 06:09:08.135026 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 06:09:08.135365 systemd[1701]: Reached target default.target - Main User Target. Jun 21 06:09:08.135426 systemd[1701]: Startup finished in 457ms. Jun 21 06:09:08.147792 ntpd[1490]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:29%2]:123 Jun 21 06:09:08.148645 ntpd[1490]: 21 Jun 06:09:08 ntpd[1490]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:29%2]:123 Jun 21 06:09:08.150392 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 06:09:08.358637 google-networking[1709]: INFO Starting Google Networking daemon. Jun 21 06:09:08.374826 groupadd[1720]: group added to /etc/group: name=google-sudoers, GID=1000 Jun 21 06:09:08.380003 groupadd[1720]: group added to /etc/gshadow: name=google-sudoers Jun 21 06:09:08.386165 google-clock-skew[1708]: INFO Starting Google Clock Skew daemon. Jun 21 06:09:08.399939 google-clock-skew[1708]: INFO Clock drift token has changed: 0. Jun 21 06:09:08.405520 systemd[1]: Started sshd@1-10.128.0.41:22-147.75.109.163:51384.service - OpenSSH per-connection server daemon (147.75.109.163:51384). Jun 21 06:09:08.460950 groupadd[1720]: new group: name=google-sudoers, GID=1000 Jun 21 06:09:08.491786 google-accounts[1707]: INFO Starting Google Accounts daemon. Jun 21 06:09:08.505149 google-accounts[1707]: WARNING OS Login not installed. Jun 21 06:09:08.507407 google-accounts[1707]: INFO Creating a new user account for 0. Jun 21 06:09:08.512368 init.sh[1734]: useradd: invalid user name '0': use --badname to ignore Jun 21 06:09:08.512702 google-accounts[1707]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jun 21 06:09:08.650749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:09:08.661532 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 21 06:09:08.670330 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 06:09:08.670985 systemd[1]: Startup finished in 4.281s (kernel) + 7.583s (initrd) + 8.307s (userspace) = 20.171s. Jun 21 06:09:08.738605 sshd[1728]: Accepted publickey for core from 147.75.109.163 port 51384 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:09:08.740081 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:09:08.750363 systemd-logind[1496]: New session 2 of user core. Jun 21 06:09:08.756907 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 21 06:09:08.956737 sshd[1746]: Connection closed by 147.75.109.163 port 51384 Jun 21 06:09:08.958141 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jun 21 06:09:08.964799 systemd[1]: sshd@1-10.128.0.41:22-147.75.109.163:51384.service: Deactivated successfully. Jun 21 06:09:08.967863 systemd[1]: session-2.scope: Deactivated successfully. Jun 21 06:09:08.970869 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit. Jun 21 06:09:08.972599 systemd-logind[1496]: Removed session 2. 
Jun 21 06:09:09.011127 systemd[1]: Started sshd@2-10.128.0.41:22-147.75.109.163:51396.service - OpenSSH per-connection server daemon (147.75.109.163:51396). Jun 21 06:09:09.000633 google-clock-skew[1708]: INFO Synced system time with hardware clock. Jun 21 06:09:09.016077 systemd-journald[1161]: Time jumped backwards, rotating. Jun 21 06:09:09.000690 systemd-resolved[1375]: Clock change detected. Flushing caches. Jun 21 06:09:09.219824 sshd[1757]: Accepted publickey for core from 147.75.109.163 port 51396 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:09:09.221961 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:09:09.230921 systemd-logind[1496]: New session 3 of user core. Jun 21 06:09:09.236044 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 21 06:09:09.432862 sshd[1760]: Connection closed by 147.75.109.163 port 51396 Jun 21 06:09:09.433964 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jun 21 06:09:09.442340 systemd[1]: sshd@2-10.128.0.41:22-147.75.109.163:51396.service: Deactivated successfully. Jun 21 06:09:09.445404 systemd[1]: session-3.scope: Deactivated successfully. Jun 21 06:09:09.448911 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit. Jun 21 06:09:09.452315 systemd-logind[1496]: Removed session 3. Jun 21 06:09:09.489971 systemd[1]: Started sshd@3-10.128.0.41:22-147.75.109.163:51402.service - OpenSSH per-connection server daemon (147.75.109.163:51402). Jun 21 06:09:09.530925 kubelet[1741]: E0621 06:09:09.530791 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 06:09:09.533627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 06:09:09.533896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 06:09:09.534447 systemd[1]: kubelet.service: Consumed 1.310s CPU time, 263.4M memory peak. Jun 21 06:09:09.797219 sshd[1768]: Accepted publickey for core from 147.75.109.163 port 51402 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:09:09.799303 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:09:09.806898 systemd-logind[1496]: New session 4 of user core. Jun 21 06:09:09.818068 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 06:09:10.011086 sshd[1771]: Connection closed by 147.75.109.163 port 51402 Jun 21 06:09:10.011998 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Jun 21 06:09:10.020649 systemd[1]: sshd@3-10.128.0.41:22-147.75.109.163:51402.service: Deactivated successfully. Jun 21 06:09:10.024708 systemd[1]: session-4.scope: Deactivated successfully. Jun 21 06:09:10.027237 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit. Jun 21 06:09:10.030557 systemd-logind[1496]: Removed session 4. Jun 21 06:09:10.066136 systemd[1]: Started sshd@4-10.128.0.41:22-147.75.109.163:51404.service - OpenSSH per-connection server daemon (147.75.109.163:51404). 
Jun 21 06:09:10.386441 sshd[1777]: Accepted publickey for core from 147.75.109.163 port 51404 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:09:10.388200 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:09:10.395612 systemd-logind[1496]: New session 5 of user core. Jun 21 06:09:10.405074 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 21 06:09:10.583549 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 06:09:10.584048 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 06:09:10.598751 sudo[1780]: pam_unix(sudo:session): session closed for user root Jun 21 06:09:10.642344 sshd[1779]: Connection closed by 147.75.109.163 port 51404 Jun 21 06:09:10.643956 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Jun 21 06:09:10.648999 systemd[1]: sshd@4-10.128.0.41:22-147.75.109.163:51404.service: Deactivated successfully. Jun 21 06:09:10.651595 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 06:09:10.653791 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit. Jun 21 06:09:10.656022 systemd-logind[1496]: Removed session 5. Jun 21 06:09:10.701365 systemd[1]: Started sshd@5-10.128.0.41:22-147.75.109.163:51412.service - OpenSSH per-connection server daemon (147.75.109.163:51412). Jun 21 06:09:11.007127 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 51412 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:09:11.008685 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:09:11.016143 systemd-logind[1496]: New session 6 of user core. Jun 21 06:09:11.023088 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 21 06:09:11.188558 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 06:09:11.189052 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 06:09:11.195994 sudo[1790]: pam_unix(sudo:session): session closed for user root Jun 21 06:09:11.209446 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 06:09:11.209925 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 06:09:11.222193 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 06:09:11.278540 augenrules[1812]: No rules Jun 21 06:09:11.280601 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 06:09:11.280977 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 06:09:11.282955 sudo[1789]: pam_unix(sudo:session): session closed for user root Jun 21 06:09:11.326676 sshd[1788]: Connection closed by 147.75.109.163 port 51412 Jun 21 06:09:11.327475 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Jun 21 06:09:11.333309 systemd[1]: sshd@5-10.128.0.41:22-147.75.109.163:51412.service: Deactivated successfully. Jun 21 06:09:11.335672 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 06:09:11.336943 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit. Jun 21 06:09:11.338956 systemd-logind[1496]: Removed session 6. Jun 21 06:09:11.391173 systemd[1]: Started sshd@6-10.128.0.41:22-147.75.109.163:51420.service - OpenSSH per-connection server daemon (147.75.109.163:51420). 
Jun 21 06:09:11.693751 sshd[1821]: Accepted publickey for core from 147.75.109.163 port 51420 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:09:11.695612 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:09:11.703182 systemd-logind[1496]: New session 7 of user core. Jun 21 06:09:11.717075 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 21 06:09:11.871293 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 06:09:11.871763 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 06:09:12.409024 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 21 06:09:12.429502 (dockerd)[1842]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 06:09:12.749253 dockerd[1842]: time="2025-06-21T06:09:12.749099617Z" level=info msg="Starting up" Jun 21 06:09:12.752425 dockerd[1842]: time="2025-06-21T06:09:12.751867271Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 06:09:12.789621 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2039493672-merged.mount: Deactivated successfully. Jun 21 06:09:12.882868 dockerd[1842]: time="2025-06-21T06:09:12.882613017Z" level=info msg="Loading containers: start." Jun 21 06:09:12.901869 kernel: Initializing XFRM netlink socket Jun 21 06:09:13.235418 systemd-networkd[1454]: docker0: Link UP Jun 21 06:09:13.242805 dockerd[1842]: time="2025-06-21T06:09:13.242744666Z" level=info msg="Loading containers: done." Jun 21 06:09:13.263615 dockerd[1842]: time="2025-06-21T06:09:13.263525287Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 06:09:13.263799 dockerd[1842]: time="2025-06-21T06:09:13.263642169Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 06:09:13.263882 dockerd[1842]: time="2025-06-21T06:09:13.263796516Z" level=info msg="Initializing buildkit" Jun 21 06:09:13.296944 dockerd[1842]: time="2025-06-21T06:09:13.296870277Z" level=info msg="Completed buildkit initialization" Jun 21 06:09:13.306678 dockerd[1842]: time="2025-06-21T06:09:13.306618250Z" level=info msg="Daemon has completed initialization" Jun 21 06:09:13.306921 dockerd[1842]: time="2025-06-21T06:09:13.306717389Z" level=info msg="API listen on /run/docker.sock" Jun 21 06:09:13.307257 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 06:09:13.783980 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3889641818-merged.mount: Deactivated successfully. Jun 21 06:09:14.240231 containerd[1569]: time="2025-06-21T06:09:14.240095219Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jun 21 06:09:14.793177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425006814.mount: Deactivated successfully. 
Jun 21 06:09:16.472343 containerd[1569]: time="2025-06-21T06:09:16.472270787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:16.474944 containerd[1569]: time="2025-06-21T06:09:16.474885162Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28084372" Jun 21 06:09:16.478853 containerd[1569]: time="2025-06-21T06:09:16.477812136Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:16.481935 containerd[1569]: time="2025-06-21T06:09:16.481898542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:16.483566 containerd[1569]: time="2025-06-21T06:09:16.483527556Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.24337421s" Jun 21 06:09:16.483720 containerd[1569]: time="2025-06-21T06:09:16.483688842Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jun 21 06:09:16.484474 containerd[1569]: time="2025-06-21T06:09:16.484440907Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jun 21 06:09:17.986974 containerd[1569]: time="2025-06-21T06:09:17.986909926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:17.988418 containerd[1569]: time="2025-06-21T06:09:17.988362070Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24715228" Jun 21 06:09:17.989654 containerd[1569]: time="2025-06-21T06:09:17.989586607Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:17.992994 containerd[1569]: time="2025-06-21T06:09:17.992913077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:17.994394 containerd[1569]: time="2025-06-21T06:09:17.994214462Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.509729207s" Jun 21 06:09:17.994394 containerd[1569]: time="2025-06-21T06:09:17.994260655Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jun 21 
06:09:17.995380 containerd[1569]: time="2025-06-21T06:09:17.995021212Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jun 21 06:09:19.326250 containerd[1569]: time="2025-06-21T06:09:19.326182259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:19.327561 containerd[1569]: time="2025-06-21T06:09:19.327492635Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18785587" Jun 21 06:09:19.328809 containerd[1569]: time="2025-06-21T06:09:19.328744890Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:19.332095 containerd[1569]: time="2025-06-21T06:09:19.332034440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:19.333907 containerd[1569]: time="2025-06-21T06:09:19.333360086Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.338298157s" Jun 21 06:09:19.333907 containerd[1569]: time="2025-06-21T06:09:19.333405025Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jun 21 06:09:19.334432 containerd[1569]: time="2025-06-21T06:09:19.334322164Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jun 21 06:09:19.784416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 21 06:09:19.787223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:09:20.214821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:09:20.226361 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 06:09:20.300309 kubelet[2113]: E0621 06:09:20.300250 2113 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 06:09:20.307147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 06:09:20.307549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 06:09:20.308496 systemd[1]: kubelet.service: Consumed 228ms CPU time, 108.7M memory peak. Jun 21 06:09:20.806646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount297272384.mount: Deactivated successfully. 
Jun 21 06:09:21.431136 containerd[1569]: time="2025-06-21T06:09:21.431053818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:21.432349 containerd[1569]: time="2025-06-21T06:09:21.432288828Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30385838" Jun 21 06:09:21.433766 containerd[1569]: time="2025-06-21T06:09:21.433701853Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:21.436426 containerd[1569]: time="2025-06-21T06:09:21.436350715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:21.437539 containerd[1569]: time="2025-06-21T06:09:21.437216466Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.102853676s" Jun 21 06:09:21.437539 containerd[1569]: time="2025-06-21T06:09:21.437261128Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jun 21 06:09:21.438246 containerd[1569]: time="2025-06-21T06:09:21.438182614Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 21 06:09:21.867591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242939086.mount: Deactivated successfully. 
Jun 21 06:09:23.014645 containerd[1569]: time="2025-06-21T06:09:23.014575889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:23.016476 containerd[1569]: time="2025-06-21T06:09:23.016415625Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Jun 21 06:09:23.017854 containerd[1569]: time="2025-06-21T06:09:23.017792245Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:23.022360 containerd[1569]: time="2025-06-21T06:09:23.022251442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:23.024546 containerd[1569]: time="2025-06-21T06:09:23.023888512Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.585659404s" Jun 21 06:09:23.024546 containerd[1569]: time="2025-06-21T06:09:23.023933078Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 21 06:09:23.024873 containerd[1569]: time="2025-06-21T06:09:23.024843939Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 06:09:23.485056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount709580484.mount: Deactivated successfully. 
Jun 21 06:09:23.494370 containerd[1569]: time="2025-06-21T06:09:23.494301769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 06:09:23.495624 containerd[1569]: time="2025-06-21T06:09:23.495584232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Jun 21 06:09:23.497154 containerd[1569]: time="2025-06-21T06:09:23.497075035Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 06:09:23.500314 containerd[1569]: time="2025-06-21T06:09:23.500250621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 06:09:23.501348 containerd[1569]: time="2025-06-21T06:09:23.501308468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 476.33151ms" Jun 21 06:09:23.501741 containerd[1569]: time="2025-06-21T06:09:23.501350030Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 06:09:23.502237 containerd[1569]: time="2025-06-21T06:09:23.502193583Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 21 06:09:23.960610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3420981104.mount: Deactivated successfully. 
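[Editor's note] Unlike the other images, the pause image above also carries `io.cri-containerd.pinned=pinned`, which is intended to exempt the sandbox image from image garbage collection. A hedged sketch, under the same client assumptions as the previous example, that lists which images in the `k8s.io` namespace are pinned:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	imgs, err := client.ListImages(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs {
		labels, err := img.Labels(ctx)
		if err != nil {
			continue
		}
		// The CRI plugin marks images it must keep (e.g. the pause/sandbox image) as pinned.
		if labels["io.cri-containerd.pinned"] == "pinned" {
			fmt.Println("pinned:", img.Name())
		}
	}
}
```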
Jun 21 06:09:26.049933 containerd[1569]: time="2025-06-21T06:09:26.049858628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:26.051486 containerd[1569]: time="2025-06-21T06:09:26.051441622Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786577" Jun 21 06:09:26.052866 containerd[1569]: time="2025-06-21T06:09:26.052805277Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:26.056654 containerd[1569]: time="2025-06-21T06:09:26.056575383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:26.058519 containerd[1569]: time="2025-06-21T06:09:26.058038587Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.555681554s" Jun 21 06:09:26.058519 containerd[1569]: time="2025-06-21T06:09:26.058084318Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jun 21 06:09:29.170652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:09:29.170991 systemd[1]: kubelet.service: Consumed 228ms CPU time, 108.7M memory peak. Jun 21 06:09:29.174276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:09:29.212678 systemd[1]: Reload requested from client PID 2266 ('systemctl') (unit session-7.scope)... Jun 21 06:09:29.212936 systemd[1]: Reloading... Jun 21 06:09:29.389857 zram_generator::config[2311]: No configuration found. Jun 21 06:09:29.528114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 06:09:29.694936 systemd[1]: Reloading finished in 481 ms. Jun 21 06:09:29.768919 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 06:09:29.769054 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 06:09:29.769438 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:09:29.769527 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.3M memory peak. Jun 21 06:09:29.772786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:09:30.412060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:09:30.422447 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 06:09:30.487128 kubelet[2362]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 06:09:30.487128 kubelet[2362]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jun 21 06:09:30.487128 kubelet[2362]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 06:09:30.487859 kubelet[2362]: I0621 06:09:30.487258 2362 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 06:09:31.079032 kubelet[2362]: I0621 06:09:31.078977 2362 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 21 06:09:31.079032 kubelet[2362]: I0621 06:09:31.079014 2362 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 06:09:31.079411 kubelet[2362]: I0621 06:09:31.079373 2362 server.go:934] "Client rotation is on, will bootstrap in background" Jun 21 06:09:31.113345 kubelet[2362]: E0621 06:09:31.113274 2362 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.41:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:09:31.117777 kubelet[2362]: I0621 06:09:31.117729 2362 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 06:09:31.127186 kubelet[2362]: I0621 06:09:31.127124 2362 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 06:09:31.136852 kubelet[2362]: I0621 06:09:31.136748 2362 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 06:09:31.138920 kubelet[2362]: I0621 06:09:31.138886 2362 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 21 06:09:31.139528 kubelet[2362]: I0621 06:09:31.139273 2362 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 06:09:31.139653 kubelet[2362]: I0621 06:09:31.139330 2362 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 06:09:31.139856 kubelet[2362]: I0621 06:09:31.139673 2362 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 06:09:31.139856 kubelet[2362]: I0621 06:09:31.139692 2362 container_manager_linux.go:300] "Creating device plugin manager" Jun 21 06:09:31.139965 kubelet[2362]: I0621 06:09:31.139863 2362 state_mem.go:36] "Initialized new in-memory state store" Jun 21 06:09:31.145451 kubelet[2362]: I0621 06:09:31.145010 2362 kubelet.go:408] "Attempting to sync node with API server" Jun 21 06:09:31.145451 kubelet[2362]: I0621 06:09:31.145073 2362 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 06:09:31.145451 kubelet[2362]: I0621 06:09:31.145123 2362 kubelet.go:314] "Adding apiserver pod source" Jun 21 06:09:31.145451 kubelet[2362]: I0621 06:09:31.145151 2362 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 06:09:31.151662 kubelet[2362]: W0621 06:09:31.151578 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.41:6443: connect: connection refused Jun 21 06:09:31.151786 kubelet[2362]: E0621 06:09:31.151683 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.41:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:09:31.153868 kubelet[2362]: I0621 06:09:31.152940 2362 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 06:09:31.153868 kubelet[2362]: I0621 06:09:31.153803 2362 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 06:09:31.155879 kubelet[2362]: W0621 06:09:31.155362 2362 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 21 06:09:31.158176 kubelet[2362]: W0621 06:09:31.158116 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.41:6443: connect: connection refused Jun 21 06:09:31.158297 kubelet[2362]: I0621 06:09:31.158274 2362 server.go:1274] "Started kubelet" Jun 21 06:09:31.158409 kubelet[2362]: E0621 06:09:31.158287 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.41:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:09:31.159799 kubelet[2362]: I0621 06:09:31.159756 2362 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 06:09:31.160364 kubelet[2362]: I0621 06:09:31.160340 2362 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 06:09:31.161250 kubelet[2362]: I0621 06:09:31.161220 2362 server.go:449] "Adding debug handlers to kubelet server" Jun 21 06:09:31.165753 kubelet[2362]: I0621 06:09:31.165021 2362 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 06:09:31.165753 kubelet[2362]: I0621 06:09:31.165289 2362 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 06:09:31.166130 kubelet[2362]: I0621 06:09:31.166084 2362 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 06:09:31.168719 kubelet[2362]: I0621 06:09:31.167925 2362 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 21 06:09:31.168719 kubelet[2362]: E0621 06:09:31.168190 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" not found" Jun 21 06:09:31.169547 kubelet[2362]: E0621 06:09:31.168819 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.41:6443: connect: connection refused" interval="200ms" Jun 21 06:09:31.171970 kubelet[2362]: E0621 06:09:31.169624 2362 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.41:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.41:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal.184af9e5f01eec8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,UID:ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,},FirstTimestamp:2025-06-21 06:09:31.158244495 +0000 UTC m=+0.729951713,LastTimestamp:2025-06-21 06:09:31.158244495 +0000 UTC m=+0.729951713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,}" Jun 21 06:09:31.174871 kubelet[2362]: I0621 06:09:31.174402 2362 factory.go:221] Registration of the systemd container factory successfully Jun 21 06:09:31.174871 kubelet[2362]: I0621 06:09:31.174519 2362 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 06:09:31.176575 kubelet[2362]: I0621 06:09:31.175972 2362 reconciler.go:26] "Reconciler: start to sync state" Jun 21 06:09:31.176575 kubelet[2362]: I0621 06:09:31.176042 2362 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 21 06:09:31.176575 kubelet[2362]: W0621 06:09:31.176422 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.41:6443: connect: connection refused Jun 21 06:09:31.176575 kubelet[2362]: E0621 06:09:31.176482 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.41:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:09:31.178400 kubelet[2362]: I0621 06:09:31.178374 2362 factory.go:221] Registration of the containerd container factory successfully Jun 21 06:09:31.195675 kubelet[2362]: I0621 06:09:31.195632 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 06:09:31.197460 kubelet[2362]: I0621 06:09:31.197424 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 06:09:31.197460 kubelet[2362]: I0621 06:09:31.197462 2362 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 21 06:09:31.197580 kubelet[2362]: I0621 06:09:31.197487 2362 kubelet.go:2321] "Starting kubelet main sync loop" Jun 21 06:09:31.197580 kubelet[2362]: E0621 06:09:31.197543 2362 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 06:09:31.205917 kubelet[2362]: E0621 06:09:31.205208 2362 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 06:09:31.206028 kubelet[2362]: W0621 06:09:31.205817 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.41:6443: connect: connection refused Jun 21 06:09:31.206167 kubelet[2362]: E0621 06:09:31.206139 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.41:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:09:31.242950 kubelet[2362]: I0621 06:09:31.242912 2362 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 21 06:09:31.243085 kubelet[2362]: I0621 06:09:31.243055 2362 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 21 06:09:31.243085 kubelet[2362]: I0621 06:09:31.243081 2362 state_mem.go:36] "Initialized new in-memory state store" Jun 21 06:09:31.246200 kubelet[2362]: I0621 06:09:31.246168 2362 policy_none.go:49] "None policy: Start" Jun 21 06:09:31.247160 kubelet[2362]: I0621 06:09:31.247037 2362 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 21 06:09:31.247160 kubelet[2362]: I0621 06:09:31.247069 2362 state_mem.go:35] "Initializing new in-memory state store" Jun 21 06:09:31.260137 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 06:09:31.268529 kubelet[2362]: E0621 06:09:31.268481 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" not found" Jun 21 06:09:31.275699 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 21 06:09:31.285327 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 21 06:09:31.296880 kubelet[2362]: I0621 06:09:31.296850 2362 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 06:09:31.297273 kubelet[2362]: I0621 06:09:31.297253 2362 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 06:09:31.297863 kubelet[2362]: I0621 06:09:31.297448 2362 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 06:09:31.298299 kubelet[2362]: I0621 06:09:31.298278 2362 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 06:09:31.302019 kubelet[2362]: E0621 06:09:31.301994 2362 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" not found" Jun 21 06:09:31.318298 systemd[1]: Created slice kubepods-burstable-pod92f5cb538f0d4a38b097cb60e62abcef.slice - libcontainer container kubepods-burstable-pod92f5cb538f0d4a38b097cb60e62abcef.slice. Jun 21 06:09:31.337199 systemd[1]: Created slice kubepods-burstable-pod9d5b6c790e8a670b0dadbd87d64a2914.slice - libcontainer container kubepods-burstable-pod9d5b6c790e8a670b0dadbd87d64a2914.slice. Jun 21 06:09:31.350002 systemd[1]: Created slice kubepods-burstable-poded59345883da3e4fa722df31164f00f6.slice - libcontainer container kubepods-burstable-poded59345883da3e4fa722df31164f00f6.slice. 
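[Editor's note] The `Created slice kubepods-burstable-pod….slice` units above follow from the systemd cgroup driver the kubelet reported earlier (`cgroupDriver="systemd"`, `CgroupRoot:"/"`): each pod gets a slice named from its QoS class and UID, with dashes in the UID mapped to underscores. A small illustrative sketch of that naming convention, inferred from these log lines rather than taken from any official API:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming visible in the journal above:
// kubepods-<qos>-pod<UID with "-" replaced by "_">.slice
// Static pods use a config hash as their UID, so nothing is replaced for them.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Static kube-scheduler pod created just above.
	fmt.Println(podSliceName("burstable", "ed59345883da3e4fa722df31164f00f6"))
	// A pod with a regular UID has its dashes rewritten (seen later for kube-proxy).
	fmt.Println(podSliceName("besteffort", "2ce20756-fd80-4cea-8862-bff467e45eee"))
}
```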
Jun 21 06:09:31.370398 kubelet[2362]: E0621 06:09:31.370342 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.41:6443: connect: connection refused" interval="400ms" Jun 21 06:09:31.402440 kubelet[2362]: I0621 06:09:31.402389 2362 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.402920 kubelet[2362]: E0621 06:09:31.402877 2362 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.41:6443/api/v1/nodes\": dial tcp 10.128.0.41:6443: connect: connection refused" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.477898 kubelet[2362]: I0621 06:09:31.477762 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9d5b6c790e8a670b0dadbd87d64a2914-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"9d5b6c790e8a670b0dadbd87d64a2914\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.477898 kubelet[2362]: I0621 06:09:31.477848 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed59345883da3e4fa722df31164f00f6-kubeconfig\") pod \"kube-scheduler-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"ed59345883da3e4fa722df31164f00f6\") " pod="kube-system/kube-scheduler-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.477898 kubelet[2362]: I0621 06:09:31.477889 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92f5cb538f0d4a38b097cb60e62abcef-ca-certs\") pod \"kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"92f5cb538f0d4a38b097cb60e62abcef\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.477898 kubelet[2362]: I0621 06:09:31.477917 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92f5cb538f0d4a38b097cb60e62abcef-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"92f5cb538f0d4a38b097cb60e62abcef\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.478307 kubelet[2362]: I0621 06:09:31.477948 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d5b6c790e8a670b0dadbd87d64a2914-ca-certs\") pod \"kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"9d5b6c790e8a670b0dadbd87d64a2914\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.478307 kubelet[2362]: I0621 06:09:31.477979 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/9d5b6c790e8a670b0dadbd87d64a2914-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"9d5b6c790e8a670b0dadbd87d64a2914\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.478307 kubelet[2362]: I0621 06:09:31.478005 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92f5cb538f0d4a38b097cb60e62abcef-k8s-certs\") pod \"kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"92f5cb538f0d4a38b097cb60e62abcef\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.478307 kubelet[2362]: I0621 06:09:31.478031 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d5b6c790e8a670b0dadbd87d64a2914-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"9d5b6c790e8a670b0dadbd87d64a2914\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.478416 kubelet[2362]: I0621 06:09:31.478062 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d5b6c790e8a670b0dadbd87d64a2914-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"9d5b6c790e8a670b0dadbd87d64a2914\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.608012 kubelet[2362]: I0621 06:09:31.607894 2362 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.609015 kubelet[2362]: E0621 06:09:31.608966 2362 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.41:6443/api/v1/nodes\": dial tcp 10.128.0.41:6443: connect: connection refused" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:31.633258 containerd[1569]: time="2025-06-21T06:09:31.633187128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,Uid:92f5cb538f0d4a38b097cb60e62abcef,Namespace:kube-system,Attempt:0,}" Jun 21 06:09:31.647052 containerd[1569]: time="2025-06-21T06:09:31.647000112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,Uid:9d5b6c790e8a670b0dadbd87d64a2914,Namespace:kube-system,Attempt:0,}" Jun 21 06:09:31.670737 containerd[1569]: time="2025-06-21T06:09:31.670280985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,Uid:ed59345883da3e4fa722df31164f00f6,Namespace:kube-system,Attempt:0,}" Jun 21 06:09:31.691997 containerd[1569]: time="2025-06-21T06:09:31.691937725Z" level=info msg="connecting to shim f40af33c20631a9621a371b1e2b698d3044c06c162bf87a77ce0e962aa40aefe" address="unix:///run/containerd/s/f20c6a121a2531ac95c18f6a1dbee6a1e539c809906fcdf8c76a9a8af20f4d5d" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:09:31.696609 containerd[1569]: 
time="2025-06-21T06:09:31.696556861Z" level=info msg="connecting to shim 0413bcf60727d644087c97b6798dce7f64ea4b4b4342e498753367e457035678" address="unix:///run/containerd/s/e1f4f55daa5b60448df79ed4b42d0fecc9fb8bd4dc98e6b4ecd6df1344d97e58" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:09:31.730223 systemd[1]: Started cri-containerd-f40af33c20631a9621a371b1e2b698d3044c06c162bf87a77ce0e962aa40aefe.scope - libcontainer container f40af33c20631a9621a371b1e2b698d3044c06c162bf87a77ce0e962aa40aefe. Jun 21 06:09:31.772294 kubelet[2362]: E0621 06:09:31.772222 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.41:6443: connect: connection refused" interval="800ms" Jun 21 06:09:31.773447 containerd[1569]: time="2025-06-21T06:09:31.773283463Z" level=info msg="connecting to shim cfa8d773dcfde2ddbe3f6c81d6f0d6e7d5b67990568bb69c37637b2325131c3d" address="unix:///run/containerd/s/46b4d230694dc4c66f4aa7be25e0f93c81bda18da885d8dc063afde1f3938f09" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:09:31.789251 systemd[1]: Started cri-containerd-0413bcf60727d644087c97b6798dce7f64ea4b4b4342e498753367e457035678.scope - libcontainer container 0413bcf60727d644087c97b6798dce7f64ea4b4b4342e498753367e457035678. Jun 21 06:09:31.845212 systemd[1]: Started cri-containerd-cfa8d773dcfde2ddbe3f6c81d6f0d6e7d5b67990568bb69c37637b2325131c3d.scope - libcontainer container cfa8d773dcfde2ddbe3f6c81d6f0d6e7d5b67990568bb69c37637b2325131c3d. Jun 21 06:09:31.871536 containerd[1569]: time="2025-06-21T06:09:31.871406032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,Uid:92f5cb538f0d4a38b097cb60e62abcef,Namespace:kube-system,Attempt:0,} returns sandbox id \"f40af33c20631a9621a371b1e2b698d3044c06c162bf87a77ce0e962aa40aefe\"" Jun 21 06:09:31.876735 kubelet[2362]: E0621 06:09:31.876674 2362 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-21291" Jun 21 06:09:31.880492 containerd[1569]: time="2025-06-21T06:09:31.880454174Z" level=info msg="CreateContainer within sandbox \"f40af33c20631a9621a371b1e2b698d3044c06c162bf87a77ce0e962aa40aefe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 06:09:31.892747 containerd[1569]: time="2025-06-21T06:09:31.892706697Z" level=info msg="Container f9e6f964546ca2cb8572390f6d82794406ab143dcb65c59e9e5d4707808016d1: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:09:31.905538 containerd[1569]: time="2025-06-21T06:09:31.905478044Z" level=info msg="CreateContainer within sandbox \"f40af33c20631a9621a371b1e2b698d3044c06c162bf87a77ce0e962aa40aefe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9e6f964546ca2cb8572390f6d82794406ab143dcb65c59e9e5d4707808016d1\"" Jun 21 06:09:31.907565 containerd[1569]: time="2025-06-21T06:09:31.907431020Z" level=info msg="StartContainer for \"f9e6f964546ca2cb8572390f6d82794406ab143dcb65c59e9e5d4707808016d1\"" Jun 21 06:09:31.910458 containerd[1569]: time="2025-06-21T06:09:31.909962446Z" level=info msg="connecting to shim f9e6f964546ca2cb8572390f6d82794406ab143dcb65c59e9e5d4707808016d1" 
address="unix:///run/containerd/s/f20c6a121a2531ac95c18f6a1dbee6a1e539c809906fcdf8c76a9a8af20f4d5d" protocol=ttrpc version=3 Jun 21 06:09:31.938362 containerd[1569]: time="2025-06-21T06:09:31.938311967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,Uid:9d5b6c790e8a670b0dadbd87d64a2914,Namespace:kube-system,Attempt:0,} returns sandbox id \"0413bcf60727d644087c97b6798dce7f64ea4b4b4342e498753367e457035678\"" Jun 21 06:09:31.944382 kubelet[2362]: E0621 06:09:31.943996 2362 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flat" Jun 21 06:09:31.946142 containerd[1569]: time="2025-06-21T06:09:31.946107890Z" level=info msg="CreateContainer within sandbox \"0413bcf60727d644087c97b6798dce7f64ea4b4b4342e498753367e457035678\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 06:09:31.947031 systemd[1]: Started cri-containerd-f9e6f964546ca2cb8572390f6d82794406ab143dcb65c59e9e5d4707808016d1.scope - libcontainer container f9e6f964546ca2cb8572390f6d82794406ab143dcb65c59e9e5d4707808016d1. Jun 21 06:09:31.982738 containerd[1569]: time="2025-06-21T06:09:31.982689261Z" level=info msg="Container fdf5b463ae5e8d6fd567d8b1361f301c90be1035d76a139b61de250dab4edd95: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:09:31.994428 containerd[1569]: time="2025-06-21T06:09:31.994374816Z" level=info msg="CreateContainer within sandbox \"0413bcf60727d644087c97b6798dce7f64ea4b4b4342e498753367e457035678\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fdf5b463ae5e8d6fd567d8b1361f301c90be1035d76a139b61de250dab4edd95\"" Jun 21 06:09:31.996616 containerd[1569]: time="2025-06-21T06:09:31.996535942Z" level=info msg="StartContainer for \"fdf5b463ae5e8d6fd567d8b1361f301c90be1035d76a139b61de250dab4edd95\"" Jun 21 06:09:32.000037 containerd[1569]: time="2025-06-21T06:09:31.999995656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,Uid:ed59345883da3e4fa722df31164f00f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfa8d773dcfde2ddbe3f6c81d6f0d6e7d5b67990568bb69c37637b2325131c3d\"" Jun 21 06:09:32.001250 containerd[1569]: time="2025-06-21T06:09:32.001078465Z" level=info msg="connecting to shim fdf5b463ae5e8d6fd567d8b1361f301c90be1035d76a139b61de250dab4edd95" address="unix:///run/containerd/s/e1f4f55daa5b60448df79ed4b42d0fecc9fb8bd4dc98e6b4ecd6df1344d97e58" protocol=ttrpc version=3 Jun 21 06:09:32.004741 kubelet[2362]: E0621 06:09:32.003800 2362 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-21291" Jun 21 06:09:32.009551 containerd[1569]: time="2025-06-21T06:09:32.009514880Z" level=info msg="CreateContainer within sandbox \"cfa8d773dcfde2ddbe3f6c81d6f0d6e7d5b67990568bb69c37637b2325131c3d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 06:09:32.020038 kubelet[2362]: I0621 06:09:32.019565 2362 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:32.023514 kubelet[2362]: 
E0621 06:09:32.023461 2362 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.41:6443/api/v1/nodes\": dial tcp 10.128.0.41:6443: connect: connection refused" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:32.032101 containerd[1569]: time="2025-06-21T06:09:32.032032095Z" level=info msg="Container 0680fb21eefb637bd4e82b782a2195b406e07302ef71d5d7d1725e1382b77396: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:09:32.034455 systemd[1]: Started cri-containerd-fdf5b463ae5e8d6fd567d8b1361f301c90be1035d76a139b61de250dab4edd95.scope - libcontainer container fdf5b463ae5e8d6fd567d8b1361f301c90be1035d76a139b61de250dab4edd95. Jun 21 06:09:32.046542 containerd[1569]: time="2025-06-21T06:09:32.046495097Z" level=info msg="CreateContainer within sandbox \"cfa8d773dcfde2ddbe3f6c81d6f0d6e7d5b67990568bb69c37637b2325131c3d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0680fb21eefb637bd4e82b782a2195b406e07302ef71d5d7d1725e1382b77396\"" Jun 21 06:09:32.048949 containerd[1569]: time="2025-06-21T06:09:32.047989705Z" level=info msg="StartContainer for \"0680fb21eefb637bd4e82b782a2195b406e07302ef71d5d7d1725e1382b77396\"" Jun 21 06:09:32.051718 containerd[1569]: time="2025-06-21T06:09:32.051668854Z" level=info msg="connecting to shim 0680fb21eefb637bd4e82b782a2195b406e07302ef71d5d7d1725e1382b77396" address="unix:///run/containerd/s/46b4d230694dc4c66f4aa7be25e0f93c81bda18da885d8dc063afde1f3938f09" protocol=ttrpc version=3 Jun 21 06:09:32.096141 systemd[1]: Started cri-containerd-0680fb21eefb637bd4e82b782a2195b406e07302ef71d5d7d1725e1382b77396.scope - libcontainer container 0680fb21eefb637bd4e82b782a2195b406e07302ef71d5d7d1725e1382b77396. Jun 21 06:09:32.102914 containerd[1569]: time="2025-06-21T06:09:32.102872558Z" level=info msg="StartContainer for \"f9e6f964546ca2cb8572390f6d82794406ab143dcb65c59e9e5d4707808016d1\" returns successfully" Jun 21 06:09:32.110969 kubelet[2362]: W0621 06:09:32.110791 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.41:6443: connect: connection refused Jun 21 06:09:32.111368 kubelet[2362]: E0621 06:09:32.111260 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.41:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:09:32.197534 containerd[1569]: time="2025-06-21T06:09:32.197410819Z" level=info msg="StartContainer for \"fdf5b463ae5e8d6fd567d8b1361f301c90be1035d76a139b61de250dab4edd95\" returns successfully" Jun 21 06:09:32.280742 containerd[1569]: time="2025-06-21T06:09:32.280685982Z" level=info msg="StartContainer for \"0680fb21eefb637bd4e82b782a2195b406e07302ef71d5d7d1725e1382b77396\" returns successfully" Jun 21 06:09:32.830847 kubelet[2362]: I0621 06:09:32.830514 2362 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:35.243984 kubelet[2362]: I0621 06:09:35.243899 2362 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:35.263559 kubelet[2362]: E0621 06:09:35.263346 2362 event.go:359] "Server rejected event 
(will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal.184af9e5f01eec8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,UID:ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,},FirstTimestamp:2025-06-21 06:09:31.158244495 +0000 UTC m=+0.729951713,LastTimestamp:2025-06-21 06:09:31.158244495 +0000 UTC m=+0.729951713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal,}" Jun 21 06:09:35.329938 kubelet[2362]: E0621 06:09:35.329882 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Jun 21 06:09:36.156008 kubelet[2362]: I0621 06:09:36.155688 2362 apiserver.go:52] "Watching apiserver" Jun 21 06:09:36.177198 kubelet[2362]: I0621 06:09:36.177077 2362 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 21 06:09:36.815703 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 21 06:09:37.402581 systemd[1]: Reload requested from client PID 2635 ('systemctl') (unit session-7.scope)... Jun 21 06:09:37.402606 systemd[1]: Reloading... Jun 21 06:09:37.559879 zram_generator::config[2688]: No configuration found. Jun 21 06:09:37.668372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 06:09:37.864872 systemd[1]: Reloading finished in 461 ms. Jun 21 06:09:37.909274 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:09:37.929562 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 06:09:37.929922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:09:37.930030 systemd[1]: kubelet.service: Consumed 1.246s CPU time, 129.9M memory peak. Jun 21 06:09:37.933817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:09:38.238781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:09:38.250644 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 06:09:38.319692 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 06:09:38.320215 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 21 06:09:38.320215 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
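[Editor's note] The restarted kubelet (PID 2727) logs the same flag deprecations as the first instance: `--container-runtime-endpoint` and `--volume-plugin-dir` are expected to move into the file passed via `--config`. A hedged sketch of generating such a KubeletConfiguration with the upstream Go types; the field names are assumed from `k8s.io/kubelet/config/v1beta1`, and the endpoint path is a conventional placeholder rather than a value read from this host (the flexvolume directory matches the one the log recreates):

```go
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Settings the deprecated flags above would otherwise carry.
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
		CgroupDriver:             "systemd",
	}

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // write this to the file referenced by --config
}
```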
Jun 21 06:09:38.320450 kubelet[2727]: I0621 06:09:38.320329 2727 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 06:09:38.331247 kubelet[2727]: I0621 06:09:38.331211 2727 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 21 06:09:38.332872 kubelet[2727]: I0621 06:09:38.331392 2727 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 06:09:38.332872 kubelet[2727]: I0621 06:09:38.332016 2727 server.go:934] "Client rotation is on, will bootstrap in background" Jun 21 06:09:38.335355 kubelet[2727]: I0621 06:09:38.335325 2727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 21 06:09:38.337951 kubelet[2727]: I0621 06:09:38.337867 2727 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 06:09:38.343775 kubelet[2727]: I0621 06:09:38.342978 2727 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 06:09:38.347216 kubelet[2727]: I0621 06:09:38.347165 2727 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 21 06:09:38.347447 kubelet[2727]: I0621 06:09:38.347321 2727 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 21 06:09:38.347561 kubelet[2727]: I0621 06:09:38.347508 2727 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 06:09:38.347811 kubelet[2727]: I0621 06:09:38.347561 2727 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 06:09:38.347988 kubelet[2727]: I0621 06:09:38.347810 2727 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 06:09:38.347988 kubelet[2727]: I0621 06:09:38.347845 2727 
container_manager_linux.go:300] "Creating device plugin manager" Jun 21 06:09:38.347988 kubelet[2727]: I0621 06:09:38.347886 2727 state_mem.go:36] "Initialized new in-memory state store" Jun 21 06:09:38.349185 kubelet[2727]: I0621 06:09:38.348043 2727 kubelet.go:408] "Attempting to sync node with API server" Jun 21 06:09:38.349185 kubelet[2727]: I0621 06:09:38.348063 2727 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 06:09:38.349185 kubelet[2727]: I0621 06:09:38.348105 2727 kubelet.go:314] "Adding apiserver pod source" Jun 21 06:09:38.349185 kubelet[2727]: I0621 06:09:38.348121 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 06:09:38.349967 kubelet[2727]: I0621 06:09:38.349945 2727 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 06:09:38.350719 kubelet[2727]: I0621 06:09:38.350696 2727 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 06:09:38.351413 kubelet[2727]: I0621 06:09:38.351391 2727 server.go:1274] "Started kubelet" Jun 21 06:09:38.357517 kubelet[2727]: I0621 06:09:38.356221 2727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 06:09:38.357517 kubelet[2727]: I0621 06:09:38.356559 2727 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 06:09:38.357517 kubelet[2727]: I0621 06:09:38.356651 2727 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 06:09:38.363859 kubelet[2727]: I0621 06:09:38.363265 2727 server.go:449] "Adding debug handlers to kubelet server" Jun 21 06:09:38.364333 kubelet[2727]: I0621 06:09:38.364311 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 06:09:38.376866 kubelet[2727]: I0621 06:09:38.376025 2727 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 06:09:38.382256 kubelet[2727]: I0621 06:09:38.382225 2727 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 21 06:09:38.382569 kubelet[2727]: E0621 06:09:38.382510 2727 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" not found" Jun 21 06:09:38.384180 kubelet[2727]: I0621 06:09:38.384151 2727 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 21 06:09:38.384392 kubelet[2727]: I0621 06:09:38.384370 2727 reconciler.go:26] "Reconciler: start to sync state" Jun 21 06:09:38.399022 kubelet[2727]: I0621 06:09:38.398851 2727 factory.go:221] Registration of the systemd container factory successfully Jun 21 06:09:38.399595 kubelet[2727]: I0621 06:09:38.399456 2727 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 06:09:38.407613 kubelet[2727]: E0621 06:09:38.407412 2727 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 06:09:38.412113 kubelet[2727]: I0621 06:09:38.411979 2727 factory.go:221] Registration of the containerd container factory successfully Jun 21 06:09:38.426657 sudo[2744]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 21 06:09:38.428314 sudo[2744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 21 06:09:38.448441 kubelet[2727]: I0621 06:09:38.447937 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 06:09:38.451962 kubelet[2727]: I0621 06:09:38.451918 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 06:09:38.452330 kubelet[2727]: I0621 06:09:38.452315 2727 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 21 06:09:38.454696 kubelet[2727]: I0621 06:09:38.454672 2727 kubelet.go:2321] "Starting kubelet main sync loop" Jun 21 06:09:38.455956 kubelet[2727]: E0621 06:09:38.454903 2727 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 06:09:38.517715 kubelet[2727]: I0621 06:09:38.515893 2727 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 21 06:09:38.517922 kubelet[2727]: I0621 06:09:38.517897 2727 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 21 06:09:38.518045 kubelet[2727]: I0621 06:09:38.518022 2727 state_mem.go:36] "Initialized new in-memory state store" Jun 21 06:09:38.518378 kubelet[2727]: I0621 06:09:38.518360 2727 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 06:09:38.518503 kubelet[2727]: I0621 06:09:38.518469 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 06:09:38.518585 kubelet[2727]: I0621 06:09:38.518575 2727 policy_none.go:49] "None policy: Start" Jun 21 06:09:38.520673 kubelet[2727]: I0621 06:09:38.520651 2727 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 21 06:09:38.520812 kubelet[2727]: I0621 06:09:38.520801 2727 state_mem.go:35] "Initializing new in-memory state store" Jun 21 06:09:38.521204 kubelet[2727]: I0621 06:09:38.521187 2727 state_mem.go:75] "Updated machine memory state" Jun 21 06:09:38.538854 kubelet[2727]: I0621 06:09:38.536934 2727 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 06:09:38.539232 kubelet[2727]: I0621 06:09:38.539209 2727 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 06:09:38.539399 kubelet[2727]: I0621 06:09:38.539352 2727 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 06:09:38.539944 kubelet[2727]: I0621 06:09:38.539921 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 06:09:38.566698 kubelet[2727]: W0621 06:09:38.566631 2727 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jun 21 06:09:38.585304 kubelet[2727]: I0621 06:09:38.584585 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92f5cb538f0d4a38b097cb60e62abcef-ca-certs\") pod \"kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: 
\"92f5cb538f0d4a38b097cb60e62abcef\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.586442 kubelet[2727]: I0621 06:09:38.586127 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92f5cb538f0d4a38b097cb60e62abcef-k8s-certs\") pod \"kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"92f5cb538f0d4a38b097cb60e62abcef\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.588278 kubelet[2727]: I0621 06:09:38.588047 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d5b6c790e8a670b0dadbd87d64a2914-ca-certs\") pod \"kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"9d5b6c790e8a670b0dadbd87d64a2914\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.589011 kubelet[2727]: I0621 06:09:38.588955 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9d5b6c790e8a670b0dadbd87d64a2914-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"9d5b6c790e8a670b0dadbd87d64a2914\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.590462 kubelet[2727]: W0621 06:09:38.588514 2727 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jun 21 06:09:38.591008 kubelet[2727]: E0621 06:09:38.590638 2727 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.591008 kubelet[2727]: I0621 06:09:38.590588 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed59345883da3e4fa722df31164f00f6-kubeconfig\") pod \"kube-scheduler-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"ed59345883da3e4fa722df31164f00f6\") " pod="kube-system/kube-scheduler-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.591008 kubelet[2727]: W0621 06:09:38.588882 2727 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jun 21 06:09:38.591008 kubelet[2727]: I0621 06:09:38.590877 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92f5cb538f0d4a38b097cb60e62abcef-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"92f5cb538f0d4a38b097cb60e62abcef\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.591337 kubelet[2727]: I0621 06:09:38.591270 2727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d5b6c790e8a670b0dadbd87d64a2914-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"9d5b6c790e8a670b0dadbd87d64a2914\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.591729 kubelet[2727]: I0621 06:09:38.591321 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d5b6c790e8a670b0dadbd87d64a2914-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"9d5b6c790e8a670b0dadbd87d64a2914\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.592568 kubelet[2727]: I0621 06:09:38.592517 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d5b6c790e8a670b0dadbd87d64a2914-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" (UID: \"9d5b6c790e8a670b0dadbd87d64a2914\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.661332 kubelet[2727]: I0621 06:09:38.661294 2727 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.677715 kubelet[2727]: I0621 06:09:38.677106 2727 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:38.677715 kubelet[2727]: I0621 06:09:38.677202 2727 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:39.226977 sudo[2744]: pam_unix(sudo:session): session closed for user root Jun 21 06:09:39.349688 kubelet[2727]: I0621 06:09:39.349323 2727 apiserver.go:52] "Watching apiserver" Jun 21 06:09:39.385170 kubelet[2727]: I0621 06:09:39.385059 2727 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 21 06:09:39.505801 kubelet[2727]: W0621 06:09:39.505119 2727 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jun 21 06:09:39.505801 kubelet[2727]: E0621 06:09:39.505554 2727 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" Jun 21 06:09:39.550008 kubelet[2727]: I0621 06:09:39.549478 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" podStartSLOduration=1.549452804 podStartE2EDuration="1.549452804s" podCreationTimestamp="2025-06-21 06:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:09:39.537643349 +0000 UTC m=+1.278452176" watchObservedRunningTime="2025-06-21 06:09:39.549452804 +0000 UTC 
m=+1.290261631" Jun 21 06:09:39.562921 kubelet[2727]: I0621 06:09:39.562464 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" podStartSLOduration=2.562439069 podStartE2EDuration="2.562439069s" podCreationTimestamp="2025-06-21 06:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:09:39.551077421 +0000 UTC m=+1.291886237" watchObservedRunningTime="2025-06-21 06:09:39.562439069 +0000 UTC m=+1.303247885" Jun 21 06:09:39.580136 kubelet[2727]: I0621 06:09:39.579577 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" podStartSLOduration=1.579556027 podStartE2EDuration="1.579556027s" podCreationTimestamp="2025-06-21 06:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:09:39.563755164 +0000 UTC m=+1.304563980" watchObservedRunningTime="2025-06-21 06:09:39.579556027 +0000 UTC m=+1.320364854" Jun 21 06:09:41.512164 sudo[1824]: pam_unix(sudo:session): session closed for user root Jun 21 06:09:41.555086 sshd[1823]: Connection closed by 147.75.109.163 port 51420 Jun 21 06:09:41.555631 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Jun 21 06:09:41.562059 systemd[1]: sshd@6-10.128.0.41:22-147.75.109.163:51420.service: Deactivated successfully. Jun 21 06:09:41.565448 systemd[1]: session-7.scope: Deactivated successfully. Jun 21 06:09:41.565738 systemd[1]: session-7.scope: Consumed 6.401s CPU time, 269M memory peak. Jun 21 06:09:41.567569 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. Jun 21 06:09:41.570814 systemd-logind[1496]: Removed session 7. Jun 21 06:09:43.657501 kubelet[2727]: I0621 06:09:43.657453 2727 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 21 06:09:43.658481 kubelet[2727]: I0621 06:09:43.658343 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 21 06:09:43.658546 containerd[1569]: time="2025-06-21T06:09:43.657974662Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 21 06:09:44.715863 systemd[1]: Created slice kubepods-besteffort-pod2ce20756_fd80_4cea_8862_bff467e45eee.slice - libcontainer container kubepods-besteffort-pod2ce20756_fd80_4cea_8862_bff467e45eee.slice. 
Jun 21 06:09:44.743360 kubelet[2727]: I0621 06:09:44.743248 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-etc-cni-netd\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.743360 kubelet[2727]: I0621 06:09:44.743304 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/543b025b-b621-4694-abff-fb359d6c0ca6-hubble-tls\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.743360 kubelet[2727]: I0621 06:09:44.743332 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cni-path\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.743360 kubelet[2727]: I0621 06:09:44.743357 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-lib-modules\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.745105 kubelet[2727]: I0621 06:09:44.743380 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-hostproc\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.745105 kubelet[2727]: I0621 06:09:44.743403 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-config-path\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.745105 kubelet[2727]: I0621 06:09:44.743428 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ce20756-fd80-4cea-8862-bff467e45eee-xtables-lock\") pod \"kube-proxy-88zvz\" (UID: \"2ce20756-fd80-4cea-8862-bff467e45eee\") " pod="kube-system/kube-proxy-88zvz" Jun 21 06:09:44.745105 kubelet[2727]: I0621 06:09:44.743453 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ce20756-fd80-4cea-8862-bff467e45eee-lib-modules\") pod \"kube-proxy-88zvz\" (UID: \"2ce20756-fd80-4cea-8862-bff467e45eee\") " pod="kube-system/kube-proxy-88zvz" Jun 21 06:09:44.745105 kubelet[2727]: I0621 06:09:44.743481 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-host-proc-sys-net\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.745105 kubelet[2727]: I0621 06:09:44.743509 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-bpf-maps\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.745393 kubelet[2727]: I0621 06:09:44.743536 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-cgroup\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.745393 kubelet[2727]: I0621 06:09:44.743573 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/543b025b-b621-4694-abff-fb359d6c0ca6-clustermesh-secrets\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.745393 kubelet[2727]: I0621 06:09:44.743601 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgpcz\" (UniqueName: \"kubernetes.io/projected/543b025b-b621-4694-abff-fb359d6c0ca6-kube-api-access-mgpcz\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.745393 kubelet[2727]: I0621 06:09:44.743628 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ce20756-fd80-4cea-8862-bff467e45eee-kube-proxy\") pod \"kube-proxy-88zvz\" (UID: \"2ce20756-fd80-4cea-8862-bff467e45eee\") " pod="kube-system/kube-proxy-88zvz" Jun 21 06:09:44.745393 kubelet[2727]: I0621 06:09:44.743654 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9ln2\" (UniqueName: \"kubernetes.io/projected/2ce20756-fd80-4cea-8862-bff467e45eee-kube-api-access-v9ln2\") pod \"kube-proxy-88zvz\" (UID: \"2ce20756-fd80-4cea-8862-bff467e45eee\") " pod="kube-system/kube-proxy-88zvz" Jun 21 06:09:44.745624 kubelet[2727]: I0621 06:09:44.743683 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-run\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.745624 kubelet[2727]: I0621 06:09:44.743706 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-xtables-lock\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.745624 kubelet[2727]: I0621 06:09:44.743735 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-host-proc-sys-kernel\") pod \"cilium-mnz5r\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " pod="kube-system/cilium-mnz5r" Jun 21 06:09:44.746106 systemd[1]: Created slice kubepods-burstable-pod543b025b_b621_4694_abff_fb359d6c0ca6.slice - libcontainer container kubepods-burstable-pod543b025b_b621_4694_abff_fb359d6c0ca6.slice. 
Jun 21 06:09:44.833888 systemd[1]: Created slice kubepods-besteffort-pod41f16cf8_acc1_4aa5_b4b7_2a3847864c38.slice - libcontainer container kubepods-besteffort-pod41f16cf8_acc1_4aa5_b4b7_2a3847864c38.slice. Jun 21 06:09:44.844615 kubelet[2727]: I0621 06:09:44.844561 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bjct\" (UniqueName: \"kubernetes.io/projected/41f16cf8-acc1-4aa5-b4b7-2a3847864c38-kube-api-access-2bjct\") pod \"cilium-operator-5d85765b45-h8lqt\" (UID: \"41f16cf8-acc1-4aa5-b4b7-2a3847864c38\") " pod="kube-system/cilium-operator-5d85765b45-h8lqt" Jun 21 06:09:44.844769 kubelet[2727]: I0621 06:09:44.844689 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41f16cf8-acc1-4aa5-b4b7-2a3847864c38-cilium-config-path\") pod \"cilium-operator-5d85765b45-h8lqt\" (UID: \"41f16cf8-acc1-4aa5-b4b7-2a3847864c38\") " pod="kube-system/cilium-operator-5d85765b45-h8lqt" Jun 21 06:09:45.036729 containerd[1569]: time="2025-06-21T06:09:45.036590604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-88zvz,Uid:2ce20756-fd80-4cea-8862-bff467e45eee,Namespace:kube-system,Attempt:0,}" Jun 21 06:09:45.056099 containerd[1569]: time="2025-06-21T06:09:45.056044784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mnz5r,Uid:543b025b-b621-4694-abff-fb359d6c0ca6,Namespace:kube-system,Attempt:0,}" Jun 21 06:09:45.068626 containerd[1569]: time="2025-06-21T06:09:45.068560822Z" level=info msg="connecting to shim 517b48f58faa0f100e2a16381518035816190a3b12bee5633602e21660770bc8" address="unix:///run/containerd/s/3da87f66564c7a73d30692d32cfbfe718c0492af979e78f4460bb9d413bc6379" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:09:45.105118 containerd[1569]: time="2025-06-21T06:09:45.105007036Z" level=info msg="connecting to shim 86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5" address="unix:///run/containerd/s/26911e8b758ee1931292dfcb924714343080e935524f4211f2bd484b9967d3b0" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:09:45.109103 systemd[1]: Started cri-containerd-517b48f58faa0f100e2a16381518035816190a3b12bee5633602e21660770bc8.scope - libcontainer container 517b48f58faa0f100e2a16381518035816190a3b12bee5633602e21660770bc8. Jun 21 06:09:45.141612 containerd[1569]: time="2025-06-21T06:09:45.141564314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-h8lqt,Uid:41f16cf8-acc1-4aa5-b4b7-2a3847864c38,Namespace:kube-system,Attempt:0,}" Jun 21 06:09:45.149394 systemd[1]: Started cri-containerd-86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5.scope - libcontainer container 86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5. 
Jun 21 06:09:45.171374 containerd[1569]: time="2025-06-21T06:09:45.171301054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-88zvz,Uid:2ce20756-fd80-4cea-8862-bff467e45eee,Namespace:kube-system,Attempt:0,} returns sandbox id \"517b48f58faa0f100e2a16381518035816190a3b12bee5633602e21660770bc8\"" Jun 21 06:09:45.187917 containerd[1569]: time="2025-06-21T06:09:45.187872263Z" level=info msg="CreateContainer within sandbox \"517b48f58faa0f100e2a16381518035816190a3b12bee5633602e21660770bc8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 06:09:45.193592 containerd[1569]: time="2025-06-21T06:09:45.193514003Z" level=info msg="connecting to shim ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f" address="unix:///run/containerd/s/1e165961bf9d49f39547007becf3efe83a427dfbb7f2613b527cdbe5c3892623" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:09:45.214019 containerd[1569]: time="2025-06-21T06:09:45.213343891Z" level=info msg="Container c275f0fad183eff6ace0bb7f1ab385f0979e99e6e1267b4ecf030381fe3f1694: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:09:45.230059 containerd[1569]: time="2025-06-21T06:09:45.229996650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mnz5r,Uid:543b025b-b621-4694-abff-fb359d6c0ca6,Namespace:kube-system,Attempt:0,} returns sandbox id \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\"" Jun 21 06:09:45.233729 containerd[1569]: time="2025-06-21T06:09:45.233683478Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 21 06:09:45.235131 containerd[1569]: time="2025-06-21T06:09:45.234989376Z" level=info msg="CreateContainer within sandbox \"517b48f58faa0f100e2a16381518035816190a3b12bee5633602e21660770bc8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c275f0fad183eff6ace0bb7f1ab385f0979e99e6e1267b4ecf030381fe3f1694\"" Jun 21 06:09:45.235690 containerd[1569]: time="2025-06-21T06:09:45.235657898Z" level=info msg="StartContainer for \"c275f0fad183eff6ace0bb7f1ab385f0979e99e6e1267b4ecf030381fe3f1694\"" Jun 21 06:09:45.237720 containerd[1569]: time="2025-06-21T06:09:45.237666269Z" level=info msg="connecting to shim c275f0fad183eff6ace0bb7f1ab385f0979e99e6e1267b4ecf030381fe3f1694" address="unix:///run/containerd/s/3da87f66564c7a73d30692d32cfbfe718c0492af979e78f4460bb9d413bc6379" protocol=ttrpc version=3 Jun 21 06:09:45.259069 systemd[1]: Started cri-containerd-ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f.scope - libcontainer container ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f. Jun 21 06:09:45.280035 systemd[1]: Started cri-containerd-c275f0fad183eff6ace0bb7f1ab385f0979e99e6e1267b4ecf030381fe3f1694.scope - libcontainer container c275f0fad183eff6ace0bb7f1ab385f0979e99e6e1267b4ecf030381fe3f1694. 
Jun 21 06:09:45.360812 containerd[1569]: time="2025-06-21T06:09:45.360602782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-h8lqt,Uid:41f16cf8-acc1-4aa5-b4b7-2a3847864c38,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\"" Jun 21 06:09:45.368033 containerd[1569]: time="2025-06-21T06:09:45.367993500Z" level=info msg="StartContainer for \"c275f0fad183eff6ace0bb7f1ab385f0979e99e6e1267b4ecf030381fe3f1694\" returns successfully" Jun 21 06:09:45.533134 kubelet[2727]: I0621 06:09:45.533068 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-88zvz" podStartSLOduration=1.533042153 podStartE2EDuration="1.533042153s" podCreationTimestamp="2025-06-21 06:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:09:45.529307552 +0000 UTC m=+7.270116379" watchObservedRunningTime="2025-06-21 06:09:45.533042153 +0000 UTC m=+7.273850984" Jun 21 06:09:50.214304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583813247.mount: Deactivated successfully. Jun 21 06:09:50.399266 update_engine[1502]: I20250621 06:09:50.398990 1502 update_attempter.cc:509] Updating boot flags... Jun 21 06:09:53.519107 containerd[1569]: time="2025-06-21T06:09:53.519032533Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:53.520613 containerd[1569]: time="2025-06-21T06:09:53.520544156Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 21 06:09:53.522369 containerd[1569]: time="2025-06-21T06:09:53.522289828Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:53.525925 containerd[1569]: time="2025-06-21T06:09:53.525760452Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.292027128s" Jun 21 06:09:53.525925 containerd[1569]: time="2025-06-21T06:09:53.525806941Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 21 06:09:53.528401 containerd[1569]: time="2025-06-21T06:09:53.528139980Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 21 06:09:53.532459 containerd[1569]: time="2025-06-21T06:09:53.530015763Z" level=info msg="CreateContainer within sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 06:09:53.542124 containerd[1569]: time="2025-06-21T06:09:53.542085981Z" level=info msg="Container 0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1: 
CDI devices from CRI Config.CDIDevices: []" Jun 21 06:09:53.556132 containerd[1569]: time="2025-06-21T06:09:53.556084981Z" level=info msg="CreateContainer within sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\"" Jun 21 06:09:53.556998 containerd[1569]: time="2025-06-21T06:09:53.556775089Z" level=info msg="StartContainer for \"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\"" Jun 21 06:09:53.558911 containerd[1569]: time="2025-06-21T06:09:53.558795272Z" level=info msg="connecting to shim 0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1" address="unix:///run/containerd/s/26911e8b758ee1931292dfcb924714343080e935524f4211f2bd484b9967d3b0" protocol=ttrpc version=3 Jun 21 06:09:53.594112 systemd[1]: Started cri-containerd-0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1.scope - libcontainer container 0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1. Jun 21 06:09:53.649775 containerd[1569]: time="2025-06-21T06:09:53.649711876Z" level=info msg="StartContainer for \"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\" returns successfully" Jun 21 06:09:53.664623 systemd[1]: cri-containerd-0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1.scope: Deactivated successfully. Jun 21 06:09:53.669860 containerd[1569]: time="2025-06-21T06:09:53.669795736Z" level=info msg="received exit event container_id:\"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\" id:\"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\" pid:3165 exited_at:{seconds:1750486193 nanos:669268182}" Jun 21 06:09:53.670198 containerd[1569]: time="2025-06-21T06:09:53.670131778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\" id:\"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\" pid:3165 exited_at:{seconds:1750486193 nanos:669268182}" Jun 21 06:09:53.700376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1-rootfs.mount: Deactivated successfully. Jun 21 06:09:56.476701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617278885.mount: Deactivated successfully. Jun 21 06:09:56.630416 containerd[1569]: time="2025-06-21T06:09:56.630369878Z" level=info msg="CreateContainer within sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 06:09:56.650014 containerd[1569]: time="2025-06-21T06:09:56.649029277Z" level=info msg="Container b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:09:56.664394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292835262.mount: Deactivated successfully. 
Jun 21 06:09:56.669072 containerd[1569]: time="2025-06-21T06:09:56.669034060Z" level=info msg="CreateContainer within sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\"" Jun 21 06:09:56.670100 containerd[1569]: time="2025-06-21T06:09:56.670046298Z" level=info msg="StartContainer for \"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\"" Jun 21 06:09:56.673863 containerd[1569]: time="2025-06-21T06:09:56.673233346Z" level=info msg="connecting to shim b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440" address="unix:///run/containerd/s/26911e8b758ee1931292dfcb924714343080e935524f4211f2bd484b9967d3b0" protocol=ttrpc version=3 Jun 21 06:09:56.779769 systemd[1]: Started cri-containerd-b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440.scope - libcontainer container b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440. Jun 21 06:09:56.868474 containerd[1569]: time="2025-06-21T06:09:56.868420528Z" level=info msg="StartContainer for \"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\" returns successfully" Jun 21 06:09:56.904443 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 06:09:56.905969 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 06:09:56.906589 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 21 06:09:56.909995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 06:09:56.915381 systemd[1]: cri-containerd-b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440.scope: Deactivated successfully. Jun 21 06:09:56.918656 containerd[1569]: time="2025-06-21T06:09:56.918591087Z" level=info msg="received exit event container_id:\"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\" id:\"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\" pid:3222 exited_at:{seconds:1750486196 nanos:915632852}" Jun 21 06:09:56.923062 containerd[1569]: time="2025-06-21T06:09:56.923014999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\" id:\"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\" pid:3222 exited_at:{seconds:1750486196 nanos:915632852}" Jun 21 06:09:56.962269 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 06:09:57.456050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440-rootfs.mount: Deactivated successfully. 
Jun 21 06:09:57.503356 containerd[1569]: time="2025-06-21T06:09:57.503295139Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:57.504463 containerd[1569]: time="2025-06-21T06:09:57.504406604Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 21 06:09:57.505787 containerd[1569]: time="2025-06-21T06:09:57.505721820Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:09:57.507449 containerd[1569]: time="2025-06-21T06:09:57.507264490Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.97907437s" Jun 21 06:09:57.507449 containerd[1569]: time="2025-06-21T06:09:57.507312253Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 21 06:09:57.511195 containerd[1569]: time="2025-06-21T06:09:57.511138472Z" level=info msg="CreateContainer within sandbox \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 21 06:09:57.530459 containerd[1569]: time="2025-06-21T06:09:57.529248603Z" level=info msg="Container b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:09:57.541917 containerd[1569]: time="2025-06-21T06:09:57.541872705Z" level=info msg="CreateContainer within sandbox \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\"" Jun 21 06:09:57.542866 containerd[1569]: time="2025-06-21T06:09:57.542727356Z" level=info msg="StartContainer for \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\"" Jun 21 06:09:57.544978 containerd[1569]: time="2025-06-21T06:09:57.544940924Z" level=info msg="connecting to shim b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26" address="unix:///run/containerd/s/1e165961bf9d49f39547007becf3efe83a427dfbb7f2613b527cdbe5c3892623" protocol=ttrpc version=3 Jun 21 06:09:57.583032 systemd[1]: Started cri-containerd-b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26.scope - libcontainer container b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26. 
Jun 21 06:09:57.633109 containerd[1569]: time="2025-06-21T06:09:57.632991066Z" level=info msg="StartContainer for \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\" returns successfully" Jun 21 06:09:57.653650 containerd[1569]: time="2025-06-21T06:09:57.653591012Z" level=info msg="CreateContainer within sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 06:09:57.676873 containerd[1569]: time="2025-06-21T06:09:57.674021062Z" level=info msg="Container 292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:09:57.695503 containerd[1569]: time="2025-06-21T06:09:57.695440252Z" level=info msg="CreateContainer within sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\"" Jun 21 06:09:57.696740 containerd[1569]: time="2025-06-21T06:09:57.696703002Z" level=info msg="StartContainer for \"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\"" Jun 21 06:09:57.702045 containerd[1569]: time="2025-06-21T06:09:57.701999512Z" level=info msg="connecting to shim 292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134" address="unix:///run/containerd/s/26911e8b758ee1931292dfcb924714343080e935524f4211f2bd484b9967d3b0" protocol=ttrpc version=3 Jun 21 06:09:57.731275 systemd[1]: Started cri-containerd-292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134.scope - libcontainer container 292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134. Jun 21 06:09:57.811084 containerd[1569]: time="2025-06-21T06:09:57.810744993Z" level=info msg="StartContainer for \"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\" returns successfully" Jun 21 06:09:57.816423 systemd[1]: cri-containerd-292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134.scope: Deactivated successfully. Jun 21 06:09:57.821677 containerd[1569]: time="2025-06-21T06:09:57.821583838Z" level=info msg="received exit event container_id:\"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\" id:\"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\" pid:3305 exited_at:{seconds:1750486197 nanos:820469300}" Jun 21 06:09:57.823467 containerd[1569]: time="2025-06-21T06:09:57.823429374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\" id:\"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\" pid:3305 exited_at:{seconds:1750486197 nanos:820469300}" Jun 21 06:09:58.454201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653331607.mount: Deactivated successfully. 
Jun 21 06:09:58.675107 containerd[1569]: time="2025-06-21T06:09:58.675057539Z" level=info msg="CreateContainer within sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 06:09:58.701866 containerd[1569]: time="2025-06-21T06:09:58.699069540Z" level=info msg="Container 53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:09:58.722104 containerd[1569]: time="2025-06-21T06:09:58.721960769Z" level=info msg="CreateContainer within sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\"" Jun 21 06:09:58.723586 containerd[1569]: time="2025-06-21T06:09:58.723383673Z" level=info msg="StartContainer for \"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\"" Jun 21 06:09:58.727526 containerd[1569]: time="2025-06-21T06:09:58.727480118Z" level=info msg="connecting to shim 53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f" address="unix:///run/containerd/s/26911e8b758ee1931292dfcb924714343080e935524f4211f2bd484b9967d3b0" protocol=ttrpc version=3 Jun 21 06:09:58.783045 systemd[1]: Started cri-containerd-53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f.scope - libcontainer container 53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f. Jun 21 06:09:58.908425 containerd[1569]: time="2025-06-21T06:09:58.908375441Z" level=info msg="StartContainer for \"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\" returns successfully" Jun 21 06:09:58.912766 systemd[1]: cri-containerd-53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f.scope: Deactivated successfully. Jun 21 06:09:58.913480 containerd[1569]: time="2025-06-21T06:09:58.912751107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\" id:\"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\" pid:3346 exited_at:{seconds:1750486198 nanos:912441423}" Jun 21 06:09:58.913480 containerd[1569]: time="2025-06-21T06:09:58.912880721Z" level=info msg="received exit event container_id:\"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\" id:\"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\" pid:3346 exited_at:{seconds:1750486198 nanos:912441423}" Jun 21 06:09:58.957757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f-rootfs.mount: Deactivated successfully. 
Jun 21 06:09:59.675972 containerd[1569]: time="2025-06-21T06:09:59.675818982Z" level=info msg="CreateContainer within sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 06:09:59.693889 containerd[1569]: time="2025-06-21T06:09:59.693200169Z" level=info msg="Container 5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:09:59.707558 kubelet[2727]: I0621 06:09:59.707469 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-h8lqt" podStartSLOduration=3.562993243 podStartE2EDuration="15.707445295s" podCreationTimestamp="2025-06-21 06:09:44 +0000 UTC" firstStartedPulling="2025-06-21 06:09:45.364082886 +0000 UTC m=+7.104891692" lastFinishedPulling="2025-06-21 06:09:57.508534939 +0000 UTC m=+19.249343744" observedRunningTime="2025-06-21 06:09:58.881808568 +0000 UTC m=+20.622617394" watchObservedRunningTime="2025-06-21 06:09:59.707445295 +0000 UTC m=+21.448254123" Jun 21 06:09:59.709589 containerd[1569]: time="2025-06-21T06:09:59.709535268Z" level=info msg="CreateContainer within sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\"" Jun 21 06:09:59.710153 containerd[1569]: time="2025-06-21T06:09:59.710071264Z" level=info msg="StartContainer for \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\"" Jun 21 06:09:59.713088 containerd[1569]: time="2025-06-21T06:09:59.713015684Z" level=info msg="connecting to shim 5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07" address="unix:///run/containerd/s/26911e8b758ee1931292dfcb924714343080e935524f4211f2bd484b9967d3b0" protocol=ttrpc version=3 Jun 21 06:09:59.761191 systemd[1]: Started cri-containerd-5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07.scope - libcontainer container 5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07. Jun 21 06:09:59.819060 containerd[1569]: time="2025-06-21T06:09:59.818886422Z" level=info msg="StartContainer for \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" returns successfully" Jun 21 06:10:00.018982 containerd[1569]: time="2025-06-21T06:10:00.018737326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" id:\"97a3d0d211a449484a363058a89b34800898983a93cccb7417b4bc6fb799ed61\" pid:3416 exited_at:{seconds:1750486200 nanos:18379484}" Jun 21 06:10:00.064911 kubelet[2727]: I0621 06:10:00.064804 2727 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 21 06:10:00.174171 systemd[1]: Created slice kubepods-burstable-pode9f9f400_5b8c_4e63_a32e_50114fd4be50.slice - libcontainer container kubepods-burstable-pode9f9f400_5b8c_4e63_a32e_50114fd4be50.slice. Jun 21 06:10:00.188042 systemd[1]: Created slice kubepods-burstable-pode837029c_7faf_4b97_8662_2273aeb18f41.slice - libcontainer container kubepods-burstable-pode837029c_7faf_4b97_8662_2273aeb18f41.slice. 
Jun 21 06:10:00.256134 kubelet[2727]: I0621 06:10:00.256070 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9f9f400-5b8c-4e63-a32e-50114fd4be50-config-volume\") pod \"coredns-7c65d6cfc9-2wp97\" (UID: \"e9f9f400-5b8c-4e63-a32e-50114fd4be50\") " pod="kube-system/coredns-7c65d6cfc9-2wp97" Jun 21 06:10:00.256575 kubelet[2727]: I0621 06:10:00.256450 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lwx9\" (UniqueName: \"kubernetes.io/projected/e837029c-7faf-4b97-8662-2273aeb18f41-kube-api-access-5lwx9\") pod \"coredns-7c65d6cfc9-twzsm\" (UID: \"e837029c-7faf-4b97-8662-2273aeb18f41\") " pod="kube-system/coredns-7c65d6cfc9-twzsm" Jun 21 06:10:00.256814 kubelet[2727]: I0621 06:10:00.256734 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e837029c-7faf-4b97-8662-2273aeb18f41-config-volume\") pod \"coredns-7c65d6cfc9-twzsm\" (UID: \"e837029c-7faf-4b97-8662-2273aeb18f41\") " pod="kube-system/coredns-7c65d6cfc9-twzsm" Jun 21 06:10:00.257096 kubelet[2727]: I0621 06:10:00.257056 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872st\" (UniqueName: \"kubernetes.io/projected/e9f9f400-5b8c-4e63-a32e-50114fd4be50-kube-api-access-872st\") pod \"coredns-7c65d6cfc9-2wp97\" (UID: \"e9f9f400-5b8c-4e63-a32e-50114fd4be50\") " pod="kube-system/coredns-7c65d6cfc9-2wp97" Jun 21 06:10:00.487202 containerd[1569]: time="2025-06-21T06:10:00.487143409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2wp97,Uid:e9f9f400-5b8c-4e63-a32e-50114fd4be50,Namespace:kube-system,Attempt:0,}" Jun 21 06:10:00.496151 containerd[1569]: time="2025-06-21T06:10:00.495803513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-twzsm,Uid:e837029c-7faf-4b97-8662-2273aeb18f41,Namespace:kube-system,Attempt:0,}" Jun 21 06:10:00.721858 kubelet[2727]: I0621 06:10:00.721206 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mnz5r" podStartSLOduration=8.426399442 podStartE2EDuration="16.721181529s" podCreationTimestamp="2025-06-21 06:09:44 +0000 UTC" firstStartedPulling="2025-06-21 06:09:45.232579798 +0000 UTC m=+6.973388614" lastFinishedPulling="2025-06-21 06:09:53.527361885 +0000 UTC m=+15.268170701" observedRunningTime="2025-06-21 06:10:00.718402521 +0000 UTC m=+22.459211384" watchObservedRunningTime="2025-06-21 06:10:00.721181529 +0000 UTC m=+22.461990357" Jun 21 06:10:02.366592 systemd-networkd[1454]: cilium_host: Link UP Jun 21 06:10:02.371984 systemd-networkd[1454]: cilium_net: Link UP Jun 21 06:10:02.374588 systemd-networkd[1454]: cilium_net: Gained carrier Jun 21 06:10:02.377159 systemd-networkd[1454]: cilium_host: Gained carrier Jun 21 06:10:02.513921 systemd-networkd[1454]: cilium_vxlan: Link UP Jun 21 06:10:02.513935 systemd-networkd[1454]: cilium_vxlan: Gained carrier Jun 21 06:10:02.803221 systemd-networkd[1454]: cilium_net: Gained IPv6LL Jun 21 06:10:02.810080 kernel: NET: Registered PF_ALG protocol family Jun 21 06:10:03.283443 systemd-networkd[1454]: cilium_host: Gained IPv6LL Jun 21 06:10:03.706713 systemd-networkd[1454]: lxc_health: Link UP Jun 21 06:10:03.732109 systemd-networkd[1454]: cilium_vxlan: Gained IPv6LL Jun 21 06:10:03.740325 systemd-networkd[1454]: lxc_health: Gained 
carrier Jun 21 06:10:04.064645 systemd-networkd[1454]: lxc6e5ed133e411: Link UP Jun 21 06:10:04.075883 kernel: eth0: renamed from tmpb1c7e Jun 21 06:10:04.092975 systemd-networkd[1454]: lxc6e5ed133e411: Gained carrier Jun 21 06:10:04.095618 systemd-networkd[1454]: lxc51b33fddf56b: Link UP Jun 21 06:10:04.107858 kernel: eth0: renamed from tmp0a871 Jun 21 06:10:04.121742 systemd-networkd[1454]: lxc51b33fddf56b: Gained carrier Jun 21 06:10:05.652274 systemd-networkd[1454]: lxc_health: Gained IPv6LL Jun 21 06:10:05.907350 systemd-networkd[1454]: lxc51b33fddf56b: Gained IPv6LL Jun 21 06:10:06.163243 systemd-networkd[1454]: lxc6e5ed133e411: Gained IPv6LL Jun 21 06:10:09.103191 containerd[1569]: time="2025-06-21T06:10:09.101944569Z" level=info msg="connecting to shim 0a871e9d43a9c2e00f34d060184987b4ce7903126a119150b2875231af671360" address="unix:///run/containerd/s/991500b4628200f083b037593e99d62e96021de96653db7c77795bc1babbbd71" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:10:09.119863 containerd[1569]: time="2025-06-21T06:10:09.119637142Z" level=info msg="connecting to shim b1c7eb4dd2098436ce76ec7446b60916bb6c5b362b46bbe2db240e041aadf744" address="unix:///run/containerd/s/904d494f6e74ab67d84cc6c60070c76b837b36bad5c5319901b3b1eef427eb64" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:10:09.191472 systemd[1]: Started cri-containerd-b1c7eb4dd2098436ce76ec7446b60916bb6c5b362b46bbe2db240e041aadf744.scope - libcontainer container b1c7eb4dd2098436ce76ec7446b60916bb6c5b362b46bbe2db240e041aadf744. Jun 21 06:10:09.204770 systemd[1]: Started cri-containerd-0a871e9d43a9c2e00f34d060184987b4ce7903126a119150b2875231af671360.scope - libcontainer container 0a871e9d43a9c2e00f34d060184987b4ce7903126a119150b2875231af671360. Jun 21 06:10:09.324677 containerd[1569]: time="2025-06-21T06:10:09.324624110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2wp97,Uid:e9f9f400-5b8c-4e63-a32e-50114fd4be50,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1c7eb4dd2098436ce76ec7446b60916bb6c5b362b46bbe2db240e041aadf744\"" Jun 21 06:10:09.332720 containerd[1569]: time="2025-06-21T06:10:09.332679223Z" level=info msg="CreateContainer within sandbox \"b1c7eb4dd2098436ce76ec7446b60916bb6c5b362b46bbe2db240e041aadf744\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 06:10:09.352146 containerd[1569]: time="2025-06-21T06:10:09.352059834Z" level=info msg="Container 8c6e46c24d7b37c26121c44abdbbf677a3a968eff95a4b8ebdb362a8db99f50e: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:10:09.364144 containerd[1569]: time="2025-06-21T06:10:09.363924393Z" level=info msg="CreateContainer within sandbox \"b1c7eb4dd2098436ce76ec7446b60916bb6c5b362b46bbe2db240e041aadf744\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c6e46c24d7b37c26121c44abdbbf677a3a968eff95a4b8ebdb362a8db99f50e\"" Jun 21 06:10:09.366856 containerd[1569]: time="2025-06-21T06:10:09.366307415Z" level=info msg="StartContainer for \"8c6e46c24d7b37c26121c44abdbbf677a3a968eff95a4b8ebdb362a8db99f50e\"" Jun 21 06:10:09.369867 containerd[1569]: time="2025-06-21T06:10:09.369814553Z" level=info msg="connecting to shim 8c6e46c24d7b37c26121c44abdbbf677a3a968eff95a4b8ebdb362a8db99f50e" address="unix:///run/containerd/s/904d494f6e74ab67d84cc6c60070c76b837b36bad5c5319901b3b1eef427eb64" protocol=ttrpc version=3 Jun 21 06:10:09.379245 containerd[1569]: time="2025-06-21T06:10:09.379194173Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-twzsm,Uid:e837029c-7faf-4b97-8662-2273aeb18f41,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a871e9d43a9c2e00f34d060184987b4ce7903126a119150b2875231af671360\"" Jun 21 06:10:09.384379 containerd[1569]: time="2025-06-21T06:10:09.384161395Z" level=info msg="CreateContainer within sandbox \"0a871e9d43a9c2e00f34d060184987b4ce7903126a119150b2875231af671360\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 06:10:09.401259 containerd[1569]: time="2025-06-21T06:10:09.400601271Z" level=info msg="Container a48d8ff91b2abafc194084cd2d5e3cce43f14b8d243af51eb5975ab942f9af27: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:10:09.403972 systemd[1]: Started cri-containerd-8c6e46c24d7b37c26121c44abdbbf677a3a968eff95a4b8ebdb362a8db99f50e.scope - libcontainer container 8c6e46c24d7b37c26121c44abdbbf677a3a968eff95a4b8ebdb362a8db99f50e. Jun 21 06:10:09.420116 containerd[1569]: time="2025-06-21T06:10:09.420048848Z" level=info msg="CreateContainer within sandbox \"0a871e9d43a9c2e00f34d060184987b4ce7903126a119150b2875231af671360\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a48d8ff91b2abafc194084cd2d5e3cce43f14b8d243af51eb5975ab942f9af27\"" Jun 21 06:10:09.422292 containerd[1569]: time="2025-06-21T06:10:09.422231793Z" level=info msg="StartContainer for \"a48d8ff91b2abafc194084cd2d5e3cce43f14b8d243af51eb5975ab942f9af27\"" Jun 21 06:10:09.427023 containerd[1569]: time="2025-06-21T06:10:09.426952039Z" level=info msg="connecting to shim a48d8ff91b2abafc194084cd2d5e3cce43f14b8d243af51eb5975ab942f9af27" address="unix:///run/containerd/s/991500b4628200f083b037593e99d62e96021de96653db7c77795bc1babbbd71" protocol=ttrpc version=3 Jun 21 06:10:09.469090 systemd[1]: Started cri-containerd-a48d8ff91b2abafc194084cd2d5e3cce43f14b8d243af51eb5975ab942f9af27.scope - libcontainer container a48d8ff91b2abafc194084cd2d5e3cce43f14b8d243af51eb5975ab942f9af27. 
Jun 21 06:10:09.486408 containerd[1569]: time="2025-06-21T06:10:09.486320634Z" level=info msg="StartContainer for \"8c6e46c24d7b37c26121c44abdbbf677a3a968eff95a4b8ebdb362a8db99f50e\" returns successfully" Jun 21 06:10:09.538250 containerd[1569]: time="2025-06-21T06:10:09.538198867Z" level=info msg="StartContainer for \"a48d8ff91b2abafc194084cd2d5e3cce43f14b8d243af51eb5975ab942f9af27\" returns successfully" Jun 21 06:10:09.739507 kubelet[2727]: I0621 06:10:09.739063 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-twzsm" podStartSLOduration=25.73903894 podStartE2EDuration="25.73903894s" podCreationTimestamp="2025-06-21 06:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:10:09.738288446 +0000 UTC m=+31.479097274" watchObservedRunningTime="2025-06-21 06:10:09.73903894 +0000 UTC m=+31.479847767" Jun 21 06:10:09.778982 kubelet[2727]: I0621 06:10:09.778886 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2wp97" podStartSLOduration=25.777892058 podStartE2EDuration="25.777892058s" podCreationTimestamp="2025-06-21 06:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:10:09.757455371 +0000 UTC m=+31.498264193" watchObservedRunningTime="2025-06-21 06:10:09.777892058 +0000 UTC m=+31.518700884" Jun 21 06:10:12.042093 ntpd[1490]: Listen normally on 7 cilium_host 192.168.0.254:123 Jun 21 06:10:12.042216 ntpd[1490]: Listen normally on 8 cilium_net [fe80::6c51:2fff:febb:c5c0%4]:123 Jun 21 06:10:12.042724 ntpd[1490]: 21 Jun 06:10:12 ntpd[1490]: Listen normally on 7 cilium_host 192.168.0.254:123 Jun 21 06:10:12.042724 ntpd[1490]: 21 Jun 06:10:12 ntpd[1490]: Listen normally on 8 cilium_net [fe80::6c51:2fff:febb:c5c0%4]:123 Jun 21 06:10:12.042724 ntpd[1490]: 21 Jun 06:10:12 ntpd[1490]: Listen normally on 9 cilium_host [fe80::207a:b6ff:fe67:f907%5]:123 Jun 21 06:10:12.042724 ntpd[1490]: 21 Jun 06:10:12 ntpd[1490]: Listen normally on 10 cilium_vxlan [fe80::141b:b9ff:fed3:360a%6]:123 Jun 21 06:10:12.042724 ntpd[1490]: 21 Jun 06:10:12 ntpd[1490]: Listen normally on 11 lxc_health [fe80::f8a9:5bff:febb:f823%8]:123 Jun 21 06:10:12.042724 ntpd[1490]: 21 Jun 06:10:12 ntpd[1490]: Listen normally on 12 lxc6e5ed133e411 [fe80::b809:47ff:fe0c:3d31%10]:123 Jun 21 06:10:12.042724 ntpd[1490]: 21 Jun 06:10:12 ntpd[1490]: Listen normally on 13 lxc51b33fddf56b [fe80::b0f3:ffff:fe5a:34b0%12]:123 Jun 21 06:10:12.042295 ntpd[1490]: Listen normally on 9 cilium_host [fe80::207a:b6ff:fe67:f907%5]:123 Jun 21 06:10:12.042354 ntpd[1490]: Listen normally on 10 cilium_vxlan [fe80::141b:b9ff:fed3:360a%6]:123 Jun 21 06:10:12.042419 ntpd[1490]: Listen normally on 11 lxc_health [fe80::f8a9:5bff:febb:f823%8]:123 Jun 21 06:10:12.042475 ntpd[1490]: Listen normally on 12 lxc6e5ed133e411 [fe80::b809:47ff:fe0c:3d31%10]:123 Jun 21 06:10:12.042530 ntpd[1490]: Listen normally on 13 lxc51b33fddf56b [fe80::b0f3:ffff:fe5a:34b0%12]:123 Jun 21 06:10:36.114384 systemd[1]: Started sshd@7-10.128.0.41:22-147.75.109.163:59562.service - OpenSSH per-connection server daemon (147.75.109.163:59562). 
Jun 21 06:10:36.421867 sshd[4058]: Accepted publickey for core from 147.75.109.163 port 59562 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:10:36.425158 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:10:36.437916 systemd-logind[1496]: New session 8 of user core. Jun 21 06:10:36.444488 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 21 06:10:36.737994 sshd[4060]: Connection closed by 147.75.109.163 port 59562 Jun 21 06:10:36.738883 sshd-session[4058]: pam_unix(sshd:session): session closed for user core Jun 21 06:10:36.745206 systemd[1]: sshd@7-10.128.0.41:22-147.75.109.163:59562.service: Deactivated successfully. Jun 21 06:10:36.748357 systemd[1]: session-8.scope: Deactivated successfully. Jun 21 06:10:36.749583 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. Jun 21 06:10:36.751943 systemd-logind[1496]: Removed session 8. Jun 21 06:10:41.796296 systemd[1]: Started sshd@8-10.128.0.41:22-147.75.109.163:59564.service - OpenSSH per-connection server daemon (147.75.109.163:59564). Jun 21 06:10:42.101073 sshd[4075]: Accepted publickey for core from 147.75.109.163 port 59564 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:10:42.103102 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:10:42.109957 systemd-logind[1496]: New session 9 of user core. Jun 21 06:10:42.116303 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 21 06:10:42.401776 sshd[4077]: Connection closed by 147.75.109.163 port 59564 Jun 21 06:10:42.403070 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Jun 21 06:10:42.408933 systemd[1]: sshd@8-10.128.0.41:22-147.75.109.163:59564.service: Deactivated successfully. Jun 21 06:10:42.412164 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 06:10:42.414117 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit. Jun 21 06:10:42.416759 systemd-logind[1496]: Removed session 9. Jun 21 06:10:47.456132 systemd[1]: Started sshd@9-10.128.0.41:22-147.75.109.163:47390.service - OpenSSH per-connection server daemon (147.75.109.163:47390). Jun 21 06:10:47.760984 sshd[4092]: Accepted publickey for core from 147.75.109.163 port 47390 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:10:47.762722 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:10:47.769122 systemd-logind[1496]: New session 10 of user core. Jun 21 06:10:47.774034 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 21 06:10:48.054158 sshd[4094]: Connection closed by 147.75.109.163 port 47390 Jun 21 06:10:48.055204 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Jun 21 06:10:48.061141 systemd[1]: sshd@9-10.128.0.41:22-147.75.109.163:47390.service: Deactivated successfully. Jun 21 06:10:48.064383 systemd[1]: session-10.scope: Deactivated successfully. Jun 21 06:10:48.066198 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. Jun 21 06:10:48.068614 systemd-logind[1496]: Removed session 10. Jun 21 06:10:53.108651 systemd[1]: Started sshd@10-10.128.0.41:22-147.75.109.163:47404.service - OpenSSH per-connection server daemon (147.75.109.163:47404). 
Jun 21 06:10:53.413201 sshd[4107]: Accepted publickey for core from 147.75.109.163 port 47404 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:10:53.415094 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:10:53.422793 systemd-logind[1496]: New session 11 of user core. Jun 21 06:10:53.429119 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 21 06:10:53.709853 sshd[4109]: Connection closed by 147.75.109.163 port 47404 Jun 21 06:10:53.710721 sshd-session[4107]: pam_unix(sshd:session): session closed for user core Jun 21 06:10:53.716039 systemd[1]: sshd@10-10.128.0.41:22-147.75.109.163:47404.service: Deactivated successfully. Jun 21 06:10:53.720724 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 06:10:53.724613 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. Jun 21 06:10:53.728003 systemd-logind[1496]: Removed session 11. Jun 21 06:10:53.768312 systemd[1]: Started sshd@11-10.128.0.41:22-147.75.109.163:47408.service - OpenSSH per-connection server daemon (147.75.109.163:47408). Jun 21 06:10:54.082882 sshd[4122]: Accepted publickey for core from 147.75.109.163 port 47408 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:10:54.084782 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:10:54.092819 systemd-logind[1496]: New session 12 of user core. Jun 21 06:10:54.099160 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 21 06:10:54.420170 sshd[4124]: Connection closed by 147.75.109.163 port 47408 Jun 21 06:10:54.421647 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Jun 21 06:10:54.428420 systemd[1]: sshd@11-10.128.0.41:22-147.75.109.163:47408.service: Deactivated successfully. Jun 21 06:10:54.431653 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 06:10:54.433151 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. Jun 21 06:10:54.435725 systemd-logind[1496]: Removed session 12. Jun 21 06:10:54.482171 systemd[1]: Started sshd@12-10.128.0.41:22-147.75.109.163:47424.service - OpenSSH per-connection server daemon (147.75.109.163:47424). Jun 21 06:10:54.793707 sshd[4134]: Accepted publickey for core from 147.75.109.163 port 47424 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:10:54.795435 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:10:54.802908 systemd-logind[1496]: New session 13 of user core. Jun 21 06:10:54.808080 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 21 06:10:55.090473 sshd[4136]: Connection closed by 147.75.109.163 port 47424 Jun 21 06:10:55.091341 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Jun 21 06:10:55.096284 systemd[1]: sshd@12-10.128.0.41:22-147.75.109.163:47424.service: Deactivated successfully. Jun 21 06:10:55.100249 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 06:10:55.103161 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit. Jun 21 06:10:55.105946 systemd-logind[1496]: Removed session 13. Jun 21 06:11:00.148528 systemd[1]: Started sshd@13-10.128.0.41:22-147.75.109.163:50940.service - OpenSSH per-connection server daemon (147.75.109.163:50940). 
Jun 21 06:11:00.447330 sshd[4148]: Accepted publickey for core from 147.75.109.163 port 50940 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:00.449106 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:00.455342 systemd-logind[1496]: New session 14 of user core. Jun 21 06:11:00.462105 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 21 06:11:00.741967 sshd[4150]: Connection closed by 147.75.109.163 port 50940 Jun 21 06:11:00.743489 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:00.749463 systemd[1]: sshd@13-10.128.0.41:22-147.75.109.163:50940.service: Deactivated successfully. Jun 21 06:11:00.752441 systemd[1]: session-14.scope: Deactivated successfully. Jun 21 06:11:00.753921 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit. Jun 21 06:11:00.756355 systemd-logind[1496]: Removed session 14. Jun 21 06:11:05.803370 systemd[1]: Started sshd@14-10.128.0.41:22-147.75.109.163:50956.service - OpenSSH per-connection server daemon (147.75.109.163:50956). Jun 21 06:11:06.106584 sshd[4163]: Accepted publickey for core from 147.75.109.163 port 50956 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:06.108391 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:06.115560 systemd-logind[1496]: New session 15 of user core. Jun 21 06:11:06.121041 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 21 06:11:06.392129 sshd[4165]: Connection closed by 147.75.109.163 port 50956 Jun 21 06:11:06.392928 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:06.398084 systemd[1]: sshd@14-10.128.0.41:22-147.75.109.163:50956.service: Deactivated successfully. Jun 21 06:11:06.401034 systemd[1]: session-15.scope: Deactivated successfully. Jun 21 06:11:06.404385 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit. Jun 21 06:11:06.406655 systemd-logind[1496]: Removed session 15. Jun 21 06:11:11.452263 systemd[1]: Started sshd@15-10.128.0.41:22-147.75.109.163:60246.service - OpenSSH per-connection server daemon (147.75.109.163:60246). Jun 21 06:11:11.760263 sshd[4177]: Accepted publickey for core from 147.75.109.163 port 60246 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:11.762200 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:11.769098 systemd-logind[1496]: New session 16 of user core. Jun 21 06:11:11.775052 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 21 06:11:12.048823 sshd[4179]: Connection closed by 147.75.109.163 port 60246 Jun 21 06:11:12.050165 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:12.056694 systemd[1]: sshd@15-10.128.0.41:22-147.75.109.163:60246.service: Deactivated successfully. Jun 21 06:11:12.059589 systemd[1]: session-16.scope: Deactivated successfully. Jun 21 06:11:12.061517 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit. Jun 21 06:11:12.063648 systemd-logind[1496]: Removed session 16. Jun 21 06:11:12.104172 systemd[1]: Started sshd@16-10.128.0.41:22-147.75.109.163:60252.service - OpenSSH per-connection server daemon (147.75.109.163:60252). 
Jun 21 06:11:12.406035 sshd[4191]: Accepted publickey for core from 147.75.109.163 port 60252 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:12.407467 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:12.413911 systemd-logind[1496]: New session 17 of user core. Jun 21 06:11:12.429132 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 21 06:11:12.750781 sshd[4193]: Connection closed by 147.75.109.163 port 60252 Jun 21 06:11:12.751685 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:12.757496 systemd[1]: sshd@16-10.128.0.41:22-147.75.109.163:60252.service: Deactivated successfully. Jun 21 06:11:12.760708 systemd[1]: session-17.scope: Deactivated successfully. Jun 21 06:11:12.762425 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit. Jun 21 06:11:12.764769 systemd-logind[1496]: Removed session 17. Jun 21 06:11:12.805404 systemd[1]: Started sshd@17-10.128.0.41:22-147.75.109.163:60262.service - OpenSSH per-connection server daemon (147.75.109.163:60262). Jun 21 06:11:13.111628 sshd[4202]: Accepted publickey for core from 147.75.109.163 port 60262 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:13.113142 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:13.119963 systemd-logind[1496]: New session 18 of user core. Jun 21 06:11:13.125047 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 21 06:11:14.913530 sshd[4204]: Connection closed by 147.75.109.163 port 60262 Jun 21 06:11:14.914375 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:14.921621 systemd[1]: sshd@17-10.128.0.41:22-147.75.109.163:60262.service: Deactivated successfully. Jun 21 06:11:14.926655 systemd[1]: session-18.scope: Deactivated successfully. Jun 21 06:11:14.927954 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit. Jun 21 06:11:14.933188 systemd-logind[1496]: Removed session 18. Jun 21 06:11:14.974972 systemd[1]: Started sshd@18-10.128.0.41:22-147.75.109.163:60266.service - OpenSSH per-connection server daemon (147.75.109.163:60266). Jun 21 06:11:15.280446 sshd[4221]: Accepted publickey for core from 147.75.109.163 port 60266 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:15.282277 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:15.290001 systemd-logind[1496]: New session 19 of user core. Jun 21 06:11:15.297057 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 21 06:11:15.723975 sshd[4223]: Connection closed by 147.75.109.163 port 60266 Jun 21 06:11:15.725274 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:15.731254 systemd[1]: sshd@18-10.128.0.41:22-147.75.109.163:60266.service: Deactivated successfully. Jun 21 06:11:15.734240 systemd[1]: session-19.scope: Deactivated successfully. Jun 21 06:11:15.735960 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit. Jun 21 06:11:15.738459 systemd-logind[1496]: Removed session 19. Jun 21 06:11:15.786499 systemd[1]: Started sshd@19-10.128.0.41:22-147.75.109.163:60268.service - OpenSSH per-connection server daemon (147.75.109.163:60268). 
Jun 21 06:11:16.099936 sshd[4235]: Accepted publickey for core from 147.75.109.163 port 60268 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:16.102045 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:16.108934 systemd-logind[1496]: New session 20 of user core. Jun 21 06:11:16.115094 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 21 06:11:16.385622 sshd[4237]: Connection closed by 147.75.109.163 port 60268 Jun 21 06:11:16.386370 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:16.391964 systemd[1]: sshd@19-10.128.0.41:22-147.75.109.163:60268.service: Deactivated successfully. Jun 21 06:11:16.394978 systemd[1]: session-20.scope: Deactivated successfully. Jun 21 06:11:16.397066 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit. Jun 21 06:11:16.399101 systemd-logind[1496]: Removed session 20. Jun 21 06:11:21.444433 systemd[1]: Started sshd@20-10.128.0.41:22-147.75.109.163:34330.service - OpenSSH per-connection server daemon (147.75.109.163:34330). Jun 21 06:11:21.749676 sshd[4249]: Accepted publickey for core from 147.75.109.163 port 34330 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:21.751555 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:21.758900 systemd-logind[1496]: New session 21 of user core. Jun 21 06:11:21.766065 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 21 06:11:22.034725 sshd[4251]: Connection closed by 147.75.109.163 port 34330 Jun 21 06:11:22.036051 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:22.041691 systemd[1]: sshd@20-10.128.0.41:22-147.75.109.163:34330.service: Deactivated successfully. Jun 21 06:11:22.045337 systemd[1]: session-21.scope: Deactivated successfully. Jun 21 06:11:22.046771 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit. Jun 21 06:11:22.049495 systemd-logind[1496]: Removed session 21. Jun 21 06:11:27.088883 systemd[1]: Started sshd@21-10.128.0.41:22-147.75.109.163:53994.service - OpenSSH per-connection server daemon (147.75.109.163:53994). Jun 21 06:11:27.393447 sshd[4268]: Accepted publickey for core from 147.75.109.163 port 53994 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:27.395255 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:27.402907 systemd-logind[1496]: New session 22 of user core. Jun 21 06:11:27.409123 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 21 06:11:27.686757 sshd[4270]: Connection closed by 147.75.109.163 port 53994 Jun 21 06:11:27.687622 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:27.693193 systemd[1]: sshd@21-10.128.0.41:22-147.75.109.163:53994.service: Deactivated successfully. Jun 21 06:11:27.696260 systemd[1]: session-22.scope: Deactivated successfully. Jun 21 06:11:27.697772 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit. Jun 21 06:11:27.700152 systemd-logind[1496]: Removed session 22. Jun 21 06:11:32.748431 systemd[1]: Started sshd@22-10.128.0.41:22-147.75.109.163:54002.service - OpenSSH per-connection server daemon (147.75.109.163:54002). 
Jun 21 06:11:33.068688 sshd[4282]: Accepted publickey for core from 147.75.109.163 port 54002 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:33.070557 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:33.076951 systemd-logind[1496]: New session 23 of user core. Jun 21 06:11:33.093127 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 21 06:11:33.360197 sshd[4284]: Connection closed by 147.75.109.163 port 54002 Jun 21 06:11:33.361433 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:33.367369 systemd[1]: sshd@22-10.128.0.41:22-147.75.109.163:54002.service: Deactivated successfully. Jun 21 06:11:33.370398 systemd[1]: session-23.scope: Deactivated successfully. Jun 21 06:11:33.371911 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit. Jun 21 06:11:33.374622 systemd-logind[1496]: Removed session 23. Jun 21 06:11:33.421962 systemd[1]: Started sshd@23-10.128.0.41:22-147.75.109.163:54006.service - OpenSSH per-connection server daemon (147.75.109.163:54006). Jun 21 06:11:33.734783 sshd[4296]: Accepted publickey for core from 147.75.109.163 port 54006 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:33.736596 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:33.743791 systemd-logind[1496]: New session 24 of user core. Jun 21 06:11:33.749114 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 21 06:11:35.446936 containerd[1569]: time="2025-06-21T06:11:35.446820033Z" level=info msg="StopContainer for \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\" with timeout 30 (s)" Jun 21 06:11:35.450966 containerd[1569]: time="2025-06-21T06:11:35.450913247Z" level=info msg="Stop container \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\" with signal terminated" Jun 21 06:11:35.471728 systemd[1]: cri-containerd-b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26.scope: Deactivated successfully. 
Jun 21 06:11:35.474414 containerd[1569]: time="2025-06-21T06:11:35.474309459Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 06:11:35.476510 containerd[1569]: time="2025-06-21T06:11:35.476470647Z" level=info msg="received exit event container_id:\"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\" id:\"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\" pid:3275 exited_at:{seconds:1750486295 nanos:475577432}" Jun 21 06:11:35.477051 containerd[1569]: time="2025-06-21T06:11:35.477013146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\" id:\"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\" pid:3275 exited_at:{seconds:1750486295 nanos:475577432}" Jun 21 06:11:35.484026 containerd[1569]: time="2025-06-21T06:11:35.483974252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" id:\"b407dc3eb3cfbf87052c1da53ad9d8a760aac7518f90947b41513306ee8159ca\" pid:4320 exited_at:{seconds:1750486295 nanos:483360904}" Jun 21 06:11:35.487240 containerd[1569]: time="2025-06-21T06:11:35.487145700Z" level=info msg="StopContainer for \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" with timeout 2 (s)" Jun 21 06:11:35.487720 containerd[1569]: time="2025-06-21T06:11:35.487647452Z" level=info msg="Stop container \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" with signal terminated" Jun 21 06:11:35.504405 systemd-networkd[1454]: lxc_health: Link DOWN Jun 21 06:11:35.504422 systemd-networkd[1454]: lxc_health: Lost carrier Jun 21 06:11:35.532377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26-rootfs.mount: Deactivated successfully. Jun 21 06:11:35.540446 systemd[1]: cri-containerd-5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07.scope: Deactivated successfully. Jun 21 06:11:35.541380 systemd[1]: cri-containerd-5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07.scope: Consumed 9.385s CPU time, 128M memory peak, 128K read from disk, 13.3M written to disk. 
Jun 21 06:11:35.545247 containerd[1569]: time="2025-06-21T06:11:35.545184244Z" level=info msg="received exit event container_id:\"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" id:\"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" pid:3385 exited_at:{seconds:1750486295 nanos:544785171}" Jun 21 06:11:35.545804 containerd[1569]: time="2025-06-21T06:11:35.545626687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" id:\"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" pid:3385 exited_at:{seconds:1750486295 nanos:544785171}" Jun 21 06:11:35.556853 containerd[1569]: time="2025-06-21T06:11:35.556709869Z" level=info msg="StopContainer for \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\" returns successfully" Jun 21 06:11:35.558762 containerd[1569]: time="2025-06-21T06:11:35.558727799Z" level=info msg="StopPodSandbox for \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\"" Jun 21 06:11:35.559763 containerd[1569]: time="2025-06-21T06:11:35.559676360Z" level=info msg="Container to stop \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:11:35.586602 systemd[1]: cri-containerd-ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f.scope: Deactivated successfully. Jun 21 06:11:35.595525 containerd[1569]: time="2025-06-21T06:11:35.595474889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\" id:\"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\" pid:2939 exit_status:137 exited_at:{seconds:1750486295 nanos:595050052}" Jun 21 06:11:35.603171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07-rootfs.mount: Deactivated successfully. 
Jun 21 06:11:35.615983 containerd[1569]: time="2025-06-21T06:11:35.615942983Z" level=info msg="StopContainer for \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" returns successfully" Jun 21 06:11:35.617072 containerd[1569]: time="2025-06-21T06:11:35.617013355Z" level=info msg="StopPodSandbox for \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\"" Jun 21 06:11:35.617390 containerd[1569]: time="2025-06-21T06:11:35.617361722Z" level=info msg="Container to stop \"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:11:35.617541 containerd[1569]: time="2025-06-21T06:11:35.617521255Z" level=info msg="Container to stop \"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:11:35.617714 containerd[1569]: time="2025-06-21T06:11:35.617643193Z" level=info msg="Container to stop \"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:11:35.617714 containerd[1569]: time="2025-06-21T06:11:35.617666682Z" level=info msg="Container to stop \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:11:35.617714 containerd[1569]: time="2025-06-21T06:11:35.617682102Z" level=info msg="Container to stop \"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:11:35.631328 systemd[1]: cri-containerd-86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5.scope: Deactivated successfully. Jun 21 06:11:35.655115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f-rootfs.mount: Deactivated successfully. 
Jun 21 06:11:35.660578 containerd[1569]: time="2025-06-21T06:11:35.659035255Z" level=info msg="shim disconnected" id=ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f namespace=k8s.io Jun 21 06:11:35.660578 containerd[1569]: time="2025-06-21T06:11:35.660245370Z" level=warning msg="cleaning up after shim disconnected" id=ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f namespace=k8s.io Jun 21 06:11:35.660578 containerd[1569]: time="2025-06-21T06:11:35.660267765Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 21 06:11:35.682428 containerd[1569]: time="2025-06-21T06:11:35.682376510Z" level=info msg="received exit event sandbox_id:\"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\" exit_status:137 exited_at:{seconds:1750486295 nanos:595050052}" Jun 21 06:11:35.683038 containerd[1569]: time="2025-06-21T06:11:35.682998126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" id:\"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" pid:2883 exit_status:137 exited_at:{seconds:1750486295 nanos:638133502}" Jun 21 06:11:35.683773 containerd[1569]: time="2025-06-21T06:11:35.683522571Z" level=info msg="TearDown network for sandbox \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\" successfully" Jun 21 06:11:35.683773 containerd[1569]: time="2025-06-21T06:11:35.683675787Z" level=info msg="StopPodSandbox for \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\" returns successfully" Jun 21 06:11:35.685561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5-rootfs.mount: Deactivated successfully. Jun 21 06:11:35.692857 containerd[1569]: time="2025-06-21T06:11:35.692038093Z" level=info msg="received exit event sandbox_id:\"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" exit_status:137 exited_at:{seconds:1750486295 nanos:638133502}" Jun 21 06:11:35.693039 containerd[1569]: time="2025-06-21T06:11:35.692990709Z" level=info msg="TearDown network for sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" successfully" Jun 21 06:11:35.693039 containerd[1569]: time="2025-06-21T06:11:35.693017688Z" level=info msg="StopPodSandbox for \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" returns successfully" Jun 21 06:11:35.693213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f-shm.mount: Deactivated successfully. 
Jun 21 06:11:35.695235 containerd[1569]: time="2025-06-21T06:11:35.694649075Z" level=info msg="shim disconnected" id=86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5 namespace=k8s.io Jun 21 06:11:35.695235 containerd[1569]: time="2025-06-21T06:11:35.694678186Z" level=warning msg="cleaning up after shim disconnected" id=86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5 namespace=k8s.io Jun 21 06:11:35.695235 containerd[1569]: time="2025-06-21T06:11:35.694691065Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 21 06:11:35.813632 kubelet[2727]: I0621 06:11:35.813483 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-bpf-maps\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.813632 kubelet[2727]: I0621 06:11:35.813545 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-hostproc\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.813632 kubelet[2727]: I0621 06:11:35.813574 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-host-proc-sys-net\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.813632 kubelet[2727]: I0621 06:11:35.813598 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-xtables-lock\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.813632 kubelet[2727]: I0621 06:11:35.813634 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-config-path\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.814433 kubelet[2727]: I0621 06:11:35.813659 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-run\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.814433 kubelet[2727]: I0621 06:11:35.813683 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-host-proc-sys-kernel\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.814433 kubelet[2727]: I0621 06:11:35.813710 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-cgroup\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.814433 kubelet[2727]: I0621 06:11:35.813753 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bjct\" (UniqueName: 
\"kubernetes.io/projected/41f16cf8-acc1-4aa5-b4b7-2a3847864c38-kube-api-access-2bjct\") pod \"41f16cf8-acc1-4aa5-b4b7-2a3847864c38\" (UID: \"41f16cf8-acc1-4aa5-b4b7-2a3847864c38\") " Jun 21 06:11:35.814433 kubelet[2727]: I0621 06:11:35.813782 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41f16cf8-acc1-4aa5-b4b7-2a3847864c38-cilium-config-path\") pod \"41f16cf8-acc1-4aa5-b4b7-2a3847864c38\" (UID: \"41f16cf8-acc1-4aa5-b4b7-2a3847864c38\") " Jun 21 06:11:35.814433 kubelet[2727]: I0621 06:11:35.813813 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/543b025b-b621-4694-abff-fb359d6c0ca6-hubble-tls\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.814750 kubelet[2727]: I0621 06:11:35.813858 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-lib-modules\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.814750 kubelet[2727]: I0621 06:11:35.813886 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgpcz\" (UniqueName: \"kubernetes.io/projected/543b025b-b621-4694-abff-fb359d6c0ca6-kube-api-access-mgpcz\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.814750 kubelet[2727]: I0621 06:11:35.813915 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/543b025b-b621-4694-abff-fb359d6c0ca6-clustermesh-secrets\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.814750 kubelet[2727]: I0621 06:11:35.813944 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cni-path\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.814750 kubelet[2727]: I0621 06:11:35.813969 2727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-etc-cni-netd\") pod \"543b025b-b621-4694-abff-fb359d6c0ca6\" (UID: \"543b025b-b621-4694-abff-fb359d6c0ca6\") " Jun 21 06:11:35.814750 kubelet[2727]: I0621 06:11:35.814059 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 06:11:35.815084 kubelet[2727]: I0621 06:11:35.814113 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-hostproc" (OuterVolumeSpecName: "hostproc") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 06:11:35.815084 kubelet[2727]: I0621 06:11:35.814137 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 06:11:35.815084 kubelet[2727]: I0621 06:11:35.814159 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 06:11:35.816868 kubelet[2727]: I0621 06:11:35.815274 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 06:11:35.818563 kubelet[2727]: I0621 06:11:35.818529 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 06:11:35.818770 kubelet[2727]: I0621 06:11:35.818730 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 06:11:35.818995 kubelet[2727]: I0621 06:11:35.818940 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 06:11:35.824684 kubelet[2727]: I0621 06:11:35.819272 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 06:11:35.827659 kubelet[2727]: I0621 06:11:35.827452 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cni-path" (OuterVolumeSpecName: "cni-path") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 21 06:11:35.828760 kubelet[2727]: I0621 06:11:35.828613 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 21 06:11:35.829124 kubelet[2727]: I0621 06:11:35.829091 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f16cf8-acc1-4aa5-b4b7-2a3847864c38-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "41f16cf8-acc1-4aa5-b4b7-2a3847864c38" (UID: "41f16cf8-acc1-4aa5-b4b7-2a3847864c38"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 21 06:11:35.830538 kubelet[2727]: I0621 06:11:35.830506 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543b025b-b621-4694-abff-fb359d6c0ca6-kube-api-access-mgpcz" (OuterVolumeSpecName: "kube-api-access-mgpcz") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "kube-api-access-mgpcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 06:11:35.831001 kubelet[2727]: I0621 06:11:35.830872 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543b025b-b621-4694-abff-fb359d6c0ca6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 06:11:35.831088 kubelet[2727]: I0621 06:11:35.831006 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543b025b-b621-4694-abff-fb359d6c0ca6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "543b025b-b621-4694-abff-fb359d6c0ca6" (UID: "543b025b-b621-4694-abff-fb359d6c0ca6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 21 06:11:35.838338 kubelet[2727]: I0621 06:11:35.838236 2727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f16cf8-acc1-4aa5-b4b7-2a3847864c38-kube-api-access-2bjct" (OuterVolumeSpecName: "kube-api-access-2bjct") pod "41f16cf8-acc1-4aa5-b4b7-2a3847864c38" (UID: "41f16cf8-acc1-4aa5-b4b7-2a3847864c38"). InnerVolumeSpecName "kube-api-access-2bjct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 21 06:11:35.914614 kubelet[2727]: I0621 06:11:35.914548 2727 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-hostproc\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.914614 kubelet[2727]: I0621 06:11:35.914595 2727 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-bpf-maps\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.914614 kubelet[2727]: I0621 06:11:35.914615 2727 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-config-path\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.914614 kubelet[2727]: I0621 06:11:35.914631 2727 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-host-proc-sys-net\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.914950 kubelet[2727]: I0621 06:11:35.914657 2727 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-xtables-lock\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.914950 kubelet[2727]: I0621 06:11:35.914672 2727 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-host-proc-sys-kernel\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.914950 kubelet[2727]: I0621 06:11:35.914686 2727 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-run\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.914950 kubelet[2727]: I0621 06:11:35.914701 2727 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/543b025b-b621-4694-abff-fb359d6c0ca6-hubble-tls\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.914950 kubelet[2727]: I0621 06:11:35.914714 2727 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cilium-cgroup\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.914950 kubelet[2727]: I0621 06:11:35.914729 2727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bjct\" (UniqueName: \"kubernetes.io/projected/41f16cf8-acc1-4aa5-b4b7-2a3847864c38-kube-api-access-2bjct\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.914950 kubelet[2727]: I0621 06:11:35.914747 2727 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41f16cf8-acc1-4aa5-b4b7-2a3847864c38-cilium-config-path\") on node 
\"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.915163 kubelet[2727]: I0621 06:11:35.914761 2727 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-lib-modules\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.915163 kubelet[2727]: I0621 06:11:35.914776 2727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgpcz\" (UniqueName: \"kubernetes.io/projected/543b025b-b621-4694-abff-fb359d6c0ca6-kube-api-access-mgpcz\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.915163 kubelet[2727]: I0621 06:11:35.914790 2727 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/543b025b-b621-4694-abff-fb359d6c0ca6-clustermesh-secrets\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.915163 kubelet[2727]: I0621 06:11:35.914805 2727 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-etc-cni-netd\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.915163 kubelet[2727]: I0621 06:11:35.914822 2727 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/543b025b-b621-4694-abff-fb359d6c0ca6-cni-path\") on node \"ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal\" DevicePath \"\"" Jun 21 06:11:35.930861 kubelet[2727]: I0621 06:11:35.930807 2727 scope.go:117] "RemoveContainer" containerID="b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26" Jun 21 06:11:35.938873 containerd[1569]: time="2025-06-21T06:11:35.938040766Z" level=info msg="RemoveContainer for \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\"" Jun 21 06:11:35.942113 systemd[1]: Removed slice kubepods-besteffort-pod41f16cf8_acc1_4aa5_b4b7_2a3847864c38.slice - libcontainer container kubepods-besteffort-pod41f16cf8_acc1_4aa5_b4b7_2a3847864c38.slice. Jun 21 06:11:35.947293 containerd[1569]: time="2025-06-21T06:11:35.947252964Z" level=info msg="RemoveContainer for \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\" returns successfully" Jun 21 06:11:35.949808 kubelet[2727]: I0621 06:11:35.949594 2727 scope.go:117] "RemoveContainer" containerID="b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26" Jun 21 06:11:35.950301 containerd[1569]: time="2025-06-21T06:11:35.950043208Z" level=error msg="ContainerStatus for \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\": not found" Jun 21 06:11:35.952270 systemd[1]: Removed slice kubepods-burstable-pod543b025b_b621_4694_abff_fb359d6c0ca6.slice - libcontainer container kubepods-burstable-pod543b025b_b621_4694_abff_fb359d6c0ca6.slice. Jun 21 06:11:35.952618 systemd[1]: kubepods-burstable-pod543b025b_b621_4694_abff_fb359d6c0ca6.slice: Consumed 9.534s CPU time, 128.4M memory peak, 128K read from disk, 13.3M written to disk. 
Jun 21 06:11:35.953693 kubelet[2727]: E0621 06:11:35.952576 2727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\": not found" containerID="b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26" Jun 21 06:11:35.953693 kubelet[2727]: I0621 06:11:35.952660 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26"} err="failed to get container status \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3aaeb29e091252758ffc50bd26ae7754542ad5fe56a7f6abed51a870cfc8f26\": not found" Jun 21 06:11:35.953693 kubelet[2727]: I0621 06:11:35.952774 2727 scope.go:117] "RemoveContainer" containerID="5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07" Jun 21 06:11:35.958058 containerd[1569]: time="2025-06-21T06:11:35.957924430Z" level=info msg="RemoveContainer for \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\"" Jun 21 06:11:35.967393 containerd[1569]: time="2025-06-21T06:11:35.967192563Z" level=info msg="RemoveContainer for \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" returns successfully" Jun 21 06:11:35.967590 kubelet[2727]: I0621 06:11:35.967491 2727 scope.go:117] "RemoveContainer" containerID="53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f" Jun 21 06:11:35.971750 containerd[1569]: time="2025-06-21T06:11:35.971693702Z" level=info msg="RemoveContainer for \"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\"" Jun 21 06:11:35.979197 containerd[1569]: time="2025-06-21T06:11:35.979149753Z" level=info msg="RemoveContainer for \"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\" returns successfully" Jun 21 06:11:35.979418 kubelet[2727]: I0621 06:11:35.979394 2727 scope.go:117] "RemoveContainer" containerID="292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134" Jun 21 06:11:35.984505 containerd[1569]: time="2025-06-21T06:11:35.984468579Z" level=info msg="RemoveContainer for \"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\"" Jun 21 06:11:35.992305 containerd[1569]: time="2025-06-21T06:11:35.992267038Z" level=info msg="RemoveContainer for \"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\" returns successfully" Jun 21 06:11:35.992651 kubelet[2727]: I0621 06:11:35.992543 2727 scope.go:117] "RemoveContainer" containerID="b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440" Jun 21 06:11:35.994494 containerd[1569]: time="2025-06-21T06:11:35.994461156Z" level=info msg="RemoveContainer for \"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\"" Jun 21 06:11:35.998643 containerd[1569]: time="2025-06-21T06:11:35.998574024Z" level=info msg="RemoveContainer for \"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\" returns successfully" Jun 21 06:11:35.998913 kubelet[2727]: I0621 06:11:35.998807 2727 scope.go:117] "RemoveContainer" containerID="0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1" Jun 21 06:11:36.000938 containerd[1569]: time="2025-06-21T06:11:36.000905618Z" level=info msg="RemoveContainer for \"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\"" Jun 21 06:11:36.004644 containerd[1569]: 
time="2025-06-21T06:11:36.004563269Z" level=info msg="RemoveContainer for \"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\" returns successfully" Jun 21 06:11:36.004819 kubelet[2727]: I0621 06:11:36.004798 2727 scope.go:117] "RemoveContainer" containerID="5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07" Jun 21 06:11:36.005117 containerd[1569]: time="2025-06-21T06:11:36.005047699Z" level=error msg="ContainerStatus for \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\": not found" Jun 21 06:11:36.005232 kubelet[2727]: E0621 06:11:36.005205 2727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\": not found" containerID="5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07" Jun 21 06:11:36.005303 kubelet[2727]: I0621 06:11:36.005243 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07"} err="failed to get container status \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d6871c97d8fd2d9588fd78f710c7401822fcefc7cf38f9e50b106b966eb4b07\": not found" Jun 21 06:11:36.005303 kubelet[2727]: I0621 06:11:36.005274 2727 scope.go:117] "RemoveContainer" containerID="53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f" Jun 21 06:11:36.005544 containerd[1569]: time="2025-06-21T06:11:36.005499892Z" level=error msg="ContainerStatus for \"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\": not found" Jun 21 06:11:36.005775 kubelet[2727]: E0621 06:11:36.005715 2727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\": not found" containerID="53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f" Jun 21 06:11:36.005775 kubelet[2727]: I0621 06:11:36.005755 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f"} err="failed to get container status \"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\": rpc error: code = NotFound desc = an error occurred when try to find container \"53e5e0e65073d88f58900dc8565f660a25c26f2efb8823d03bc765bdba4a405f\": not found" Jun 21 06:11:36.006052 kubelet[2727]: I0621 06:11:36.005786 2727 scope.go:117] "RemoveContainer" containerID="292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134" Jun 21 06:11:36.006114 containerd[1569]: time="2025-06-21T06:11:36.006082757Z" level=error msg="ContainerStatus for \"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\": not found" Jun 21 06:11:36.006317 kubelet[2727]: E0621 
06:11:36.006228 2727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\": not found" containerID="292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134" Jun 21 06:11:36.006317 kubelet[2727]: I0621 06:11:36.006261 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134"} err="failed to get container status \"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\": rpc error: code = NotFound desc = an error occurred when try to find container \"292c52b3134587043f2edb48ae26d18dcbb2ba6b418e504ce9889917885ac134\": not found" Jun 21 06:11:36.006317 kubelet[2727]: I0621 06:11:36.006289 2727 scope.go:117] "RemoveContainer" containerID="b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440" Jun 21 06:11:36.006627 containerd[1569]: time="2025-06-21T06:11:36.006587793Z" level=error msg="ContainerStatus for \"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\": not found" Jun 21 06:11:36.006859 kubelet[2727]: E0621 06:11:36.006801 2727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\": not found" containerID="b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440" Jun 21 06:11:36.006991 kubelet[2727]: I0621 06:11:36.006944 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440"} err="failed to get container status \"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\": rpc error: code = NotFound desc = an error occurred when try to find container \"b84dab46e7150bad04656e04b128e0853df5b69eae0c474b9f2bfb40024df440\": not found" Jun 21 06:11:36.006991 kubelet[2727]: I0621 06:11:36.006991 2727 scope.go:117] "RemoveContainer" containerID="0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1" Jun 21 06:11:36.007300 containerd[1569]: time="2025-06-21T06:11:36.007232672Z" level=error msg="ContainerStatus for \"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\": not found" Jun 21 06:11:36.007464 kubelet[2727]: E0621 06:11:36.007431 2727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\": not found" containerID="0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1" Jun 21 06:11:36.007464 kubelet[2727]: I0621 06:11:36.007474 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1"} err="failed to get container status \"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"0526f86534da25cae856f545a0f5dc7b14d6bca787b390452a82df5f2e2ff0d1\": not found" Jun 21 06:11:36.458291 kubelet[2727]: I0621 06:11:36.458210 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41f16cf8-acc1-4aa5-b4b7-2a3847864c38" path="/var/lib/kubelet/pods/41f16cf8-acc1-4aa5-b4b7-2a3847864c38/volumes" Jun 21 06:11:36.458897 kubelet[2727]: I0621 06:11:36.458814 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543b025b-b621-4694-abff-fb359d6c0ca6" path="/var/lib/kubelet/pods/543b025b-b621-4694-abff-fb359d6c0ca6/volumes" Jun 21 06:11:36.526993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5-shm.mount: Deactivated successfully. Jun 21 06:11:36.527712 systemd[1]: var-lib-kubelet-pods-41f16cf8\x2dacc1\x2d4aa5\x2db4b7\x2d2a3847864c38-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2bjct.mount: Deactivated successfully. Jun 21 06:11:36.527872 systemd[1]: var-lib-kubelet-pods-543b025b\x2db621\x2d4694\x2dabff\x2dfb359d6c0ca6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmgpcz.mount: Deactivated successfully. Jun 21 06:11:36.528005 systemd[1]: var-lib-kubelet-pods-543b025b\x2db621\x2d4694\x2dabff\x2dfb359d6c0ca6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 21 06:11:36.528124 systemd[1]: var-lib-kubelet-pods-543b025b\x2db621\x2d4694\x2dabff\x2dfb359d6c0ca6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 21 06:11:37.418080 sshd[4298]: Connection closed by 147.75.109.163 port 54006 Jun 21 06:11:37.419009 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:37.425905 systemd[1]: sshd@23-10.128.0.41:22-147.75.109.163:54006.service: Deactivated successfully. Jun 21 06:11:37.429769 systemd[1]: session-24.scope: Deactivated successfully. Jun 21 06:11:37.435440 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit. Jun 21 06:11:37.438164 systemd-logind[1496]: Removed session 24. Jun 21 06:11:37.478867 systemd[1]: Started sshd@24-10.128.0.41:22-147.75.109.163:57174.service - OpenSSH per-connection server daemon (147.75.109.163:57174). Jun 21 06:11:37.783085 sshd[4452]: Accepted publickey for core from 147.75.109.163 port 57174 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:37.784988 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:37.792900 systemd-logind[1496]: New session 25 of user core. Jun 21 06:11:37.798075 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 21 06:11:38.042234 ntpd[1490]: Deleting interface #11 lxc_health, fe80::f8a9:5bff:febb:f823%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Jun 21 06:11:38.043407 ntpd[1490]: 21 Jun 06:11:38 ntpd[1490]: Deleting interface #11 lxc_health, fe80::f8a9:5bff:febb:f823%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Jun 21 06:11:38.404026 containerd[1569]: time="2025-06-21T06:11:38.403701569Z" level=info msg="StopPodSandbox for \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\"" Jun 21 06:11:38.405223 containerd[1569]: time="2025-06-21T06:11:38.404687864Z" level=info msg="TearDown network for sandbox \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\" successfully" Jun 21 06:11:38.405223 containerd[1569]: time="2025-06-21T06:11:38.404738800Z" level=info msg="StopPodSandbox for \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\" returns successfully" Jun 21 06:11:38.406913 containerd[1569]: time="2025-06-21T06:11:38.405564404Z" level=info msg="RemovePodSandbox for \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\"" Jun 21 06:11:38.406913 containerd[1569]: time="2025-06-21T06:11:38.405599260Z" level=info msg="Forcibly stopping sandbox \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\"" Jun 21 06:11:38.406913 containerd[1569]: time="2025-06-21T06:11:38.405706129Z" level=info msg="TearDown network for sandbox \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\" successfully" Jun 21 06:11:38.408015 containerd[1569]: time="2025-06-21T06:11:38.407976713Z" level=info msg="Ensure that sandbox ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f in task-service has been cleanup successfully" Jun 21 06:11:38.420290 containerd[1569]: time="2025-06-21T06:11:38.420222870Z" level=info msg="RemovePodSandbox \"ff907af9e402b60186622d267987c044814dcf85b4d98641102e7620e9c9381f\" returns successfully" Jun 21 06:11:38.422250 containerd[1569]: time="2025-06-21T06:11:38.422213491Z" level=info msg="StopPodSandbox for \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\"" Jun 21 06:11:38.422546 containerd[1569]: time="2025-06-21T06:11:38.422521638Z" level=info msg="TearDown network for sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" successfully" Jun 21 06:11:38.422779 containerd[1569]: time="2025-06-21T06:11:38.422729215Z" level=info msg="StopPodSandbox for \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" returns successfully" Jun 21 06:11:38.425874 containerd[1569]: time="2025-06-21T06:11:38.424457437Z" level=info msg="RemovePodSandbox for \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\"" Jun 21 06:11:38.426914 containerd[1569]: time="2025-06-21T06:11:38.425977136Z" level=info msg="Forcibly stopping sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\"" Jun 21 06:11:38.426914 containerd[1569]: time="2025-06-21T06:11:38.426085877Z" level=info msg="TearDown network for sandbox \"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" successfully" Jun 21 06:11:38.428651 containerd[1569]: time="2025-06-21T06:11:38.428599424Z" level=info msg="Ensure that sandbox 86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5 in task-service has been cleanup successfully" Jun 21 06:11:38.433131 containerd[1569]: time="2025-06-21T06:11:38.433102830Z" level=info msg="RemovePodSandbox 
\"86bd8a15599cfbb02309edb96e4e22b9c22b3aed8e5957c8cd76d64a8c3a4bb5\" returns successfully" Jun 21 06:11:38.580580 kubelet[2727]: E0621 06:11:38.580538 2727 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 21 06:11:39.217996 sshd[4454]: Connection closed by 147.75.109.163 port 57174 Jun 21 06:11:39.218974 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:39.230269 systemd[1]: sshd@24-10.128.0.41:22-147.75.109.163:57174.service: Deactivated successfully. Jun 21 06:11:39.238902 systemd[1]: session-25.scope: Deactivated successfully. Jun 21 06:11:39.239968 systemd[1]: session-25.scope: Consumed 1.192s CPU time, 23.9M memory peak. Jun 21 06:11:39.242922 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit. Jun 21 06:11:39.246040 systemd-logind[1496]: Removed session 25. Jun 21 06:11:39.255986 kubelet[2727]: E0621 06:11:39.255686 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="543b025b-b621-4694-abff-fb359d6c0ca6" containerName="mount-bpf-fs" Jun 21 06:11:39.255986 kubelet[2727]: E0621 06:11:39.255725 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="543b025b-b621-4694-abff-fb359d6c0ca6" containerName="cilium-agent" Jun 21 06:11:39.255986 kubelet[2727]: E0621 06:11:39.255737 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="543b025b-b621-4694-abff-fb359d6c0ca6" containerName="apply-sysctl-overwrites" Jun 21 06:11:39.255986 kubelet[2727]: E0621 06:11:39.255747 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41f16cf8-acc1-4aa5-b4b7-2a3847864c38" containerName="cilium-operator" Jun 21 06:11:39.255986 kubelet[2727]: E0621 06:11:39.255757 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="543b025b-b621-4694-abff-fb359d6c0ca6" containerName="clean-cilium-state" Jun 21 06:11:39.255986 kubelet[2727]: E0621 06:11:39.255769 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="543b025b-b621-4694-abff-fb359d6c0ca6" containerName="mount-cgroup" Jun 21 06:11:39.255986 kubelet[2727]: I0621 06:11:39.255806 2727 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f16cf8-acc1-4aa5-b4b7-2a3847864c38" containerName="cilium-operator" Jun 21 06:11:39.255986 kubelet[2727]: I0621 06:11:39.255817 2727 memory_manager.go:354] "RemoveStaleState removing state" podUID="543b025b-b621-4694-abff-fb359d6c0ca6" containerName="cilium-agent" Jun 21 06:11:39.287110 systemd[1]: Started sshd@25-10.128.0.41:22-147.75.109.163:57184.service - OpenSSH per-connection server daemon (147.75.109.163:57184). Jun 21 06:11:39.296094 systemd[1]: Created slice kubepods-burstable-pod9c4bda98_931f_4703_a020_deb9c5f6fd8a.slice - libcontainer container kubepods-burstable-pod9c4bda98_931f_4703_a020_deb9c5f6fd8a.slice. 
Jun 21 06:11:39.339198 kubelet[2727]: I0621 06:11:39.338435 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9c4bda98-931f-4703-a020-deb9c5f6fd8a-cilium-ipsec-secrets\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.339198 kubelet[2727]: I0621 06:11:39.338853 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c4bda98-931f-4703-a020-deb9c5f6fd8a-hubble-tls\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.339198 kubelet[2727]: I0621 06:11:39.338927 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c4bda98-931f-4703-a020-deb9c5f6fd8a-lib-modules\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.339198 kubelet[2727]: I0621 06:11:39.338959 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c4bda98-931f-4703-a020-deb9c5f6fd8a-xtables-lock\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.339198 kubelet[2727]: I0621 06:11:39.339007 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c4bda98-931f-4703-a020-deb9c5f6fd8a-clustermesh-secrets\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.339198 kubelet[2727]: I0621 06:11:39.339038 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c4bda98-931f-4703-a020-deb9c5f6fd8a-etc-cni-netd\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.339621 kubelet[2727]: I0621 06:11:39.339085 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx8v8\" (UniqueName: \"kubernetes.io/projected/9c4bda98-931f-4703-a020-deb9c5f6fd8a-kube-api-access-cx8v8\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.339621 kubelet[2727]: I0621 06:11:39.339116 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c4bda98-931f-4703-a020-deb9c5f6fd8a-cni-path\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.339621 kubelet[2727]: I0621 06:11:39.339165 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c4bda98-931f-4703-a020-deb9c5f6fd8a-host-proc-sys-kernel\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.339621 kubelet[2727]: I0621 06:11:39.339208 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c4bda98-931f-4703-a020-deb9c5f6fd8a-bpf-maps\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.339621 kubelet[2727]: I0621 06:11:39.339255 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c4bda98-931f-4703-a020-deb9c5f6fd8a-cilium-cgroup\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.339621 kubelet[2727]: I0621 06:11:39.339283 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c4bda98-931f-4703-a020-deb9c5f6fd8a-cilium-config-path\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.340972 kubelet[2727]: I0621 06:11:39.339307 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c4bda98-931f-4703-a020-deb9c5f6fd8a-hostproc\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.340972 kubelet[2727]: I0621 06:11:39.339335 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c4bda98-931f-4703-a020-deb9c5f6fd8a-host-proc-sys-net\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.340972 kubelet[2727]: I0621 06:11:39.339363 2727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c4bda98-931f-4703-a020-deb9c5f6fd8a-cilium-run\") pod \"cilium-9ss2w\" (UID: \"9c4bda98-931f-4703-a020-deb9c5f6fd8a\") " pod="kube-system/cilium-9ss2w" Jun 21 06:11:39.608223 containerd[1569]: time="2025-06-21T06:11:39.607798510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ss2w,Uid:9c4bda98-931f-4703-a020-deb9c5f6fd8a,Namespace:kube-system,Attempt:0,}" Jun 21 06:11:39.639365 containerd[1569]: time="2025-06-21T06:11:39.639117343Z" level=info msg="connecting to shim f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c" address="unix:///run/containerd/s/9122f3187a56db9311041da8c17b6d15b1ed56ca6907749ff08e58e73be10326" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:11:39.644613 sshd[4468]: Accepted publickey for core from 147.75.109.163 port 57184 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:39.647731 sshd-session[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:39.657919 systemd-logind[1496]: New session 26 of user core. Jun 21 06:11:39.663110 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 21 06:11:39.682021 systemd[1]: Started cri-containerd-f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c.scope - libcontainer container f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c. 
Jun 21 06:11:39.717487 containerd[1569]: time="2025-06-21T06:11:39.717362882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ss2w,Uid:9c4bda98-931f-4703-a020-deb9c5f6fd8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c\"" Jun 21 06:11:39.724055 containerd[1569]: time="2025-06-21T06:11:39.723993324Z" level=info msg="CreateContainer within sandbox \"f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 06:11:39.735162 containerd[1569]: time="2025-06-21T06:11:39.735109679Z" level=info msg="Container 2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:39.744315 containerd[1569]: time="2025-06-21T06:11:39.744252253Z" level=info msg="CreateContainer within sandbox \"f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c\"" Jun 21 06:11:39.745231 containerd[1569]: time="2025-06-21T06:11:39.745110215Z" level=info msg="StartContainer for \"2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c\"" Jun 21 06:11:39.747047 containerd[1569]: time="2025-06-21T06:11:39.746990830Z" level=info msg="connecting to shim 2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c" address="unix:///run/containerd/s/9122f3187a56db9311041da8c17b6d15b1ed56ca6907749ff08e58e73be10326" protocol=ttrpc version=3 Jun 21 06:11:39.772085 systemd[1]: Started cri-containerd-2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c.scope - libcontainer container 2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c. Jun 21 06:11:39.828352 containerd[1569]: time="2025-06-21T06:11:39.828287369Z" level=info msg="StartContainer for \"2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c\" returns successfully" Jun 21 06:11:39.840098 systemd[1]: cri-containerd-2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c.scope: Deactivated successfully. Jun 21 06:11:39.843756 containerd[1569]: time="2025-06-21T06:11:39.843547599Z" level=info msg="received exit event container_id:\"2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c\" id:\"2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c\" pid:4532 exited_at:{seconds:1750486299 nanos:842330379}" Jun 21 06:11:39.844131 containerd[1569]: time="2025-06-21T06:11:39.844093935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c\" id:\"2c2276af1428b686fcd1ccedabe03d6f62fc7d48266ae944d4db52a76976126c\" pid:4532 exited_at:{seconds:1750486299 nanos:842330379}" Jun 21 06:11:39.859720 sshd[4505]: Connection closed by 147.75.109.163 port 57184 Jun 21 06:11:39.860507 sshd-session[4468]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:39.869735 systemd-logind[1496]: Session 26 logged out. Waiting for processes to exit. Jun 21 06:11:39.870676 systemd[1]: sshd@25-10.128.0.41:22-147.75.109.163:57184.service: Deactivated successfully. Jun 21 06:11:39.875119 systemd[1]: session-26.scope: Deactivated successfully. Jun 21 06:11:39.879212 systemd-logind[1496]: Removed session 26. Jun 21 06:11:39.912363 systemd[1]: Started sshd@26-10.128.0.41:22-147.75.109.163:57186.service - OpenSSH per-connection server daemon (147.75.109.163:57186). 
Jun 21 06:11:39.962593 containerd[1569]: time="2025-06-21T06:11:39.961346106Z" level=info msg="CreateContainer within sandbox \"f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 06:11:39.971630 containerd[1569]: time="2025-06-21T06:11:39.971582010Z" level=info msg="Container 6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:39.979701 containerd[1569]: time="2025-06-21T06:11:39.979643996Z" level=info msg="CreateContainer within sandbox \"f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa\"" Jun 21 06:11:39.980215 containerd[1569]: time="2025-06-21T06:11:39.980172436Z" level=info msg="StartContainer for \"6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa\"" Jun 21 06:11:39.983131 containerd[1569]: time="2025-06-21T06:11:39.982999746Z" level=info msg="connecting to shim 6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa" address="unix:///run/containerd/s/9122f3187a56db9311041da8c17b6d15b1ed56ca6907749ff08e58e73be10326" protocol=ttrpc version=3 Jun 21 06:11:40.009055 systemd[1]: Started cri-containerd-6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa.scope - libcontainer container 6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa. Jun 21 06:11:40.059638 containerd[1569]: time="2025-06-21T06:11:40.059564461Z" level=info msg="StartContainer for \"6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa\" returns successfully" Jun 21 06:11:40.065149 systemd[1]: cri-containerd-6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa.scope: Deactivated successfully. Jun 21 06:11:40.069318 containerd[1569]: time="2025-06-21T06:11:40.069279873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa\" id:\"6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa\" pid:4587 exited_at:{seconds:1750486300 nanos:68817865}" Jun 21 06:11:40.069439 containerd[1569]: time="2025-06-21T06:11:40.069379012Z" level=info msg="received exit event container_id:\"6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa\" id:\"6609807fecd523fda5e6ce916e9b19f8e6c454f49bc3b05de4042f638109dcaa\" pid:4587 exited_at:{seconds:1750486300 nanos:68817865}" Jun 21 06:11:40.214864 sshd[4573]: Accepted publickey for core from 147.75.109.163 port 57186 ssh2: RSA SHA256:IX3am7/5iJVCCxdrs4U05scpqqY8+SPSNft19+80V70 Jun 21 06:11:40.216790 sshd-session[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:11:40.223662 systemd-logind[1496]: New session 27 of user core. Jun 21 06:11:40.228089 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 21 06:11:40.967298 containerd[1569]: time="2025-06-21T06:11:40.967084936Z" level=info msg="CreateContainer within sandbox \"f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 06:11:40.991607 containerd[1569]: time="2025-06-21T06:11:40.990921510Z" level=info msg="Container 9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:41.005620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003515270.mount: Deactivated successfully. Jun 21 06:11:41.013275 containerd[1569]: time="2025-06-21T06:11:41.013209105Z" level=info msg="CreateContainer within sandbox \"f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1\"" Jun 21 06:11:41.014880 containerd[1569]: time="2025-06-21T06:11:41.014278303Z" level=info msg="StartContainer for \"9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1\"" Jun 21 06:11:41.016434 containerd[1569]: time="2025-06-21T06:11:41.016379892Z" level=info msg="connecting to shim 9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1" address="unix:///run/containerd/s/9122f3187a56db9311041da8c17b6d15b1ed56ca6907749ff08e58e73be10326" protocol=ttrpc version=3 Jun 21 06:11:41.046063 systemd[1]: Started cri-containerd-9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1.scope - libcontainer container 9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1. Jun 21 06:11:41.110572 containerd[1569]: time="2025-06-21T06:11:41.110501918Z" level=info msg="StartContainer for \"9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1\" returns successfully" Jun 21 06:11:41.111202 systemd[1]: cri-containerd-9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1.scope: Deactivated successfully. Jun 21 06:11:41.113007 containerd[1569]: time="2025-06-21T06:11:41.112464944Z" level=info msg="received exit event container_id:\"9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1\" id:\"9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1\" pid:4639 exited_at:{seconds:1750486301 nanos:112161767}" Jun 21 06:11:41.113007 containerd[1569]: time="2025-06-21T06:11:41.112758082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1\" id:\"9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1\" pid:4639 exited_at:{seconds:1750486301 nanos:112161767}" Jun 21 06:11:41.147336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c54c1d7672d3bf591460265cdd2c4d0202ddd9d5346efccdfc03c4e6228cdf1-rootfs.mount: Deactivated successfully. 
Jun 21 06:11:41.207685 kubelet[2727]: I0621 06:11:41.206405 2727 setters.go:600] "Node became not ready" node="ci-4372-0-0-b914a48f1103a74b46ca.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-21T06:11:41Z","lastTransitionTime":"2025-06-21T06:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 21 06:11:41.973947 containerd[1569]: time="2025-06-21T06:11:41.973812102Z" level=info msg="CreateContainer within sandbox \"f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 06:11:41.989864 containerd[1569]: time="2025-06-21T06:11:41.988437307Z" level=info msg="Container a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:42.003618 containerd[1569]: time="2025-06-21T06:11:42.003546843Z" level=info msg="CreateContainer within sandbox \"f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f\"" Jun 21 06:11:42.004862 containerd[1569]: time="2025-06-21T06:11:42.004464819Z" level=info msg="StartContainer for \"a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f\"" Jun 21 06:11:42.006037 containerd[1569]: time="2025-06-21T06:11:42.005986032Z" level=info msg="connecting to shim a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f" address="unix:///run/containerd/s/9122f3187a56db9311041da8c17b6d15b1ed56ca6907749ff08e58e73be10326" protocol=ttrpc version=3 Jun 21 06:11:42.051900 systemd[1]: Started cri-containerd-a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f.scope - libcontainer container a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f. Jun 21 06:11:42.109615 systemd[1]: cri-containerd-a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f.scope: Deactivated successfully. Jun 21 06:11:42.111004 containerd[1569]: time="2025-06-21T06:11:42.110742969Z" level=info msg="received exit event container_id:\"a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f\" id:\"a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f\" pid:4679 exited_at:{seconds:1750486302 nanos:109691484}" Jun 21 06:11:42.112272 containerd[1569]: time="2025-06-21T06:11:42.112210749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f\" id:\"a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f\" pid:4679 exited_at:{seconds:1750486302 nanos:109691484}" Jun 21 06:11:42.122724 containerd[1569]: time="2025-06-21T06:11:42.122687078Z" level=info msg="StartContainer for \"a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f\" returns successfully" Jun 21 06:11:42.146537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5b90ba2a294a546a5a0592824990024522a75dfc19f4a83ec8dbd1a0fe4502f-rootfs.mount: Deactivated successfully. 
Jun 21 06:11:42.983164 containerd[1569]: time="2025-06-21T06:11:42.983095516Z" level=info msg="CreateContainer within sandbox \"f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 06:11:43.000857 containerd[1569]: time="2025-06-21T06:11:42.999129496Z" level=info msg="Container 57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:43.016822 containerd[1569]: time="2025-06-21T06:11:43.016750335Z" level=info msg="CreateContainer within sandbox \"f7b1356be63b46b93993098e818ab6ddef43174066e2ebb7e4039e0e825ab19c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10\"" Jun 21 06:11:43.017861 containerd[1569]: time="2025-06-21T06:11:43.017406176Z" level=info msg="StartContainer for \"57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10\"" Jun 21 06:11:43.019484 containerd[1569]: time="2025-06-21T06:11:43.019444287Z" level=info msg="connecting to shim 57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10" address="unix:///run/containerd/s/9122f3187a56db9311041da8c17b6d15b1ed56ca6907749ff08e58e73be10326" protocol=ttrpc version=3 Jun 21 06:11:43.053068 systemd[1]: Started cri-containerd-57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10.scope - libcontainer container 57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10. Jun 21 06:11:43.117223 containerd[1569]: time="2025-06-21T06:11:43.117150455Z" level=info msg="StartContainer for \"57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10\" returns successfully" Jun 21 06:11:43.212994 containerd[1569]: time="2025-06-21T06:11:43.212937731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10\" id:\"d614b11646110608af8d0ec13e2431cc16d656317db03b352adc1f58d1bd67a4\" pid:4747 exited_at:{seconds:1750486303 nanos:212283203}" Jun 21 06:11:43.627908 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jun 21 06:11:44.011346 kubelet[2727]: I0621 06:11:44.011260 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9ss2w" podStartSLOduration=5.011237594 podStartE2EDuration="5.011237594s" podCreationTimestamp="2025-06-21 06:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:11:44.010805549 +0000 UTC m=+125.751614401" watchObservedRunningTime="2025-06-21 06:11:44.011237594 +0000 UTC m=+125.752046421" Jun 21 06:11:44.627463 containerd[1569]: time="2025-06-21T06:11:44.627392825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10\" id:\"2a805b22e4d62d35c9863e779671fabc22a0aef3890cc1678ca2d8596ec36cf6\" pid:4821 exit_status:1 exited_at:{seconds:1750486304 nanos:626944108}" Jun 21 06:11:46.883823 containerd[1569]: time="2025-06-21T06:11:46.883762142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10\" id:\"135895f3bbe8461cf52bb640ad8a73b41fbb98968c9f94308dec4ee40689fe1e\" pid:5227 exit_status:1 exited_at:{seconds:1750486306 nanos:882475759}" Jun 21 06:11:46.905384 systemd-networkd[1454]: lxc_health: Link UP Jun 21 06:11:46.914948 systemd-networkd[1454]: 
lxc_health: Gained carrier Jun 21 06:11:48.563058 systemd-networkd[1454]: lxc_health: Gained IPv6LL Jun 21 06:11:49.130161 containerd[1569]: time="2025-06-21T06:11:49.130095057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10\" id:\"b2c26d8680befc87054c9b32f5a5671e565e8614e538466fbadbb2dccec1cfcd\" pid:5300 exited_at:{seconds:1750486309 nanos:129369569}" Jun 21 06:11:51.042228 ntpd[1490]: Listen normally on 14 lxc_health [fe80::20a9:78ff:fe78:6924%14]:123 Jun 21 06:11:51.042927 ntpd[1490]: 21 Jun 06:11:51 ntpd[1490]: Listen normally on 14 lxc_health [fe80::20a9:78ff:fe78:6924%14]:123 Jun 21 06:11:51.357498 containerd[1569]: time="2025-06-21T06:11:51.357254410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10\" id:\"c24157bcd12060d1dd336e80307b89df2441bf1555882c0f007823f5b5e148e0\" pid:5328 exited_at:{seconds:1750486311 nanos:356111478}" Jun 21 06:11:53.497805 containerd[1569]: time="2025-06-21T06:11:53.497708922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57304db76f5ff5b676dc61fe50532e60efca8dc416e6bbdc160489e7675b3a10\" id:\"81ed0b7fa3795600f23abc5dd0dfc1f8a0c58b3cc56a2e27cb984849ab658ee0\" pid:5357 exited_at:{seconds:1750486313 nanos:496951950}" Jun 21 06:11:53.604954 sshd[4619]: Connection closed by 147.75.109.163 port 57186 Jun 21 06:11:53.605942 sshd-session[4573]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:53.612145 systemd[1]: sshd@26-10.128.0.41:22-147.75.109.163:57186.service: Deactivated successfully. Jun 21 06:11:53.615349 systemd[1]: session-27.scope: Deactivated successfully. Jun 21 06:11:53.616786 systemd-logind[1496]: Session 27 logged out. Waiting for processes to exit. Jun 21 06:11:53.619672 systemd-logind[1496]: Removed session 27.