Jun 20 19:13:14.958752 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025 Jun 20 19:13:14.958780 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:13:14.958792 kernel: BIOS-provided physical RAM map: Jun 20 19:13:14.958799 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 20 19:13:14.958805 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 20 19:13:14.958812 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jun 20 19:13:14.958820 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jun 20 19:13:14.958829 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jun 20 19:13:14.958836 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jun 20 19:13:14.958843 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jun 20 19:13:14.958850 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 20 19:13:14.958856 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 20 19:13:14.958863 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 20 19:13:14.958870 kernel: printk: legacy bootconsole [earlyser0] enabled Jun 20 19:13:14.958881 kernel: NX (Execute Disable) protection: active Jun 20 19:13:14.958888 kernel: APIC: Static calls initialized Jun 20 19:13:14.958895 kernel: efi: EFI v2.7 by Microsoft Jun 20 19:13:14.958903 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5718 RNG=0x3ffd2018 Jun 20 19:13:14.958910 kernel: random: crng init done Jun 20 19:13:14.958918 kernel: secureboot: Secure boot disabled Jun 20 19:13:14.958926 kernel: SMBIOS 3.1.0 present. 
Jun 20 19:13:14.958933 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Jun 20 19:13:14.958941 kernel: DMI: Memory slots populated: 2/2 Jun 20 19:13:14.958949 kernel: Hypervisor detected: Microsoft Hyper-V Jun 20 19:13:14.958957 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jun 20 19:13:14.958965 kernel: Hyper-V: Nested features: 0x3e0101 Jun 20 19:13:14.958972 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 20 19:13:14.958979 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 20 19:13:14.958987 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 20 19:13:14.958994 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 20 19:13:14.959002 kernel: tsc: Detected 2300.000 MHz processor Jun 20 19:13:14.959010 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 19:13:14.959018 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 19:13:14.959028 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jun 20 19:13:14.959036 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 20 19:13:14.959043 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 19:13:14.959051 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jun 20 19:13:14.959059 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jun 20 19:13:14.959067 kernel: Using GB pages for direct mapping Jun 20 19:13:14.959074 kernel: ACPI: Early table checksum verification disabled Jun 20 19:13:14.959086 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 20 19:13:14.959095 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:14.959103 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:14.959111 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jun 20 19:13:14.959119 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 20 19:13:14.959128 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:14.959136 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:14.959145 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:14.959153 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jun 20 19:13:14.959161 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jun 20 19:13:14.959170 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 19:13:14.959178 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 20 19:13:14.959186 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Jun 20 19:13:14.959194 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 20 19:13:14.959202 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 20 19:13:14.959210 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 20 19:13:14.959220 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 20 19:13:14.959228 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5051] Jun 20 19:13:14.959236 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jun 20 19:13:14.959244 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jun 20 19:13:14.959252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jun 20 19:13:14.959260 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jun 20 19:13:14.959268 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jun 20 19:13:14.959276 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jun 20 19:13:14.959284 kernel: Zone ranges: Jun 20 19:13:14.959294 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 19:13:14.959302 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 20 19:13:14.959309 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 20 19:13:14.959317 kernel: Device empty Jun 20 19:13:14.959325 kernel: Movable zone start for each node Jun 20 19:13:14.959333 kernel: Early memory node ranges Jun 20 19:13:14.959341 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 20 19:13:14.959369 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jun 20 19:13:14.959376 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jun 20 19:13:14.959385 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 20 19:13:14.959391 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 20 19:13:14.959398 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 20 19:13:14.959404 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 19:13:14.959412 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 20 19:13:14.959420 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jun 20 19:13:14.959427 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jun 20 19:13:14.959434 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 20 19:13:14.959441 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 20 19:13:14.959449 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 20 19:13:14.959455 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 19:13:14.959462 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 20 19:13:14.959470 kernel: TSC deadline timer available Jun 20 19:13:14.959476 kernel: CPU topo: Max. logical packages: 1 Jun 20 19:13:14.959483 kernel: CPU topo: Max. logical dies: 1 Jun 20 19:13:14.959489 kernel: CPU topo: Max. dies per package: 1 Jun 20 19:13:14.959496 kernel: CPU topo: Max. threads per core: 2 Jun 20 19:13:14.959504 kernel: CPU topo: Num. cores per package: 1 Jun 20 19:13:14.959512 kernel: CPU topo: Num. 
threads per package: 2 Jun 20 19:13:14.959519 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 20 19:13:14.959527 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 20 19:13:14.959535 kernel: Booting paravirtualized kernel on Hyper-V Jun 20 19:13:14.959542 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 19:13:14.959550 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 20 19:13:14.959558 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 20 19:13:14.959565 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 20 19:13:14.959573 kernel: pcpu-alloc: [0] 0 1 Jun 20 19:13:14.959581 kernel: Hyper-V: PV spinlocks enabled Jun 20 19:13:14.959588 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 20 19:13:14.959597 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:13:14.959605 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:13:14.959613 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 20 19:13:14.959621 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 19:13:14.959628 kernel: Fallback order for Node 0: 0 Jun 20 19:13:14.959636 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jun 20 19:13:14.959645 kernel: Policy zone: Normal Jun 20 19:13:14.959654 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:13:14.959661 kernel: software IO TLB: area num 2. Jun 20 19:13:14.959668 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 19:13:14.959676 kernel: ftrace: allocating 40093 entries in 157 pages Jun 20 19:13:14.959683 kernel: ftrace: allocated 157 pages with 5 groups Jun 20 19:13:14.959691 kernel: Dynamic Preempt: voluntary Jun 20 19:13:14.959699 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:13:14.959708 kernel: rcu: RCU event tracing is enabled. Jun 20 19:13:14.959722 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 19:13:14.959731 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:13:14.959738 kernel: Rude variant of Tasks RCU enabled. Jun 20 19:13:14.959748 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:13:14.959755 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:13:14.959763 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 19:13:14.959772 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:13:14.959780 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:13:14.959789 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 20 19:13:14.959797 kernel: Using NULL legacy PIC Jun 20 19:13:14.959807 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 20 19:13:14.959815 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 19:13:14.959823 kernel: Console: colour dummy device 80x25 Jun 20 19:13:14.959831 kernel: printk: legacy console [tty1] enabled Jun 20 19:13:14.959839 kernel: printk: legacy console [ttyS0] enabled Jun 20 19:13:14.959847 kernel: printk: legacy bootconsole [earlyser0] disabled Jun 20 19:13:14.959855 kernel: ACPI: Core revision 20240827 Jun 20 19:13:14.959864 kernel: Failed to register legacy timer interrupt Jun 20 19:13:14.959873 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 19:13:14.959881 kernel: x2apic enabled Jun 20 19:13:14.959889 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 19:13:14.959897 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Jun 20 19:13:14.959905 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 20 19:13:14.959913 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jun 20 19:13:14.959921 kernel: Hyper-V: Using IPI hypercalls Jun 20 19:13:14.959929 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jun 20 19:13:14.959939 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jun 20 19:13:14.959948 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jun 20 19:13:14.959956 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jun 20 19:13:14.959965 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jun 20 19:13:14.959972 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jun 20 19:13:14.959981 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jun 20 19:13:14.959989 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000) Jun 20 19:13:14.959997 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 20 19:13:14.960007 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 20 19:13:14.960015 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 20 19:13:14.960023 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 19:13:14.960031 kernel: Spectre V2 : Mitigation: Retpolines Jun 20 19:13:14.960039 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 20 19:13:14.960047 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jun 20 19:13:14.960055 kernel: RETBleed: Vulnerable Jun 20 19:13:14.960063 kernel: Speculative Store Bypass: Vulnerable Jun 20 19:13:14.960070 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 20 19:13:14.960078 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 19:13:14.960086 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 19:13:14.960095 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 19:13:14.960103 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 20 19:13:14.960111 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 20 19:13:14.960119 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 20 19:13:14.960127 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jun 20 19:13:14.960134 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jun 20 19:13:14.960142 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jun 20 19:13:14.960150 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 19:13:14.960158 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 20 19:13:14.960166 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 20 19:13:14.960174 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 20 19:13:14.960183 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jun 20 19:13:14.960191 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jun 20 19:13:14.960199 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jun 20 19:13:14.960207 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jun 20 19:13:14.960215 kernel: Freeing SMP alternatives memory: 32K Jun 20 19:13:14.960223 kernel: pid_max: default: 32768 minimum: 301 Jun 20 19:13:14.960231 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 19:13:14.960238 kernel: landlock: Up and running. Jun 20 19:13:14.960247 kernel: SELinux: Initializing. Jun 20 19:13:14.960255 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 19:13:14.960263 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 19:13:14.960271 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jun 20 19:13:14.960280 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jun 20 19:13:14.960288 kernel: signal: max sigframe size: 11952 Jun 20 19:13:14.960296 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:13:14.960304 kernel: rcu: Max phase no-delay instances is 400. Jun 20 19:13:14.960312 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 20 19:13:14.960321 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 20 19:13:14.960329 kernel: smp: Bringing up secondary CPUs ... Jun 20 19:13:14.960337 kernel: smpboot: x86: Booting SMP configuration: Jun 20 19:13:14.960357 kernel: .... 
node #0, CPUs: #1 Jun 20 19:13:14.960367 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 19:13:14.960375 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jun 20 19:13:14.960384 kernel: Memory: 8077028K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 299984K reserved, 0K cma-reserved) Jun 20 19:13:14.960392 kernel: devtmpfs: initialized Jun 20 19:13:14.960399 kernel: x86/mm: Memory block size: 128MB Jun 20 19:13:14.960408 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 20 19:13:14.960416 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:13:14.960424 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 19:13:14.960433 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:13:14.960443 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:13:14.960451 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:13:14.960459 kernel: audit: type=2000 audit(1750446791.028:1): state=initialized audit_enabled=0 res=1 Jun 20 19:13:14.960466 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:13:14.960474 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 19:13:14.960482 kernel: cpuidle: using governor menu Jun 20 19:13:14.960491 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 19:13:14.960499 kernel: dca service started, version 1.12.1 Jun 20 19:13:14.960507 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jun 20 19:13:14.960517 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jun 20 19:13:14.960525 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 20 19:13:14.960533 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 19:13:14.960541 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 19:13:14.960549 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:13:14.960557 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:13:14.960566 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:13:14.960574 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:13:14.960583 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:13:14.960592 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 19:13:14.960600 kernel: ACPI: Interpreter enabled Jun 20 19:13:14.960608 kernel: ACPI: PM: (supports S0 S5) Jun 20 19:13:14.960615 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 19:13:14.960624 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 19:13:14.960632 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 20 19:13:14.960640 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 20 19:13:14.960648 kernel: iommu: Default domain type: Translated Jun 20 19:13:14.960656 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 19:13:14.960666 kernel: efivars: Registered efivars operations Jun 20 19:13:14.960674 kernel: PCI: Using ACPI for IRQ routing Jun 20 19:13:14.960682 kernel: PCI: System does not support PCI Jun 20 19:13:14.960690 kernel: vgaarb: loaded Jun 20 19:13:14.960698 kernel: clocksource: Switched to clocksource tsc-early Jun 20 19:13:14.960705 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:13:14.960714 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:13:14.960722 kernel: pnp: PnP ACPI init Jun 20 19:13:14.960730 kernel: pnp: PnP ACPI: found 3 devices Jun 20 19:13:14.960740 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 19:13:14.960749 kernel: NET: Registered PF_INET protocol family Jun 20 19:13:14.960757 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 19:13:14.960765 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 20 19:13:14.960773 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:13:14.960781 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 19:13:14.960790 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 20 19:13:14.960798 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 20 19:13:14.960808 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 20 19:13:14.960816 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 20 19:13:14.960825 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:13:14.960833 kernel: NET: Registered PF_XDP protocol family Jun 20 19:13:14.960840 kernel: PCI: CLS 0 bytes, default 64 Jun 20 19:13:14.960848 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 20 19:13:14.960856 kernel: software IO TLB: mapped [mem 0x000000003a9c3000-0x000000003e9c3000] (64MB) Jun 20 19:13:14.960864 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jun 20 19:13:14.960873 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jun 20 19:13:14.960883 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, 
max_idle_ns: 440795277976 ns Jun 20 19:13:14.960891 kernel: clocksource: Switched to clocksource tsc Jun 20 19:13:14.960899 kernel: Initialise system trusted keyrings Jun 20 19:13:14.960907 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 20 19:13:14.960916 kernel: Key type asymmetric registered Jun 20 19:13:14.960923 kernel: Asymmetric key parser 'x509' registered Jun 20 19:13:14.960931 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 20 19:13:14.960939 kernel: io scheduler mq-deadline registered Jun 20 19:13:14.960948 kernel: io scheduler kyber registered Jun 20 19:13:14.960957 kernel: io scheduler bfq registered Jun 20 19:13:14.960966 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 19:13:14.960974 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:13:14.960983 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:13:14.960991 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 20 19:13:14.960999 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:13:14.961007 kernel: i8042: PNP: No PS/2 controller found. Jun 20 19:13:14.961131 kernel: rtc_cmos 00:02: registered as rtc0 Jun 20 19:13:14.961205 kernel: rtc_cmos 00:02: setting system clock to 2025-06-20T19:13:14 UTC (1750446794) Jun 20 19:13:14.961270 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 20 19:13:14.961280 kernel: intel_pstate: Intel P-state driver initializing Jun 20 19:13:14.961288 kernel: efifb: probing for efifb Jun 20 19:13:14.961297 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 20 19:13:14.961305 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 20 19:13:14.961313 kernel: efifb: scrolling: redraw Jun 20 19:13:14.961321 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 20 19:13:14.961329 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 19:13:14.961338 kernel: fb0: EFI VGA frame buffer device Jun 20 19:13:14.961356 kernel: pstore: Using crash dump compression: deflate Jun 20 19:13:14.961365 kernel: pstore: Registered efi_pstore as persistent store backend Jun 20 19:13:14.961373 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:13:14.961381 kernel: Segment Routing with IPv6 Jun 20 19:13:14.961389 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:13:14.961397 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:13:14.961405 kernel: Key type dns_resolver registered Jun 20 19:13:14.961413 kernel: IPI shorthand broadcast: enabled Jun 20 19:13:14.961423 kernel: sched_clock: Marking stable (2855003834, 99677885)->(3250183362, -295501643) Jun 20 19:13:14.961431 kernel: registered taskstats version 1 Jun 20 19:13:14.961439 kernel: Loading compiled-in X.509 certificates Jun 20 19:13:14.961448 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94' Jun 20 19:13:14.961456 kernel: Demotion targets for Node 0: null Jun 20 19:13:14.961464 kernel: Key type .fscrypt registered Jun 20 19:13:14.961481 kernel: Key type fscrypt-provisioning registered Jun 20 19:13:14.961490 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 20 19:13:14.961500 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:13:14.961513 kernel: ima: No architecture policies found Jun 20 19:13:14.961523 kernel: clk: Disabling unused clocks Jun 20 19:13:14.961533 kernel: Warning: unable to open an initial console. Jun 20 19:13:14.961542 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 20 19:13:14.961550 kernel: Write protecting the kernel read-only data: 24576k Jun 20 19:13:14.961558 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 20 19:13:14.961566 kernel: Run /init as init process Jun 20 19:13:14.961575 kernel: with arguments: Jun 20 19:13:14.961583 kernel: /init Jun 20 19:13:14.961597 kernel: with environment: Jun 20 19:13:14.961605 kernel: HOME=/ Jun 20 19:13:14.961615 kernel: TERM=linux Jun 20 19:13:14.961624 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:13:14.961638 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:13:14.961652 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:13:14.961663 systemd[1]: Detected virtualization microsoft. Jun 20 19:13:14.961675 systemd[1]: Detected architecture x86-64. Jun 20 19:13:14.961684 systemd[1]: Running in initrd. Jun 20 19:13:14.961693 systemd[1]: No hostname configured, using default hostname. Jun 20 19:13:14.961704 systemd[1]: Hostname set to . Jun 20 19:13:14.961714 systemd[1]: Initializing machine ID from random generator. Jun 20 19:13:14.961723 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:13:14.961732 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:13:14.961742 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:13:14.961756 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:13:14.961765 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:13:14.961776 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:13:14.961786 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:13:14.961798 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:13:14.961808 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:13:14.961816 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:13:14.961830 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:13:14.961838 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:13:14.961847 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:13:14.961857 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:13:14.961866 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:13:14.961875 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:13:14.961884 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jun 20 19:13:14.961892 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:13:14.961902 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:13:14.961915 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:13:14.961923 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:13:14.961931 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:13:14.961941 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:13:14.961951 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:13:14.961960 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:13:14.961968 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:13:14.961976 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 19:13:14.961985 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:13:14.961994 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:13:14.962002 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:13:14.962032 systemd-journald[205]: Collecting audit messages is disabled. Jun 20 19:13:14.962055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:13:14.962065 systemd-journald[205]: Journal started Jun 20 19:13:14.962085 systemd-journald[205]: Runtime Journal (/run/log/journal/5354ce3bf9a24c52a645fcb51f9bfb15) is 8M, max 158.9M, 150.9M free. Jun 20 19:13:14.967363 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:13:14.969634 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:13:14.975299 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:13:14.981561 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:13:14.984077 systemd-modules-load[207]: Inserted module 'overlay' Jun 20 19:13:14.988467 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:13:14.995452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:13:15.007824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:13:15.014455 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:13:15.025391 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 19:13:15.025615 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 19:13:15.028778 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:13:15.029749 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:13:15.034928 kernel: Bridge firewalling registered Jun 20 19:13:15.034979 systemd-modules-load[207]: Inserted module 'br_netfilter' Jun 20 19:13:15.036700 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:13:15.039452 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 20 19:13:15.044111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:13:15.053785 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:13:15.054527 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:13:15.056896 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:13:15.065590 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:13:15.070854 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:13:15.085700 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:13:15.110699 systemd-resolved[236]: Positive Trust Anchors: Jun 20 19:13:15.112489 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:13:15.112526 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:13:15.129732 systemd-resolved[236]: Defaulting to hostname 'linux'. Jun 20 19:13:15.132513 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:13:15.136888 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:13:15.154363 kernel: SCSI subsystem initialized Jun 20 19:13:15.161361 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:13:15.170361 kernel: iscsi: registered transport (tcp) Jun 20 19:13:15.186477 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:13:15.186516 kernel: QLogic iSCSI HBA Driver Jun 20 19:13:15.197680 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:13:15.206436 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:13:15.212282 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:13:15.239471 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:13:15.242915 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 20 19:13:15.282367 kernel: raid6: avx512x4 gen() 46987 MB/s Jun 20 19:13:15.299355 kernel: raid6: avx512x2 gen() 46224 MB/s Jun 20 19:13:15.316359 kernel: raid6: avx512x1 gen() 28683 MB/s Jun 20 19:13:15.334361 kernel: raid6: avx2x4 gen() 41217 MB/s Jun 20 19:13:15.351354 kernel: raid6: avx2x2 gen() 45336 MB/s Jun 20 19:13:15.368626 kernel: raid6: avx2x1 gen() 32187 MB/s Jun 20 19:13:15.368693 kernel: raid6: using algorithm avx512x4 gen() 46987 MB/s Jun 20 19:13:15.387769 kernel: raid6: .... xor() 7700 MB/s, rmw enabled Jun 20 19:13:15.387791 kernel: raid6: using avx512x2 recovery algorithm Jun 20 19:13:15.404358 kernel: xor: automatically using best checksumming function avx Jun 20 19:13:15.511364 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:13:15.515022 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:13:15.520297 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:13:15.535243 systemd-udevd[453]: Using default interface naming scheme 'v255'. Jun 20 19:13:15.538917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:13:15.546028 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:13:15.557631 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Jun 20 19:13:15.573762 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:13:15.575480 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:13:15.609840 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:13:15.616583 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:13:15.651372 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:13:15.666490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:13:15.666602 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:13:15.671284 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:13:15.677520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:13:15.686434 kernel: hv_vmbus: Vmbus version:5.3 Jun 20 19:13:15.688379 kernel: AES CTR mode by8 optimization enabled Jun 20 19:13:15.696314 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 20 19:13:15.696368 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 20 19:13:15.710885 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:13:15.711025 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:13:15.717616 kernel: PTP clock support registered Jun 20 19:13:15.722899 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 20 19:13:15.722517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 19:13:15.500194 kernel: hv_utils: Registering HyperV Utility Driver Jun 20 19:13:15.508240 kernel: hv_vmbus: registering driver hv_utils Jun 20 19:13:15.508259 kernel: hv_utils: Heartbeat IC version 3.0 Jun 20 19:13:15.508269 kernel: hv_utils: Shutdown IC version 3.2 Jun 20 19:13:15.508279 kernel: hv_utils: TimeSync IC version 4.0 Jun 20 19:13:15.508288 kernel: hv_vmbus: registering driver hv_pci Jun 20 19:13:15.508299 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 20 19:13:15.508312 systemd-journald[205]: Time jumped backwards, rotating. Jun 20 19:13:15.508408 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jun 20 19:13:15.491291 systemd-resolved[236]: Clock change detected. Flushing caches. Jun 20 19:13:15.516784 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jun 20 19:13:15.521463 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jun 20 19:13:15.524493 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 19:13:15.528483 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jun 20 19:13:15.533447 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jun 20 19:13:15.542364 kernel: hv_vmbus: registering driver hv_storvsc Jun 20 19:13:15.548803 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jun 20 19:13:15.548846 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 19:13:15.554108 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 19:13:15.554280 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jun 20 19:13:15.554918 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 20 19:13:15.559834 kernel: scsi host0: storvsc_host_t Jun 20 19:13:15.561372 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jun 20 19:13:15.561408 kernel: hv_vmbus: registering driver hv_netvsc Jun 20 19:13:15.572380 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd21500 (unnamed net_device) (uninitialized): VF slot 1 added Jun 20 19:13:15.578380 kernel: hv_vmbus: registering driver hid_hyperv Jun 20 19:13:15.594427 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 20 19:13:15.594459 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 20 19:13:15.602798 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 20 19:13:15.602961 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:13:15.604365 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 20 19:13:15.610437 kernel: nvme nvme0: pci function c05b:00:00.0 Jun 20 19:13:15.610563 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jun 20 19:13:15.620370 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#171 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:13:15.633468 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#110 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:13:15.860396 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 20 19:13:15.865369 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:13:16.146386 kernel: nvme nvme0: using unchecked data buffer Jun 20 19:13:16.330665 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jun 20 19:13:16.386448 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 20 19:13:16.397608 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jun 20 19:13:16.416594 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 20 19:13:16.418372 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 20 19:13:16.418594 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:13:16.446906 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:13:16.419382 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:13:16.419460 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:13:16.419486 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:13:16.420469 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:13:16.424831 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:13:16.470058 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jun 20 19:13:16.591375 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jun 20 19:13:16.598494 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jun 20 19:13:16.598656 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jun 20 19:13:16.600794 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 19:13:16.604361 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jun 20 19:13:16.610404 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jun 20 19:13:16.614377 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jun 20 19:13:16.616487 kernel: pci 7870:00:00.0: enabling Extended Tags Jun 20 19:13:16.629794 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 19:13:16.629955 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jun 20 19:13:16.633445 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jun 20 19:13:16.637245 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jun 20 19:13:16.646366 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jun 20 19:13:16.648953 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd21500 eth0: VF registering: eth1 Jun 20 19:13:16.649106 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jun 20 19:13:16.653376 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jun 20 19:13:17.453208 disk-uuid[675]: The operation has completed successfully. Jun 20 19:13:17.455236 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 19:13:17.507252 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:13:17.507358 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:13:17.536034 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:13:17.546412 sh[718]: Success Jun 20 19:13:17.577876 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:13:17.577915 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:13:17.577950 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:13:17.587373 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 20 19:13:17.827678 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:13:17.831797 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:13:17.846257 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 19:13:17.860580 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:13:17.860615 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (731) Jun 20 19:13:17.862417 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966 Jun 20 19:13:17.863850 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:13:17.864635 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:13:18.182408 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:13:18.183071 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Jun 20 19:13:18.191439 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:13:18.194068 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:13:18.203439 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:13:18.223370 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (764) Jun 20 19:13:18.229981 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:13:18.230019 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:13:18.230032 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:13:18.263129 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:13:18.267434 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:13:18.279468 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:13:18.279744 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:13:18.284444 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:13:18.297553 systemd-networkd[894]: lo: Link UP Jun 20 19:13:18.297561 systemd-networkd[894]: lo: Gained carrier Jun 20 19:13:18.301583 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 20 19:13:18.298807 systemd-networkd[894]: Enumeration completed Jun 20 19:13:18.305514 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 20 19:13:18.305888 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd21500 eth0: Data path switched to VF: enP30832s1 Jun 20 19:13:18.298872 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:13:18.299132 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:13:18.299135 systemd-networkd[894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:13:18.301238 systemd[1]: Reached target network.target - Network. Jun 20 19:13:18.307250 systemd-networkd[894]: enP30832s1: Link UP Jun 20 19:13:18.307304 systemd-networkd[894]: eth0: Link UP Jun 20 19:13:18.307395 systemd-networkd[894]: eth0: Gained carrier Jun 20 19:13:18.307404 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:13:18.310488 systemd-networkd[894]: enP30832s1: Gained carrier Jun 20 19:13:18.318383 systemd-networkd[894]: eth0: DHCPv4 address 10.200.4.43/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:13:19.116214 ignition[901]: Ignition 2.21.0 Jun 20 19:13:19.116226 ignition[901]: Stage: fetch-offline Jun 20 19:13:19.117600 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:13:19.116298 ignition[901]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:19.116304 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:19.124453 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 20 19:13:19.116390 ignition[901]: parsed url from cmdline: "" Jun 20 19:13:19.116393 ignition[901]: no config URL provided Jun 20 19:13:19.116397 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:13:19.116401 ignition[901]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:13:19.116405 ignition[901]: failed to fetch config: resource requires networking Jun 20 19:13:19.116608 ignition[901]: Ignition finished successfully Jun 20 19:13:19.146920 ignition[911]: Ignition 2.21.0 Jun 20 19:13:19.146929 ignition[911]: Stage: fetch Jun 20 19:13:19.147084 ignition[911]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:19.147090 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:19.147153 ignition[911]: parsed url from cmdline: "" Jun 20 19:13:19.147155 ignition[911]: no config URL provided Jun 20 19:13:19.147159 ignition[911]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:13:19.147164 ignition[911]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:13:19.147191 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 20 19:13:19.240163 ignition[911]: GET result: OK Jun 20 19:13:19.240238 ignition[911]: config has been read from IMDS userdata Jun 20 19:13:19.240267 ignition[911]: parsing config with SHA512: 0d5b60533fcd06489bd8dde59fc366cf75bdcb326c8301d05da31499347e7938d9182e1decd322e5ba0dd83811face13bb7e30f65e9f864608b79b3002045865 Jun 20 19:13:19.247253 unknown[911]: fetched base config from "system" Jun 20 19:13:19.247262 unknown[911]: fetched base config from "system" Jun 20 19:13:19.247267 unknown[911]: fetched user config from "azure" Jun 20 19:13:19.248635 ignition[911]: fetch: fetch complete Jun 20 19:13:19.250501 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 19:13:19.248640 ignition[911]: fetch: fetch passed Jun 20 19:13:19.256482 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:13:19.248675 ignition[911]: Ignition finished successfully Jun 20 19:13:19.273085 ignition[917]: Ignition 2.21.0 Jun 20 19:13:19.273094 ignition[917]: Stage: kargs Jun 20 19:13:19.273275 ignition[917]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:19.275717 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:13:19.273282 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:19.274027 ignition[917]: kargs: kargs passed Jun 20 19:13:19.274064 ignition[917]: Ignition finished successfully Jun 20 19:13:19.286178 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 19:13:19.302508 ignition[923]: Ignition 2.21.0 Jun 20 19:13:19.302516 ignition[923]: Stage: disks Jun 20 19:13:19.304402 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:13:19.302681 ignition[923]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:19.308180 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:13:19.302687 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:19.312647 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:13:19.303475 ignition[923]: disks: disks passed Jun 20 19:13:19.315108 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:13:19.303514 ignition[923]: Ignition finished successfully Jun 20 19:13:19.321848 systemd[1]: Reached target sysinit.target - System Initialization. 
Jun 20 19:13:19.327482 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:13:19.333451 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:13:19.410578 systemd-fsck[932]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jun 20 19:13:19.414466 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:13:19.420412 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:13:19.465562 systemd-networkd[894]: enP30832s1: Gained IPv6LL Jun 20 19:13:19.676371 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none. Jun 20 19:13:19.676877 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:13:19.678066 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:13:19.696906 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:13:19.702033 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:13:19.706974 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 19:13:19.710887 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:13:19.710922 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:13:19.719813 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:13:19.721502 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (941) Jun 20 19:13:19.722668 systemd-networkd[894]: eth0: Gained IPv6LL Jun 20 19:13:19.728752 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:13:19.728787 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:13:19.728798 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:13:19.728489 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:13:19.734325 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:13:20.234954 coreos-metadata[943]: Jun 20 19:13:20.234 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 19:13:20.238231 coreos-metadata[943]: Jun 20 19:13:20.238 INFO Fetch successful Jun 20 19:13:20.241404 coreos-metadata[943]: Jun 20 19:13:20.239 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 20 19:13:20.253686 coreos-metadata[943]: Jun 20 19:13:20.253 INFO Fetch successful Jun 20 19:13:20.269111 coreos-metadata[943]: Jun 20 19:13:20.269 INFO wrote hostname ci-4344.1.0-a-dde0712b19 to /sysroot/etc/hostname Jun 20 19:13:20.272285 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:13:20.342876 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:13:20.443816 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:13:20.464213 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:13:20.467827 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:13:21.381045 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:13:21.383745 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jun 20 19:13:21.388286 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:13:21.399016 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:13:21.402406 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:13:21.423391 ignition[1063]: INFO : Ignition 2.21.0 Jun 20 19:13:21.423391 ignition[1063]: INFO : Stage: mount Jun 20 19:13:21.423391 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:21.423391 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:21.435439 ignition[1063]: INFO : mount: mount passed Jun 20 19:13:21.435439 ignition[1063]: INFO : Ignition finished successfully Jun 20 19:13:21.425471 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:13:21.426456 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:13:21.427927 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:13:21.441595 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:13:21.463363 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (1076) Jun 20 19:13:21.465412 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:13:21.465562 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:13:21.466700 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 20 19:13:21.471130 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:13:21.494315 ignition[1093]: INFO : Ignition 2.21.0 Jun 20 19:13:21.494315 ignition[1093]: INFO : Stage: files Jun 20 19:13:21.499393 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:21.499393 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:21.499393 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:13:21.512946 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:13:21.512946 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:13:21.561832 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:13:21.564211 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:13:21.564211 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:13:21.564211 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 20 19:13:21.564211 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jun 20 19:13:21.562148 unknown[1093]: wrote ssh authorized keys file for user: core Jun 20 19:13:21.593284 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:13:21.701140 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 20 19:13:21.704050 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:13:21.704050 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] 
writing file "/sysroot/home/core/install.sh" Jun 20 19:13:21.704050 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:13:21.704050 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:13:21.704050 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:13:21.704050 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:13:21.704050 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:13:21.704050 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:13:21.730417 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:13:21.730417 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:13:21.730417 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:13:21.730417 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:13:21.730417 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:13:21.730417 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jun 20 19:13:22.604537 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 20 19:13:25.607134 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 19:13:25.607134 ignition[1093]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 20 19:13:25.638335 ignition[1093]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:13:25.645373 ignition[1093]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:13:25.645373 ignition[1093]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 20 19:13:25.652527 ignition[1093]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:13:25.652527 ignition[1093]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:13:25.652527 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:13:25.652527 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:13:25.652527 ignition[1093]: INFO : files: 
files passed Jun 20 19:13:25.652527 ignition[1093]: INFO : Ignition finished successfully Jun 20 19:13:25.649672 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 19:13:25.655192 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:13:25.665092 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 19:13:25.673713 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:13:25.673790 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 19:13:25.699136 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:13:25.704649 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:13:25.701793 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:13:25.709590 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:13:25.702119 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:13:25.705539 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 19:13:25.740530 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 19:13:25.740610 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 19:13:25.743796 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 19:13:25.757072 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 19:13:25.761400 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:13:25.761911 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:13:25.781442 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:13:25.786161 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 19:13:25.801701 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:13:25.801946 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:13:25.810804 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 19:13:25.812971 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:13:25.813081 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:13:25.818896 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:13:25.821479 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:13:25.824164 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:13:25.828500 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:13:25.831162 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:13:25.835510 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:13:25.838675 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 19:13:25.843490 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:13:25.844902 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jun 20 19:13:25.845194 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:13:25.851511 systemd[1]: Stopped target swap.target - Swaps. Jun 20 19:13:25.854535 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:13:25.854669 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:13:25.859370 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:13:25.860869 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:13:25.869122 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:13:25.869811 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:13:25.869883 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 19:13:25.869984 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 19:13:25.870778 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 19:13:25.870870 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:13:25.871143 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 19:13:25.871223 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 19:13:25.871609 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 19:13:25.871705 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:13:25.873449 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 19:13:25.873592 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 19:13:25.873708 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:13:25.876265 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 19:13:25.883285 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 19:13:25.885158 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:13:25.905410 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 19:13:25.905549 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:13:25.919437 ignition[1147]: INFO : Ignition 2.21.0 Jun 20 19:13:25.919437 ignition[1147]: INFO : Stage: umount Jun 20 19:13:25.919437 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:13:25.919437 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 19:13:25.919437 ignition[1147]: INFO : umount: umount passed Jun 20 19:13:25.919437 ignition[1147]: INFO : Ignition finished successfully Jun 20 19:13:25.921155 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 19:13:25.921221 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 19:13:25.921821 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 19:13:25.921877 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 19:13:25.923688 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 19:13:25.923732 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 19:13:25.923863 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 19:13:25.923888 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 19:13:25.924343 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jun 20 19:13:25.924377 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 19:13:25.924607 systemd[1]: Stopped target network.target - Network. Jun 20 19:13:25.924633 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 19:13:25.924658 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:13:25.925116 systemd[1]: Stopped target paths.target - Path Units. Jun 20 19:13:25.925135 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 19:13:25.928380 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:13:25.959638 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 19:13:25.959770 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 19:13:25.959970 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 19:13:25.960004 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:13:25.960244 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 19:13:25.960268 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:13:25.960531 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 19:13:25.960570 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 19:13:25.960815 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 19:13:25.960846 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 19:13:25.961195 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 19:13:25.961831 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 19:13:25.971855 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 19:13:25.971937 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 19:13:25.978134 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 19:13:25.978339 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 19:13:25.978449 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 19:13:25.995435 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 19:13:25.996036 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 20 19:13:26.001439 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 19:13:26.001493 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:13:26.009444 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 19:13:26.011522 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 19:13:26.011578 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:13:26.011668 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:13:26.011702 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:13:26.013204 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 19:13:26.013245 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 19:13:26.013695 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 19:13:26.013730 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:13:26.014972 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 20 19:13:26.016337 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 19:13:26.016444 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:13:26.016480 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:13:26.016910 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 19:13:26.016984 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 19:13:26.035140 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 19:13:26.047591 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:13:26.052126 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 19:13:26.052192 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 19:13:26.056011 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 19:13:26.056045 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:13:26.061674 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 19:13:26.061723 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:13:26.066317 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 19:13:26.066366 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 19:13:26.081257 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd21500 eth0: Data path switched from VF: enP30832s1 Jun 20 19:13:26.081445 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 20 19:13:26.067835 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:13:26.067866 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:13:26.073865 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 19:13:26.073924 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 19:13:26.078483 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 19:13:26.082415 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 20 19:13:26.082460 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:13:26.094114 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 19:13:26.094165 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:13:26.098156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:13:26.098201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:13:26.104805 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 20 19:13:26.104856 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 19:13:26.104890 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:13:26.105215 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 19:13:26.105323 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 19:13:26.109677 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 19:13:26.109747 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jun 20 19:13:26.114823 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 19:13:26.121464 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 19:13:26.140482 systemd[1]: Switching root. Jun 20 19:13:26.212868 systemd-journald[205]: Journal stopped Jun 20 19:13:30.057668 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). Jun 20 19:13:30.057698 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 19:13:30.057709 kernel: SELinux: policy capability open_perms=1 Jun 20 19:13:30.057717 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 19:13:30.057724 kernel: SELinux: policy capability always_check_network=0 Jun 20 19:13:30.057732 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 19:13:30.057742 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 19:13:30.057749 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 19:13:30.057757 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 19:13:30.057765 kernel: SELinux: policy capability userspace_initial_context=0 Jun 20 19:13:30.057773 kernel: audit: type=1403 audit(1750446807.234:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 19:13:30.057782 systemd[1]: Successfully loaded SELinux policy in 137.925ms. Jun 20 19:13:30.057792 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.413ms. Jun 20 19:13:30.057804 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:13:30.057814 systemd[1]: Detected virtualization microsoft. Jun 20 19:13:30.057822 systemd[1]: Detected architecture x86-64. Jun 20 19:13:30.057831 systemd[1]: Detected first boot. Jun 20 19:13:30.057840 systemd[1]: Hostname set to <ci-4344.1.0-a-dde0712b19>. Jun 20 19:13:30.057851 systemd[1]: Initializing machine ID from random generator. Jun 20 19:13:30.057860 zram_generator::config[1190]: No configuration found. Jun 20 19:13:30.057870 kernel: Guest personality initialized and is inactive Jun 20 19:13:30.057879 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Jun 20 19:13:30.057888 kernel: Initialized host personality Jun 20 19:13:30.057896 kernel: NET: Registered PF_VSOCK protocol family Jun 20 19:13:30.057905 systemd[1]: Populated /etc with preset unit settings. Jun 20 19:13:30.057916 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 19:13:30.057926 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 19:13:30.057935 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 19:13:30.057944 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 19:13:30.057953 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 19:13:30.057963 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 19:13:30.057972 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 19:13:30.057983 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 19:13:30.057992 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
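
On this first boot systemd initializes the machine ID from the random generator; the resulting ID is a 128-bit value rendered as 32 lowercase hex characters, the same shape as the journal directory name that appears later in the log (8ea559907e084a87929674adb1825348). As a format illustration only, and not systemd's actual implementation, an ID of that shape can be produced like this:

    # Format illustration only: a random 128-bit ID rendered like /etc/machine-id
    # (32 lowercase hex characters).
    import secrets

    machine_id = secrets.token_hex(16)  # 16 random bytes -> 32 hex characters
    print(machine_id)
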
Jun 20 19:13:30.058000 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 19:13:30.058010 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 19:13:30.058019 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 19:13:30.058028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:13:30.058037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:13:30.058046 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 19:13:30.058058 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 19:13:30.058070 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 19:13:30.058080 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:13:30.058090 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 19:13:30.058099 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:13:30.058108 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:13:30.058118 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 19:13:30.058127 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 19:13:30.058138 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 19:13:30.058148 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 19:13:30.058159 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:13:30.058168 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:13:30.058177 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:13:30.058186 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:13:30.058195 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 19:13:30.058205 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 19:13:30.058216 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 19:13:30.058225 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:13:30.058234 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:13:30.058243 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:13:30.058253 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 19:13:30.058263 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 19:13:30.058273 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 19:13:30.058282 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 19:13:30.058291 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:13:30.058300 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 19:13:30.058310 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:13:30.058319 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jun 20 19:13:30.058328 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 19:13:30.058339 systemd[1]: Reached target machines.target - Containers. Jun 20 19:13:30.058824 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:13:30.059249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:13:30.059555 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:13:30.059568 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:13:30.059576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:13:30.059585 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:13:30.059594 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:13:30.059603 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 19:13:30.059615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:13:30.059624 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:13:30.059633 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:13:30.059642 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:13:30.059651 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:13:30.059661 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 19:13:30.059671 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:13:30.059682 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:13:30.059694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:13:30.059703 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:13:30.059712 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:13:30.059993 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:13:30.060006 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:13:30.060015 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:13:30.060023 systemd[1]: Stopped verity-setup.service. Jun 20 19:13:30.060033 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:13:30.060044 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:13:30.060053 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 19:13:30.060062 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 19:13:30.060071 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:13:30.060081 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:13:30.060090 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jun 20 19:13:30.060099 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:13:30.060109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:13:30.060118 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:13:30.060129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:13:30.060138 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:13:30.060147 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 19:13:30.060157 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:13:30.060166 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:13:30.060175 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:13:30.060185 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:13:30.060194 kernel: fuse: init (API version 7.41) Jun 20 19:13:30.060228 systemd-journald[1276]: Collecting audit messages is disabled. Jun 20 19:13:30.060253 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:13:30.060262 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 19:13:30.060272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:13:30.060283 systemd-journald[1276]: Journal started Jun 20 19:13:30.060304 systemd-journald[1276]: Runtime Journal (/run/log/journal/8ea559907e084a87929674adb1825348) is 8M, max 158.9M, 150.9M free. Jun 20 19:13:30.072299 kernel: loop: module loaded Jun 20 19:13:30.072345 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 19:13:30.072376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:13:29.595178 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:13:29.605789 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 20 19:13:29.606065 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:13:30.082513 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 19:13:30.092460 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:13:30.092500 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:13:30.099108 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:13:30.101895 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:13:30.102041 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:13:30.105029 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:13:30.105171 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 19:13:30.107957 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:13:30.108107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:13:30.110901 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jun 20 19:13:30.113418 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 19:13:30.126817 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:13:30.147901 kernel: loop0: detected capacity change from 0 to 28496 Jun 20 19:13:30.140726 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:13:30.144042 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:13:30.146700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:13:30.147455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:13:30.150463 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:13:30.153749 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:13:30.156883 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 19:13:30.161416 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:13:30.164555 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:13:30.167503 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 19:13:30.170997 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:13:30.207678 systemd-journald[1276]: Time spent on flushing to /var/log/journal/8ea559907e084a87929674adb1825348 is 206.140ms for 987 entries. Jun 20 19:13:30.207678 systemd-journald[1276]: System Journal (/var/log/journal/8ea559907e084a87929674adb1825348) is 11.8M, max 2.6G, 2.6G free. Jun 20 19:13:32.100147 systemd-journald[1276]: Received client request to flush runtime journal. Jun 20 19:13:32.100217 systemd-journald[1276]: /var/log/journal/8ea559907e084a87929674adb1825348/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jun 20 19:13:32.100242 systemd-journald[1276]: Rotating system journal. Jun 20 19:13:32.100269 kernel: ACPI: bus type drm_connector registered Jun 20 19:13:30.439850 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:13:30.439969 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:13:30.454502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:13:31.616695 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:13:31.622467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:13:32.102045 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:13:32.149145 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:13:32.149692 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:13:32.587573 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:13:32.508293 systemd-tmpfiles[1346]: ACLs are not supported, ignoring. Jun 20 19:13:32.508309 systemd-tmpfiles[1346]: ACLs are not supported, ignoring. Jun 20 19:13:32.511983 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:13:32.610373 kernel: loop1: detected capacity change from 0 to 146240 Jun 20 19:13:33.478519 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jun 20 19:13:33.482644 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:13:33.505799 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jun 20 19:13:34.076836 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:13:34.082624 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:13:34.157476 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 19:13:34.200407 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 19:13:34.409369 kernel: loop2: detected capacity change from 0 to 113872 Jun 20 19:13:34.502656 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:13:34.567376 kernel: hv_vmbus: registering driver hyperv_fb Jun 20 19:13:34.573363 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 20 19:13:34.573405 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 20 19:13:34.576378 kernel: Console: switching to colour dummy device 80x25 Jun 20 19:13:34.577210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:13:34.581369 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 19:13:34.588263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:13:34.588463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:13:34.593545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:13:34.614368 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:13:34.681536 kernel: hv_vmbus: registering driver hv_balloon Jun 20 19:13:34.682921 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:13:34.682947 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 20 19:13:34.976375 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jun 20 19:13:35.100620 systemd-networkd[1360]: lo: Link UP Jun 20 19:13:35.100627 systemd-networkd[1360]: lo: Gained carrier Jun 20 19:13:35.101974 systemd-networkd[1360]: Enumeration completed Jun 20 19:13:35.102062 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:13:35.104471 systemd-networkd[1360]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:13:35.104524 systemd-networkd[1360]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:13:35.104923 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:13:35.108430 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jun 20 19:13:35.109052 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jun 20 19:13:35.115378 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jun 20 19:13:35.118380 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd21500 eth0: Data path switched to VF: enP30832s1 Jun 20 19:13:35.118699 systemd-networkd[1360]: enP30832s1: Link UP Jun 20 19:13:35.118758 systemd-networkd[1360]: eth0: Link UP Jun 20 19:13:35.118760 systemd-networkd[1360]: eth0: Gained carrier Jun 20 19:13:35.118775 systemd-networkd[1360]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:13:35.124583 systemd-networkd[1360]: enP30832s1: Gained carrier Jun 20 19:13:35.131378 systemd-networkd[1360]: eth0: DHCPv4 address 10.200.4.43/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:13:35.253323 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:13:35.402183 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 20 19:13:35.404133 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:13:35.502989 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:13:35.819144 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:13:35.894369 kernel: loop3: detected capacity change from 0 to 229808 Jun 20 19:13:35.942370 kernel: loop4: detected capacity change from 0 to 28496 Jun 20 19:13:35.961367 kernel: loop5: detected capacity change from 0 to 146240 Jun 20 19:13:35.982364 kernel: loop6: detected capacity change from 0 to 113872 Jun 20 19:13:36.041376 kernel: loop7: detected capacity change from 0 to 229808 Jun 20 19:13:36.054388 (sd-merge)[1455]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 20 19:13:36.054734 (sd-merge)[1455]: Merged extensions into '/usr'. Jun 20 19:13:36.057620 systemd[1]: Reload requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:13:36.057631 systemd[1]: Reloading... Jun 20 19:13:36.108412 zram_generator::config[1481]: No configuration found. Jun 20 19:13:36.191034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:13:36.270449 systemd[1]: Reloading finished in 212 ms. Jun 20 19:13:36.296755 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:13:36.297473 systemd-networkd[1360]: enP30832s1: Gained IPv6LL Jun 20 19:13:36.311098 systemd[1]: Starting ensure-sysext.service... Jun 20 19:13:36.317197 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:13:36.327960 systemd[1]: Reload requested from client PID 1542 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:13:36.327977 systemd[1]: Reloading... Jun 20 19:13:36.332508 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 19:13:36.332735 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 19:13:36.332926 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
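
The DHCPv4 lease recorded above is 10.200.4.43/24 with gateway 10.200.4.1, acquired from 168.63.129.16 (Azure's well-known platform address). A quick check with Python's standard ipaddress module unpacks what that prefix implies for the interface:

    # Unpack the DHCPv4 lease from the log: 10.200.4.43/24, gateway 10.200.4.1.
    import ipaddress

    iface = ipaddress.ip_interface("10.200.4.43/24")
    print(iface.network)                    # 10.200.4.0/24
    print(iface.network.netmask)            # 255.255.255.0
    print(iface.network.broadcast_address)  # 10.200.4.255
    print(ipaddress.ip_address("10.200.4.1") in iface.network)  # True: gateway is on-link
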
Jun 20 19:13:36.333112 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:13:36.333829 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:13:36.334077 systemd-tmpfiles[1543]: ACLs are not supported, ignoring. Jun 20 19:13:36.334163 systemd-tmpfiles[1543]: ACLs are not supported, ignoring. Jun 20 19:13:36.353026 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:13:36.353118 systemd-tmpfiles[1543]: Skipping /boot Jun 20 19:13:36.365528 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:13:36.365606 systemd-tmpfiles[1543]: Skipping /boot Jun 20 19:13:36.391368 zram_generator::config[1571]: No configuration found. Jun 20 19:13:36.470058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:13:36.552113 systemd[1]: Reloading finished in 223 ms. Jun 20 19:13:36.553461 systemd-networkd[1360]: eth0: Gained IPv6LL Jun 20 19:13:36.559799 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:13:36.570138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:13:36.579940 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:13:36.580898 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:13:36.594177 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:13:36.596553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:13:36.597504 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:13:36.606599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:13:36.610530 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:13:36.613584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:13:36.613707 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:13:36.616252 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:13:36.621835 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:13:36.625574 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:13:36.628492 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:13:36.630907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:13:36.631118 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:13:36.635687 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 20 19:13:36.635882 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:13:36.639718 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:13:36.639878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:13:36.654980 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:13:36.663280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:13:36.663560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:13:36.666624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:13:36.672343 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:13:36.679083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:13:36.682430 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:13:36.685160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:13:36.685278 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:13:36.685441 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:13:36.687905 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:13:36.690731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:13:36.691637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:13:36.694531 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:13:36.694663 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:13:36.700649 systemd[1]: Finished ensure-sysext.service. Jun 20 19:13:36.703175 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:13:36.706951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:13:36.707595 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:13:36.710114 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:13:36.710274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:13:36.715986 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:13:36.716052 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:13:36.762047 systemd-resolved[1642]: Positive Trust Anchors: Jun 20 19:13:36.762058 systemd-resolved[1642]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:13:36.762093 systemd-resolved[1642]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:13:36.765740 systemd-resolved[1642]: Using system hostname 'ci-4344.1.0-a-dde0712b19'. Jun 20 19:13:36.767192 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:13:36.769449 systemd[1]: Reached target network.target - Network. Jun 20 19:13:36.771430 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:13:36.774405 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:13:37.405704 augenrules[1678]: No rules Jun 20 19:13:37.406652 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:13:37.406856 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:13:38.216075 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:13:38.219613 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:13:43.207141 ldconfig[1306]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:13:43.217140 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:13:43.221497 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:13:43.238825 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:13:43.240457 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:13:43.243502 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:13:43.246419 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:13:43.247930 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 20 19:13:43.249714 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:13:43.252438 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:13:43.255401 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:13:43.258421 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:13:43.258453 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:13:43.261394 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:13:43.280282 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:13:43.282487 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:13:43.287257 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
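
Among the entries above, systemd-resolved logs the DNSSEC positive trust anchor for the root zone (". IN DS 20326 8 2 e06d…"). A DS record's fields are key tag, algorithm, digest type, and digest; the snippet below simply splits the logged record into those fields, with the standard IANA meanings noted in comments.

    # Parse the root-zone DS trust anchor shown in the systemd-resolved log above.
    # DS record fields (RFC 4034): key tag, algorithm, digest type, digest.
    ds = ("20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    key_tag, algorithm, digest_type, digest = ds.split(maxsplit=3)
    print("key tag    :", key_tag)      # 20326 (the 2017 root key-signing key)
    print("algorithm  :", algorithm)    # 8 = RSA/SHA-256
    print("digest type:", digest_type)  # 2 = SHA-256
    print("digest     :", digest)
    print("digest size:", len(digest) // 2, "bytes")  # 32 bytes, as expected for SHA-256
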
Jun 20 19:13:43.290526 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:13:43.293416 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:13:43.302770 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:13:43.304613 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:13:43.307834 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:13:43.311016 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:13:43.313414 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:13:43.316441 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:13:43.316467 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:13:43.318643 systemd[1]: Starting chronyd.service - NTP client/server... Jun 20 19:13:43.322193 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:13:43.327985 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 19:13:43.332469 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:13:43.337209 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:13:43.341521 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:13:43.347102 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:13:43.349582 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:13:43.351696 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 20 19:13:43.353785 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jun 20 19:13:43.357498 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jun 20 19:13:43.358262 jq[1695]: false Jun 20 19:13:43.359645 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jun 20 19:13:43.361476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:13:43.364575 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:13:43.368755 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:13:43.377341 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:13:43.380339 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:13:43.388844 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:13:43.399598 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:13:43.402300 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:13:43.403719 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:13:43.404258 systemd[1]: Starting update-engine.service - Update Engine... 
Jun 20 19:13:43.405043 (chronyd)[1690]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 20 19:13:43.409044 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:13:43.415438 KVP[1698]: KVP starting; pid is:1698 Jun 20 19:13:43.415807 chronyd[1719]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 20 19:13:43.418181 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:13:43.419007 kernel: hv_utils: KVP IC version 4.0 Jun 20 19:13:43.418287 KVP[1698]: KVP LIC Version: 3.1 Jun 20 19:13:43.420003 chronyd[1719]: Timezone right/UTC failed leap second check, ignoring Jun 20 19:13:43.420302 chronyd[1719]: Loaded seccomp filter (level 2) Jun 20 19:13:43.420899 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:13:43.422569 systemd[1]: Started chronyd.service - NTP client/server. Jun 20 19:13:43.430016 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:13:43.430403 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:13:43.435775 jq[1715]: true Jun 20 19:13:43.451332 jq[1726]: true Jun 20 19:13:43.467067 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:13:43.512263 extend-filesystems[1696]: Found /dev/nvme0n1p6 Jun 20 19:13:43.613406 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:13:43.615568 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:13:43.617791 (ntainerd)[1756]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:13:43.656315 extend-filesystems[1696]: Found /dev/nvme0n1p9 Jun 20 19:13:43.658323 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Refreshing passwd entry cache Jun 20 19:13:43.658153 oslogin_cache_refresh[1697]: Refreshing passwd entry cache Jun 20 19:13:43.661591 extend-filesystems[1696]: Checking size of /dev/nvme0n1p9 Jun 20 19:13:43.666823 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:13:43.695080 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Failure getting users, quitting Jun 20 19:13:43.695080 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:13:43.695080 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Refreshing group entry cache Jun 20 19:13:43.694992 oslogin_cache_refresh[1697]: Failure getting users, quitting Jun 20 19:13:43.695007 oslogin_cache_refresh[1697]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:13:43.695040 oslogin_cache_refresh[1697]: Refreshing group entry cache Jun 20 19:13:43.706914 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Failure getting groups, quitting Jun 20 19:13:43.706914 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:13:43.706229 oslogin_cache_refresh[1697]: Failure getting groups, quitting Jun 20 19:13:43.706238 oslogin_cache_refresh[1697]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:13:43.707083 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Jun 20 19:13:43.707265 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 20 19:13:43.719397 tar[1721]: linux-amd64/LICENSE Jun 20 19:13:43.719397 tar[1721]: linux-amd64/helm Jun 20 19:13:43.842612 systemd-logind[1712]: New seat seat0. Jun 20 19:13:44.140871 systemd-logind[1712]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:13:44.141214 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:13:44.169040 update_engine[1713]: I20250620 19:13:44.168641 1713 main.cc:92] Flatcar Update Engine starting Jun 20 19:13:44.189983 extend-filesystems[1696]: Old size kept for /dev/nvme0n1p9 Jun 20 19:13:44.191162 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:13:44.191401 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:13:44.248787 dbus-daemon[1693]: [system] SELinux support is enabled Jun 20 19:13:44.249139 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:13:44.255594 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:13:44.255616 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:13:44.258664 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:13:44.258688 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:13:44.263938 dbus-daemon[1693]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 20 19:13:44.264951 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:13:44.267815 update_engine[1713]: I20250620 19:13:44.267779 1713 update_check_scheduler.cc:74] Next update check in 6m51s Jun 20 19:13:44.269413 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:13:44.416320 sshd_keygen[1728]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:13:44.439043 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:13:44.444133 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:13:44.448010 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 20 19:13:44.474187 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:13:44.474376 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:13:44.480292 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:13:44.496482 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 20 19:13:44.504054 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:13:44.512591 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:13:44.524515 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:13:44.527455 systemd[1]: Reached target getty.target - Login Prompts. 
Jun 20 19:13:44.617063 coreos-metadata[1692]: Jun 20 19:13:44.616 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 19:13:44.674015 coreos-metadata[1692]: Jun 20 19:13:44.673 INFO Fetch successful Jun 20 19:13:44.674328 coreos-metadata[1692]: Jun 20 19:13:44.674 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 20 19:13:44.677718 coreos-metadata[1692]: Jun 20 19:13:44.677 INFO Fetch successful Jun 20 19:13:44.678092 coreos-metadata[1692]: Jun 20 19:13:44.678 INFO Fetching http://168.63.129.16/machine/f8b52719-46b5-4e3d-98b0-02da1a2fdfcb/fceabdc9%2Dd0b6%2D4fc5%2D961c%2Db28085b777a0.%5Fci%2D4344.1.0%2Da%2Ddde0712b19?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 20 19:13:44.680205 coreos-metadata[1692]: Jun 20 19:13:44.680 INFO Fetch successful Jun 20 19:13:44.680404 coreos-metadata[1692]: Jun 20 19:13:44.680 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 20 19:13:44.692101 coreos-metadata[1692]: Jun 20 19:13:44.692 INFO Fetch successful Jun 20 19:13:44.722409 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:13:44.723921 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:13:44.926702 tar[1721]: linux-amd64/README.md Jun 20 19:13:44.937971 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:13:45.411017 locksmithd[1792]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:13:45.797412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:13:45.804664 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:13:45.848190 bash[1749]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:13:45.848598 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:13:45.850898 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 19:13:46.430311 kubelet[1839]: E0620 19:13:46.430249 1839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:13:46.431932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:13:46.432050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:13:46.432410 systemd[1]: kubelet.service: Consumed 881ms CPU time, 267.8M memory peak. 
Jun 20 19:13:46.890027 containerd[1756]: time="2025-06-20T19:13:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 19:13:46.890817 containerd[1756]: time="2025-06-20T19:13:46.890784240Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 19:13:46.897211 containerd[1756]: time="2025-06-20T19:13:46.897176192Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.831µs" Jun 20 19:13:46.897211 containerd[1756]: time="2025-06-20T19:13:46.897200986Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 19:13:46.897314 containerd[1756]: time="2025-06-20T19:13:46.897218477Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 19:13:46.897391 containerd[1756]: time="2025-06-20T19:13:46.897347659Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 19:13:46.897417 containerd[1756]: time="2025-06-20T19:13:46.897389608Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 19:13:46.897417 containerd[1756]: time="2025-06-20T19:13:46.897410137Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:13:46.897478 containerd[1756]: time="2025-06-20T19:13:46.897455019Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:13:46.897478 containerd[1756]: time="2025-06-20T19:13:46.897471665Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:13:46.897657 containerd[1756]: time="2025-06-20T19:13:46.897641037Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:13:46.897657 containerd[1756]: time="2025-06-20T19:13:46.897652224Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:13:46.897701 containerd[1756]: time="2025-06-20T19:13:46.897661717Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:13:46.897701 containerd[1756]: time="2025-06-20T19:13:46.897668392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 19:13:46.897742 containerd[1756]: time="2025-06-20T19:13:46.897727577Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 19:13:46.897906 containerd[1756]: time="2025-06-20T19:13:46.897890817Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:13:46.897932 containerd[1756]: time="2025-06-20T19:13:46.897912831Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jun 20 19:13:46.897932 containerd[1756]: time="2025-06-20T19:13:46.897922919Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 19:13:46.897975 containerd[1756]: time="2025-06-20T19:13:46.897948304Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 19:13:46.898161 containerd[1756]: time="2025-06-20T19:13:46.898141350Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 19:13:46.898192 containerd[1756]: time="2025-06-20T19:13:46.898182197Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:13:47.245106 containerd[1756]: time="2025-06-20T19:13:47.245011244Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 19:13:47.245106 containerd[1756]: time="2025-06-20T19:13:47.245072269Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 19:13:47.245106 containerd[1756]: time="2025-06-20T19:13:47.245086058Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 19:13:47.245106 containerd[1756]: time="2025-06-20T19:13:47.245106229Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 19:13:47.245257 containerd[1756]: time="2025-06-20T19:13:47.245118466Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 19:13:47.245257 containerd[1756]: time="2025-06-20T19:13:47.245128485Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 19:13:47.245257 containerd[1756]: time="2025-06-20T19:13:47.245140284Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 19:13:47.245257 containerd[1756]: time="2025-06-20T19:13:47.245150907Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 19:13:47.245257 containerd[1756]: time="2025-06-20T19:13:47.245161500Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 19:13:47.245257 containerd[1756]: time="2025-06-20T19:13:47.245174510Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 19:13:47.245257 containerd[1756]: time="2025-06-20T19:13:47.245183925Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245416741Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245603617Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245630686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245691798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245709828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245726927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245740879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245752281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245765503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245781324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245796927Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245808376Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245884214Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245902415Z" level=info msg="Start snapshots syncer" Jun 20 19:13:47.246380 containerd[1756]: time="2025-06-20T19:13:47.245922751Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 19:13:47.246719 containerd[1756]: time="2025-06-20T19:13:47.246239705Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 19:13:47.246719 containerd[1756]: time="2025-06-20T19:13:47.246302163Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 19:13:47.246836 containerd[1756]: time="2025-06-20T19:13:47.246751190Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 19:13:47.246884 containerd[1756]: time="2025-06-20T19:13:47.246866589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 19:13:47.246911 containerd[1756]: time="2025-06-20T19:13:47.246901459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 19:13:47.246932 containerd[1756]: time="2025-06-20T19:13:47.246921216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 19:13:47.246960 containerd[1756]: time="2025-06-20T19:13:47.246935133Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 19:13:47.246960 containerd[1756]: time="2025-06-20T19:13:47.246950899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 19:13:47.246995 containerd[1756]: time="2025-06-20T19:13:47.246965940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 19:13:47.246995 containerd[1756]: time="2025-06-20T19:13:47.246980388Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 19:13:47.247037 containerd[1756]: time="2025-06-20T19:13:47.247010519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 19:13:47.247037 containerd[1756]: 
time="2025-06-20T19:13:47.247022391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 19:13:47.247073 containerd[1756]: time="2025-06-20T19:13:47.247036911Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 19:13:47.247093 containerd[1756]: time="2025-06-20T19:13:47.247072357Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:13:47.247114 containerd[1756]: time="2025-06-20T19:13:47.247089810Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:13:47.247114 containerd[1756]: time="2025-06-20T19:13:47.247099541Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:13:47.247150 containerd[1756]: time="2025-06-20T19:13:47.247112785Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:13:47.247150 containerd[1756]: time="2025-06-20T19:13:47.247123508Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 19:13:47.247150 containerd[1756]: time="2025-06-20T19:13:47.247133523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 19:13:47.247150 containerd[1756]: time="2025-06-20T19:13:47.247147060Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 19:13:47.247224 containerd[1756]: time="2025-06-20T19:13:47.247165560Z" level=info msg="runtime interface created" Jun 20 19:13:47.247224 containerd[1756]: time="2025-06-20T19:13:47.247170561Z" level=info msg="created NRI interface" Jun 20 19:13:47.247224 containerd[1756]: time="2025-06-20T19:13:47.247182049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:13:47.247224 containerd[1756]: time="2025-06-20T19:13:47.247194979Z" level=info msg="Connect containerd service" Jun 20 19:13:47.247285 containerd[1756]: time="2025-06-20T19:13:47.247224718Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:13:47.248112 containerd[1756]: time="2025-06-20T19:13:47.248077262Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:13:48.462001 waagent[1814]: 2025-06-20T19:13:48.461927Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jun 20 19:13:48.463514 waagent[1814]: 2025-06-20T19:13:48.463476Z INFO Daemon Daemon OS: flatcar 4344.1.0 Jun 20 19:13:48.466450 waagent[1814]: 2025-06-20T19:13:48.466408Z INFO Daemon Daemon Python: 3.11.12 Jun 20 19:13:48.467470 waagent[1814]: 2025-06-20T19:13:48.467410Z INFO Daemon Daemon Run daemon Jun 20 19:13:48.468681 waagent[1814]: 2025-06-20T19:13:48.468647Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.0' Jun 20 19:13:48.471315 waagent[1814]: 2025-06-20T19:13:48.471127Z INFO Daemon Daemon Using waagent for provisioning Jun 20 19:13:48.473722 waagent[1814]: 2025-06-20T19:13:48.473613Z INFO Daemon Daemon Activate resource disk Jun 
20 19:13:48.475969 waagent[1814]: 2025-06-20T19:13:48.475548Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 19:13:48.480263 containerd[1756]: time="2025-06-20T19:13:48.480228409Z" level=info msg="Start subscribing containerd event" Jun 20 19:13:48.480527 containerd[1756]: time="2025-06-20T19:13:48.480279618Z" level=info msg="Start recovering state" Jun 20 19:13:48.480527 containerd[1756]: time="2025-06-20T19:13:48.480428505Z" level=info msg="Start event monitor" Jun 20 19:13:48.480527 containerd[1756]: time="2025-06-20T19:13:48.480466385Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:13:48.480527 containerd[1756]: time="2025-06-20T19:13:48.480474903Z" level=info msg="Start streaming server" Jun 20 19:13:48.480527 containerd[1756]: time="2025-06-20T19:13:48.480483416Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:13:48.480527 containerd[1756]: time="2025-06-20T19:13:48.480490561Z" level=info msg="runtime interface starting up..." Jun 20 19:13:48.480527 containerd[1756]: time="2025-06-20T19:13:48.480496919Z" level=info msg="starting plugins..." Jun 20 19:13:48.480527 containerd[1756]: time="2025-06-20T19:13:48.480508365Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:13:48.481050 containerd[1756]: time="2025-06-20T19:13:48.480785284Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:13:48.481050 containerd[1756]: time="2025-06-20T19:13:48.480822460Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:13:48.481050 containerd[1756]: time="2025-06-20T19:13:48.480876323Z" level=info msg="containerd successfully booted in 1.591140s" Jun 20 19:13:48.480975 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:13:48.488056 waagent[1814]: 2025-06-20T19:13:48.487977Z INFO Daemon Daemon Found device: None Jun 20 19:13:48.488702 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:13:48.490703 systemd[1]: Startup finished in 2.990s (kernel) + 12.627s (initrd) + 21.393s (userspace) = 37.011s. Jun 20 19:13:48.493454 waagent[1814]: 2025-06-20T19:13:48.493416Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 19:13:48.495912 waagent[1814]: 2025-06-20T19:13:48.495855Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 19:13:48.500719 waagent[1814]: 2025-06-20T19:13:48.500392Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 19:13:48.502711 waagent[1814]: 2025-06-20T19:13:48.502661Z INFO Daemon Daemon Running default provisioning handler Jun 20 19:13:48.512377 waagent[1814]: 2025-06-20T19:13:48.512313Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jun 20 19:13:48.516705 waagent[1814]: 2025-06-20T19:13:48.516671Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 19:13:48.518790 waagent[1814]: 2025-06-20T19:13:48.518757Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 19:13:48.521420 waagent[1814]: 2025-06-20T19:13:48.521388Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 19:13:48.560571 waagent[1814]: 2025-06-20T19:13:48.560234Z INFO Daemon Daemon Successfully mounted dvd Jun 20 19:13:48.586840 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 19:13:48.587826 waagent[1814]: 2025-06-20T19:13:48.587790Z INFO Daemon Daemon Detect protocol endpoint Jun 20 19:13:48.592191 waagent[1814]: 2025-06-20T19:13:48.587929Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 19:13:48.592191 waagent[1814]: 2025-06-20T19:13:48.588117Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jun 20 19:13:48.592191 waagent[1814]: 2025-06-20T19:13:48.588610Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 19:13:48.592191 waagent[1814]: 2025-06-20T19:13:48.588743Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 19:13:48.592191 waagent[1814]: 2025-06-20T19:13:48.589152Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 19:13:48.602737 waagent[1814]: 2025-06-20T19:13:48.602710Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 19:13:48.603749 waagent[1814]: 2025-06-20T19:13:48.603675Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 19:13:48.603749 waagent[1814]: 2025-06-20T19:13:48.603768Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 19:13:48.665490 login[1816]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:13:48.666871 login[1817]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:13:48.680401 waagent[1814]: 2025-06-20T19:13:48.680336Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 19:13:48.682242 waagent[1814]: 2025-06-20T19:13:48.681772Z INFO Daemon Daemon Forcing an update of the goal state. Jun 20 19:13:48.686679 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:13:48.687482 waagent[1814]: 2025-06-20T19:13:48.687208Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 19:13:48.688051 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:13:48.695951 systemd-logind[1712]: New session 1 of user core. Jun 20 19:13:48.699019 systemd-logind[1712]: New session 2 of user core. Jun 20 19:13:48.707171 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:13:48.709431 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:13:48.711511 waagent[1814]: 2025-06-20T19:13:48.710471Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 19:13:48.711511 waagent[1814]: 2025-06-20T19:13:48.710925Z INFO Daemon Jun 20 19:13:48.711511 waagent[1814]: 2025-06-20T19:13:48.711153Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 970023ce-51e7-4d4a-b61d-997eb05db669 eTag: 7228281702266021426 source: Fabric] Jun 20 19:13:48.711673 waagent[1814]: 2025-06-20T19:13:48.711649Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jun 20 19:13:48.711959 waagent[1814]: 2025-06-20T19:13:48.711941Z INFO Daemon Jun 20 19:13:48.712303 waagent[1814]: 2025-06-20T19:13:48.712267Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 19:13:48.722593 waagent[1814]: 2025-06-20T19:13:48.722531Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 19:13:48.724908 (systemd)[1880]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:13:48.726598 systemd-logind[1712]: New session c1 of user core. Jun 20 19:13:48.805770 waagent[1814]: 2025-06-20T19:13:48.805730Z INFO Daemon Downloaded certificate {'thumbprint': 'F15DB6B8E21752236E57C3B5A2A491D4D1E51DEB', 'hasPrivateKey': True} Jun 20 19:13:48.810368 waagent[1814]: 2025-06-20T19:13:48.809399Z INFO Daemon Fetch goal state completed Jun 20 19:13:48.818732 waagent[1814]: 2025-06-20T19:13:48.818707Z INFO Daemon Daemon Starting provisioning Jun 20 19:13:48.820457 waagent[1814]: 2025-06-20T19:13:48.820380Z INFO Daemon Daemon Handle ovf-env.xml. Jun 20 19:13:48.821763 waagent[1814]: 2025-06-20T19:13:48.821246Z INFO Daemon Daemon Set hostname [ci-4344.1.0-a-dde0712b19] Jun 20 19:13:48.840797 waagent[1814]: 2025-06-20T19:13:48.840763Z INFO Daemon Daemon Publish hostname [ci-4344.1.0-a-dde0712b19] Jun 20 19:13:48.841089 waagent[1814]: 2025-06-20T19:13:48.841060Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 19:13:48.841742 waagent[1814]: 2025-06-20T19:13:48.841332Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 19:13:48.849302 systemd-networkd[1360]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:13:48.849309 systemd-networkd[1360]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:13:48.849330 systemd-networkd[1360]: eth0: DHCP lease lost Jun 20 19:13:48.855394 waagent[1814]: 2025-06-20T19:13:48.850858Z INFO Daemon Daemon Create user account if not exists Jun 20 19:13:48.855394 waagent[1814]: 2025-06-20T19:13:48.851193Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 19:13:48.855394 waagent[1814]: 2025-06-20T19:13:48.851654Z INFO Daemon Daemon Configure sudoer Jun 20 19:13:48.862922 waagent[1814]: 2025-06-20T19:13:48.862659Z INFO Daemon Daemon Configure sshd Jun 20 19:13:48.864421 systemd-networkd[1360]: eth0: DHCPv4 address 10.200.4.43/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jun 20 19:13:48.866374 waagent[1814]: 2025-06-20T19:13:48.865838Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 19:13:48.866374 waagent[1814]: 2025-06-20T19:13:48.865990Z INFO Daemon Daemon Deploy ssh public key. Jun 20 19:13:48.887688 systemd[1880]: Queued start job for default target default.target. Jun 20 19:13:48.894045 systemd[1880]: Created slice app.slice - User Application Slice. Jun 20 19:13:48.894071 systemd[1880]: Reached target paths.target - Paths. Jun 20 19:13:48.894102 systemd[1880]: Reached target timers.target - Timers. Jun 20 19:13:48.894982 systemd[1880]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:13:48.902686 systemd[1880]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:13:48.902807 systemd[1880]: Reached target sockets.target - Sockets. Jun 20 19:13:48.902890 systemd[1880]: Reached target basic.target - Basic System. 
Jun 20 19:13:48.903081 systemd[1880]: Reached target default.target - Main User Target. Jun 20 19:13:48.903103 systemd[1880]: Startup finished in 172ms. Jun 20 19:13:48.903241 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:13:48.904612 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:13:48.905846 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:13:49.998246 waagent[1814]: 2025-06-20T19:13:49.998198Z INFO Daemon Daemon Provisioning complete Jun 20 19:13:50.006862 waagent[1814]: 2025-06-20T19:13:50.006823Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 19:13:50.011739 waagent[1814]: 2025-06-20T19:13:50.007038Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 20 19:13:50.011739 waagent[1814]: 2025-06-20T19:13:50.007376Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jun 20 19:13:50.102904 waagent[1921]: 2025-06-20T19:13:50.102847Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jun 20 19:13:50.103121 waagent[1921]: 2025-06-20T19:13:50.102929Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.0 Jun 20 19:13:50.103121 waagent[1921]: 2025-06-20T19:13:50.102965Z INFO ExtHandler ExtHandler Python: 3.11.12 Jun 20 19:13:50.103121 waagent[1921]: 2025-06-20T19:13:50.102999Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jun 20 19:13:50.141439 waagent[1921]: 2025-06-20T19:13:50.141399Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jun 20 19:13:50.141551 waagent[1921]: 2025-06-20T19:13:50.141529Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:13:50.141597 waagent[1921]: 2025-06-20T19:13:50.141575Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:13:50.147443 waagent[1921]: 2025-06-20T19:13:50.147395Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 19:13:50.159488 waagent[1921]: 2025-06-20T19:13:50.159460Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 19:13:50.159771 waagent[1921]: 2025-06-20T19:13:50.159745Z INFO ExtHandler Jun 20 19:13:50.159807 waagent[1921]: 2025-06-20T19:13:50.159792Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a42f2afe-4e58-4f87-ac26-9f26977f4b08 eTag: 7228281702266021426 source: Fabric] Jun 20 19:13:50.159990 waagent[1921]: 2025-06-20T19:13:50.159969Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 20 19:13:50.160291 waagent[1921]: 2025-06-20T19:13:50.160268Z INFO ExtHandler Jun 20 19:13:50.160324 waagent[1921]: 2025-06-20T19:13:50.160303Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 19:13:50.163977 waagent[1921]: 2025-06-20T19:13:50.163950Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 19:13:50.227382 waagent[1921]: 2025-06-20T19:13:50.227310Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F15DB6B8E21752236E57C3B5A2A491D4D1E51DEB', 'hasPrivateKey': True} Jun 20 19:13:50.227655 waagent[1921]: 2025-06-20T19:13:50.227628Z INFO ExtHandler Fetch goal state completed Jun 20 19:13:50.243606 waagent[1921]: 2025-06-20T19:13:50.243566Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jun 20 19:13:50.247261 waagent[1921]: 2025-06-20T19:13:50.247224Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1921 Jun 20 19:13:50.247395 waagent[1921]: 2025-06-20T19:13:50.247332Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 19:13:50.247614 waagent[1921]: 2025-06-20T19:13:50.247591Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jun 20 19:13:50.248675 waagent[1921]: 2025-06-20T19:13:50.248619Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 19:13:50.248940 waagent[1921]: 2025-06-20T19:13:50.248916Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jun 20 19:13:50.249032 waagent[1921]: 2025-06-20T19:13:50.249014Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jun 20 19:13:50.249433 waagent[1921]: 2025-06-20T19:13:50.249406Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 19:13:50.268539 waagent[1921]: 2025-06-20T19:13:50.268515Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 19:13:50.268657 waagent[1921]: 2025-06-20T19:13:50.268633Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 19:13:50.273491 waagent[1921]: 2025-06-20T19:13:50.273340Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 20 19:13:50.278115 systemd[1]: Reload requested from client PID 1936 ('systemctl') (unit waagent.service)... Jun 20 19:13:50.278125 systemd[1]: Reloading... Jun 20 19:13:50.352385 zram_generator::config[1974]: No configuration found. Jun 20 19:13:50.426145 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:13:50.512887 systemd[1]: Reloading finished in 234 ms. 
Jun 20 19:13:50.527383 waagent[1921]: 2025-06-20T19:13:50.526302Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 19:13:50.527383 waagent[1921]: 2025-06-20T19:13:50.526973Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 19:13:50.636378 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jun 20 19:13:50.890700 waagent[1921]: 2025-06-20T19:13:50.890621Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 20 19:13:50.890885 waagent[1921]: 2025-06-20T19:13:50.890861Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jun 20 19:13:50.891529 waagent[1921]: 2025-06-20T19:13:50.891490Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 19:13:50.891667 waagent[1921]: 2025-06-20T19:13:50.891644Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:13:50.891724 waagent[1921]: 2025-06-20T19:13:50.891698Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:13:50.891884 waagent[1921]: 2025-06-20T19:13:50.891867Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 20 19:13:50.892064 waagent[1921]: 2025-06-20T19:13:50.892042Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 20 19:13:50.892386 waagent[1921]: 2025-06-20T19:13:50.892341Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 19:13:50.892474 waagent[1921]: 2025-06-20T19:13:50.892450Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 19:13:50.892573 waagent[1921]: 2025-06-20T19:13:50.892554Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 19:13:50.892573 waagent[1921]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 19:13:50.892573 waagent[1921]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 19:13:50.892573 waagent[1921]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 19:13:50.892573 waagent[1921]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:13:50.892573 waagent[1921]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:13:50.892573 waagent[1921]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 19:13:50.892873 waagent[1921]: 2025-06-20T19:13:50.892786Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 19:13:50.892946 waagent[1921]: 2025-06-20T19:13:50.892909Z INFO EnvHandler ExtHandler Configure routes Jun 20 19:13:50.892991 waagent[1921]: 2025-06-20T19:13:50.892971Z INFO EnvHandler ExtHandler Gateway:None Jun 20 19:13:50.893025 waagent[1921]: 2025-06-20T19:13:50.893010Z INFO EnvHandler ExtHandler Routes:None Jun 20 19:13:50.893298 waagent[1921]: 2025-06-20T19:13:50.893154Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jun 20 19:13:50.893411 waagent[1921]: 2025-06-20T19:13:50.893381Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 19:13:50.893470 waagent[1921]: 2025-06-20T19:13:50.893454Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 19:13:50.893575 waagent[1921]: 2025-06-20T19:13:50.893546Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jun 20 19:13:50.900369 waagent[1921]: 2025-06-20T19:13:50.900333Z INFO ExtHandler ExtHandler Jun 20 19:13:50.900423 waagent[1921]: 2025-06-20T19:13:50.900395Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 374870a8-fb6d-43f6-bd1b-0ba68c340104 correlation 4c90bc36-41ca-4d72-bd95-05e9c79b5610 created: 2025-06-20T19:12:44.136040Z] Jun 20 19:13:50.900631 waagent[1921]: 2025-06-20T19:13:50.900610Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 20 19:13:50.900950 waagent[1921]: 2025-06-20T19:13:50.900932Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jun 20 19:13:50.934627 waagent[1921]: 2025-06-20T19:13:50.934212Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jun 20 19:13:50.934627 waagent[1921]: Try `iptables -h' or 'iptables --help' for more information.) Jun 20 19:13:50.934627 waagent[1921]: 2025-06-20T19:13:50.934562Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: DF18F4BD-F5B1-4279-9177-EF7FBC246853;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jun 20 19:13:50.950903 waagent[1921]: 2025-06-20T19:13:50.950864Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 19:13:50.950903 waagent[1921]: Executing ['ip', '-a', '-o', 'link']: Jun 20 19:13:50.950903 waagent[1921]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 19:13:50.950903 waagent[1921]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d2:15:00 brd ff:ff:ff:ff:ff:ff\ alias Network Device Jun 20 19:13:50.950903 waagent[1921]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d2:15:00 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jun 20 19:13:50.950903 waagent[1921]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 19:13:50.950903 waagent[1921]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 19:13:50.950903 waagent[1921]: 2: eth0 inet 10.200.4.43/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 19:13:50.950903 waagent[1921]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 20 19:13:50.950903 waagent[1921]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 19:13:50.950903 waagent[1921]: 2: eth0 inet6 fe80::6245:bdff:fed2:1500/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 19:13:50.950903 waagent[1921]: 3: enP30832s1 inet6 fe80::6245:bdff:fed2:1500/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 
19:13:51.015451 waagent[1921]: 2025-06-20T19:13:51.015411Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jun 20 19:13:51.015451 waagent[1921]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:13:51.015451 waagent[1921]: pkts bytes target prot opt in out source destination Jun 20 19:13:51.015451 waagent[1921]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:13:51.015451 waagent[1921]: pkts bytes target prot opt in out source destination Jun 20 19:13:51.015451 waagent[1921]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Jun 20 19:13:51.015451 waagent[1921]: pkts bytes target prot opt in out source destination Jun 20 19:13:51.015451 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 19:13:51.015451 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 19:13:51.015451 waagent[1921]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 19:13:51.017731 waagent[1921]: 2025-06-20T19:13:51.017691Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 20 19:13:51.017731 waagent[1921]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:13:51.017731 waagent[1921]: pkts bytes target prot opt in out source destination Jun 20 19:13:51.017731 waagent[1921]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 19:13:51.017731 waagent[1921]: pkts bytes target prot opt in out source destination Jun 20 19:13:51.017731 waagent[1921]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Jun 20 19:13:51.017731 waagent[1921]: pkts bytes target prot opt in out source destination Jun 20 19:13:51.017731 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 19:13:51.017731 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 19:13:51.017731 waagent[1921]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 19:13:56.448907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:13:56.450690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:13:56.934162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:13:56.938637 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:13:56.972809 kubelet[2074]: E0620 19:13:56.972780 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:13:56.975415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:13:56.975532 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:13:56.975824 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.2M memory peak. Jun 20 19:14:04.084486 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:14:04.085573 systemd[1]: Started sshd@0-10.200.4.43:22-10.200.16.10:60096.service - OpenSSH per-connection server daemon (10.200.16.10:60096). 
Jun 20 19:14:04.757941 sshd[2082]: Accepted publickey for core from 10.200.16.10 port 60096 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:14:04.759204 sshd-session[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:14:04.763691 systemd-logind[1712]: New session 3 of user core. Jun 20 19:14:04.769478 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:14:05.288735 systemd[1]: Started sshd@1-10.200.4.43:22-10.200.16.10:60110.service - OpenSSH per-connection server daemon (10.200.16.10:60110). Jun 20 19:14:05.888812 sshd[2087]: Accepted publickey for core from 10.200.16.10 port 60110 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:14:05.890064 sshd-session[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:14:05.894781 systemd-logind[1712]: New session 4 of user core. Jun 20 19:14:05.899496 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:14:06.310731 sshd[2089]: Connection closed by 10.200.16.10 port 60110 Jun 20 19:14:06.311264 sshd-session[2087]: pam_unix(sshd:session): session closed for user core Jun 20 19:14:06.314169 systemd[1]: sshd@1-10.200.4.43:22-10.200.16.10:60110.service: Deactivated successfully. Jun 20 19:14:06.315701 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:14:06.317255 systemd-logind[1712]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:14:06.318042 systemd-logind[1712]: Removed session 4. Jun 20 19:14:06.415231 systemd[1]: Started sshd@2-10.200.4.43:22-10.200.16.10:60112.service - OpenSSH per-connection server daemon (10.200.16.10:60112). Jun 20 19:14:07.012559 sshd[2095]: Accepted publickey for core from 10.200.16.10 port 60112 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:14:07.013869 sshd-session[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:14:07.014977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 19:14:07.016935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:07.020400 systemd-logind[1712]: New session 5 of user core. Jun 20 19:14:07.027960 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:14:07.203764 chronyd[1719]: Selected source PHC0 Jun 20 19:14:07.435175 sshd[2100]: Connection closed by 10.200.16.10 port 60112 Jun 20 19:14:07.436020 sshd-session[2095]: pam_unix(sshd:session): session closed for user core Jun 20 19:14:07.438822 systemd[1]: sshd@2-10.200.4.43:22-10.200.16.10:60112.service: Deactivated successfully. Jun 20 19:14:07.440689 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:14:07.441403 systemd-logind[1712]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:14:07.442394 systemd-logind[1712]: Removed session 5. Jun 20 19:14:07.493168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:14:07.501562 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:14:07.534588 kubelet[2110]: E0620 19:14:07.534559 2110 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:14:07.537791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:14:07.537903 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:14:07.538123 systemd[1]: kubelet.service: Consumed 117ms CPU time, 108.9M memory peak. Jun 20 19:14:07.539755 systemd[1]: Started sshd@3-10.200.4.43:22-10.200.16.10:60122.service - OpenSSH per-connection server daemon (10.200.16.10:60122). Jun 20 19:14:08.146651 sshd[2118]: Accepted publickey for core from 10.200.16.10 port 60122 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:14:08.147900 sshd-session[2118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:14:08.152096 systemd-logind[1712]: New session 6 of user core. Jun 20 19:14:08.159484 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 19:14:08.568581 sshd[2120]: Connection closed by 10.200.16.10 port 60122 Jun 20 19:14:08.569111 sshd-session[2118]: pam_unix(sshd:session): session closed for user core Jun 20 19:14:08.571959 systemd[1]: sshd@3-10.200.4.43:22-10.200.16.10:60122.service: Deactivated successfully. Jun 20 19:14:08.573463 systemd-logind[1712]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:14:08.573501 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:14:08.575162 systemd-logind[1712]: Removed session 6. Jun 20 19:14:08.685294 systemd[1]: Started sshd@4-10.200.4.43:22-10.200.16.10:35678.service - OpenSSH per-connection server daemon (10.200.16.10:35678). Jun 20 19:14:09.285581 sshd[2126]: Accepted publickey for core from 10.200.16.10 port 35678 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:14:09.286871 sshd-session[2126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:14:09.291206 systemd-logind[1712]: New session 7 of user core. Jun 20 19:14:09.296511 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:14:09.707934 sudo[2129]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:14:09.708159 sudo[2129]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:14:09.721231 sudo[2129]: pam_unix(sudo:session): session closed for user root Jun 20 19:14:09.815984 sshd[2128]: Connection closed by 10.200.16.10 port 35678 Jun 20 19:14:09.816761 sshd-session[2126]: pam_unix(sshd:session): session closed for user core Jun 20 19:14:09.819654 systemd[1]: sshd@4-10.200.4.43:22-10.200.16.10:35678.service: Deactivated successfully. Jun 20 19:14:09.820945 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:14:09.822347 systemd-logind[1712]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:14:09.823111 systemd-logind[1712]: Removed session 7. Jun 20 19:14:09.944502 systemd[1]: Started sshd@5-10.200.4.43:22-10.200.16.10:35690.service - OpenSSH per-connection server daemon (10.200.16.10:35690). 
Jun 20 19:14:10.545687 sshd[2135]: Accepted publickey for core from 10.200.16.10 port 35690 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:14:10.546933 sshd-session[2135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:14:10.551228 systemd-logind[1712]: New session 8 of user core. Jun 20 19:14:10.559484 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 19:14:10.871554 sudo[2139]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:14:10.871932 sudo[2139]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:14:10.877883 sudo[2139]: pam_unix(sudo:session): session closed for user root Jun 20 19:14:10.881317 sudo[2138]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:14:10.881515 sudo[2138]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:14:10.888021 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:14:10.915509 augenrules[2161]: No rules Jun 20 19:14:10.916346 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:14:10.916541 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:14:10.917412 sudo[2138]: pam_unix(sudo:session): session closed for user root Jun 20 19:14:11.085483 sshd[2137]: Connection closed by 10.200.16.10 port 35690 Jun 20 19:14:11.085931 sshd-session[2135]: pam_unix(sshd:session): session closed for user core Jun 20 19:14:11.089221 systemd[1]: sshd@5-10.200.4.43:22-10.200.16.10:35690.service: Deactivated successfully. Jun 20 19:14:11.090486 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:14:11.091019 systemd-logind[1712]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:14:11.092009 systemd-logind[1712]: Removed session 8. Jun 20 19:14:11.202925 systemd[1]: Started sshd@6-10.200.4.43:22-10.200.16.10:35692.service - OpenSSH per-connection server daemon (10.200.16.10:35692). Jun 20 19:14:11.803211 sshd[2170]: Accepted publickey for core from 10.200.16.10 port 35692 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:14:11.804432 sshd-session[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:14:11.808647 systemd-logind[1712]: New session 9 of user core. Jun 20 19:14:11.815494 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:14:12.129048 sudo[2173]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:14:12.129272 sudo[2173]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:14:13.364691 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:14:13.373659 (dockerd)[2192]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:14:14.067341 dockerd[2192]: time="2025-06-20T19:14:14.067287898Z" level=info msg="Starting up" Jun 20 19:14:14.067992 dockerd[2192]: time="2025-06-20T19:14:14.067965905Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 19:14:14.167399 dockerd[2192]: time="2025-06-20T19:14:14.167348405Z" level=info msg="Loading containers: start." 
Jun 20 19:14:14.208420 kernel: Initializing XFRM netlink socket Jun 20 19:14:14.460379 systemd-networkd[1360]: docker0: Link UP Jun 20 19:14:14.472581 dockerd[2192]: time="2025-06-20T19:14:14.472556916Z" level=info msg="Loading containers: done." Jun 20 19:14:14.483101 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck265225737-merged.mount: Deactivated successfully. Jun 20 19:14:14.489455 dockerd[2192]: time="2025-06-20T19:14:14.489416780Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:14:14.489532 dockerd[2192]: time="2025-06-20T19:14:14.489477404Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 19:14:14.489573 dockerd[2192]: time="2025-06-20T19:14:14.489561584Z" level=info msg="Initializing buildkit" Jun 20 19:14:14.531247 dockerd[2192]: time="2025-06-20T19:14:14.531209239Z" level=info msg="Completed buildkit initialization" Jun 20 19:14:14.536756 dockerd[2192]: time="2025-06-20T19:14:14.536729868Z" level=info msg="Daemon has completed initialization" Jun 20 19:14:14.536840 dockerd[2192]: time="2025-06-20T19:14:14.536773304Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:14:14.537001 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:14:15.203128 containerd[1756]: time="2025-06-20T19:14:15.203098658Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jun 20 19:14:16.111735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount194087336.mount: Deactivated successfully. Jun 20 19:14:17.181759 containerd[1756]: time="2025-06-20T19:14:17.181716508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:17.184030 containerd[1756]: time="2025-06-20T19:14:17.184002466Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079107" Jun 20 19:14:17.187809 containerd[1756]: time="2025-06-20T19:14:17.187772415Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:17.191470 containerd[1756]: time="2025-06-20T19:14:17.191440143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:17.192248 containerd[1756]: time="2025-06-20T19:14:17.192042402Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.988912055s" Jun 20 19:14:17.192248 containerd[1756]: time="2025-06-20T19:14:17.192072842Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jun 20 19:14:17.192614 containerd[1756]: time="2025-06-20T19:14:17.192588524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jun 20 
19:14:17.698662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 19:14:17.700109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:18.431479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:18.438729 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:14:18.481552 kubelet[2457]: E0620 19:14:18.481528 2457 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:14:18.483496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:14:18.483608 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:14:18.483877 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.5M memory peak. Jun 20 19:14:18.817463 containerd[1756]: time="2025-06-20T19:14:18.817427121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:18.820372 containerd[1756]: time="2025-06-20T19:14:18.820339992Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018954" Jun 20 19:14:18.823858 containerd[1756]: time="2025-06-20T19:14:18.823808473Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:18.829719 containerd[1756]: time="2025-06-20T19:14:18.829670136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:18.830377 containerd[1756]: time="2025-06-20T19:14:18.830231293Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.637550571s" Jun 20 19:14:18.830377 containerd[1756]: time="2025-06-20T19:14:18.830259000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jun 20 19:14:18.830926 containerd[1756]: time="2025-06-20T19:14:18.830906900Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jun 20 19:14:19.965730 containerd[1756]: time="2025-06-20T19:14:19.965688560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:19.968204 containerd[1756]: time="2025-06-20T19:14:19.968172318Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155063" Jun 20 19:14:19.971548 containerd[1756]: time="2025-06-20T19:14:19.971507608Z" level=info msg="ImageCreate event 
name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:19.975617 containerd[1756]: time="2025-06-20T19:14:19.975579249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:19.976205 containerd[1756]: time="2025-06-20T19:14:19.976098815Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.145168874s" Jun 20 19:14:19.976205 containerd[1756]: time="2025-06-20T19:14:19.976125248Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jun 20 19:14:19.976672 containerd[1756]: time="2025-06-20T19:14:19.976640328Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jun 20 19:14:20.972100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897176817.mount: Deactivated successfully. Jun 20 19:14:21.306597 containerd[1756]: time="2025-06-20T19:14:21.306512588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:21.310121 containerd[1756]: time="2025-06-20T19:14:21.310091633Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892754" Jun 20 19:14:21.313009 containerd[1756]: time="2025-06-20T19:14:21.312972999Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:21.316553 containerd[1756]: time="2025-06-20T19:14:21.316518147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:21.316879 containerd[1756]: time="2025-06-20T19:14:21.316858009Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.340193568s" Jun 20 19:14:21.316915 containerd[1756]: time="2025-06-20T19:14:21.316888002Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jun 20 19:14:21.317424 containerd[1756]: time="2025-06-20T19:14:21.317401149Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jun 20 19:14:21.962304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443953873.mount: Deactivated successfully. 
Jun 20 19:14:22.790683 containerd[1756]: time="2025-06-20T19:14:22.790642061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:22.793456 containerd[1756]: time="2025-06-20T19:14:22.793427923Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Jun 20 19:14:22.796896 containerd[1756]: time="2025-06-20T19:14:22.796857235Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:22.800582 containerd[1756]: time="2025-06-20T19:14:22.800543345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:22.801427 containerd[1756]: time="2025-06-20T19:14:22.801073495Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.483646509s" Jun 20 19:14:22.801427 containerd[1756]: time="2025-06-20T19:14:22.801102084Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jun 20 19:14:22.801575 containerd[1756]: time="2025-06-20T19:14:22.801559176Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:14:22.811337 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jun 20 19:14:23.378930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154245186.mount: Deactivated successfully. 
Jun 20 19:14:23.399718 containerd[1756]: time="2025-06-20T19:14:23.399683018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:14:23.402285 containerd[1756]: time="2025-06-20T19:14:23.402256898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 20 19:14:23.405399 containerd[1756]: time="2025-06-20T19:14:23.405362938Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:14:23.408663 containerd[1756]: time="2025-06-20T19:14:23.408623650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:14:23.409325 containerd[1756]: time="2025-06-20T19:14:23.408997000Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.411368ms" Jun 20 19:14:23.409325 containerd[1756]: time="2025-06-20T19:14:23.409023382Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:14:23.409508 containerd[1756]: time="2025-06-20T19:14:23.409493909Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jun 20 19:14:24.015136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2120570851.mount: Deactivated successfully. 
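The containerd entries above report both the "bytes read" counter and the wall-clock pull time for each image, so a rough per-image pull-throughput estimate is straightforward. A small Python sketch using the figures copied from these log lines (logged byte counts, not unpacked image sizes):

```python
# Rough pull-throughput estimate from the containerd entries above.
# Figures (bytes read, seconds) are copied verbatim from the log lines.
pulls = {
    "kube-apiserver:v1.33.2":          (30_079_107, 1.988912055),
    "kube-controller-manager:v1.33.2": (26_018_954, 1.637550571),
    "kube-scheduler:v1.33.2":          (20_155_063, 1.145168874),
    "kube-proxy:v1.33.2":              (31_892_754, 1.340193568),
    "coredns:v1.12.0":                 (20_942_246, 1.483646509),
    "pause:3.10":                      (321_146,    0.607411368),
}

for image, (nbytes, seconds) in pulls.items():
    mib_per_s = nbytes / seconds / (1024 * 1024)
    print(f"{image:35s} {mib_per_s:6.1f} MiB/s")
```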
Jun 20 19:14:25.490226 containerd[1756]: time="2025-06-20T19:14:25.490182816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:25.492510 containerd[1756]: time="2025-06-20T19:14:25.492479577Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247183" Jun 20 19:14:25.495492 containerd[1756]: time="2025-06-20T19:14:25.495462262Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:25.499562 containerd[1756]: time="2025-06-20T19:14:25.499528897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:25.500386 containerd[1756]: time="2025-06-20T19:14:25.500179146Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.09063184s" Jun 20 19:14:25.500386 containerd[1756]: time="2025-06-20T19:14:25.500211015Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jun 20 19:14:27.642229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:27.642670 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.5M memory peak. Jun 20 19:14:27.644376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:27.662798 systemd[1]: Reload requested from client PID 2615 ('systemctl') (unit session-9.scope)... Jun 20 19:14:27.662886 systemd[1]: Reloading... Jun 20 19:14:27.745383 zram_generator::config[2660]: No configuration found. Jun 20 19:14:27.830476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:14:27.925159 systemd[1]: Reloading finished in 262 ms. Jun 20 19:14:27.995957 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:14:27.996025 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:14:27.996238 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:27.996274 systemd[1]: kubelet.service: Consumed 71ms CPU time, 77.9M memory peak. Jun 20 19:14:27.997864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:28.627371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:28.635623 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:14:28.666952 kubelet[2728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:14:28.666952 kubelet[2728]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jun 20 19:14:28.666952 kubelet[2728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:14:28.667186 kubelet[2728]: I0620 19:14:28.666977 2728 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:14:29.449369 kubelet[2728]: I0620 19:14:29.449110 2728 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 19:14:29.449369 kubelet[2728]: I0620 19:14:29.449137 2728 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:14:29.449661 kubelet[2728]: I0620 19:14:29.449652 2728 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 19:14:29.476383 kubelet[2728]: E0620 19:14:29.476340 2728 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 20 19:14:29.477470 kubelet[2728]: I0620 19:14:29.477337 2728 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:14:29.480907 update_engine[1713]: I20250620 19:14:29.480858 1713 update_attempter.cc:509] Updating boot flags... Jun 20 19:14:29.485717 kubelet[2728]: I0620 19:14:29.485700 2728 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:14:29.488743 kubelet[2728]: I0620 19:14:29.488685 2728 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:14:29.489523 kubelet[2728]: I0620 19:14:29.489503 2728 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:14:29.489749 kubelet[2728]: I0620 19:14:29.489582 2728 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-dde0712b19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:14:29.489879 kubelet[2728]: I0620 19:14:29.489873 2728 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:14:29.489916 kubelet[2728]: I0620 19:14:29.489912 2728 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 19:14:29.490041 kubelet[2728]: I0620 19:14:29.490036 2728 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:14:29.492600 kubelet[2728]: I0620 19:14:29.492582 2728 kubelet.go:480] "Attempting to sync node with API server" Jun 20 19:14:29.492752 kubelet[2728]: I0620 19:14:29.492604 2728 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:14:29.492752 kubelet[2728]: I0620 19:14:29.492623 2728 kubelet.go:386] "Adding apiserver pod source" Jun 20 19:14:29.497374 kubelet[2728]: I0620 19:14:29.495494 2728 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:14:29.511643 kubelet[2728]: E0620 19:14:29.511620 2728 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-dde0712b19&limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 19:14:29.512150 kubelet[2728]: I0620 19:14:29.512136 2728 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:14:29.512842 kubelet[2728]: I0620 19:14:29.512828 2728 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection 
featuregate is disabled" Jun 20 19:14:29.515153 kubelet[2728]: W0620 19:14:29.515130 2728 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:14:29.525628 kubelet[2728]: E0620 19:14:29.525602 2728 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 19:14:29.527854 kubelet[2728]: I0620 19:14:29.527830 2728 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:14:29.527922 kubelet[2728]: I0620 19:14:29.527884 2728 server.go:1289] "Started kubelet" Jun 20 19:14:29.549914 kubelet[2728]: I0620 19:14:29.549868 2728 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:14:29.552992 kubelet[2728]: I0620 19:14:29.552633 2728 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:14:29.552992 kubelet[2728]: I0620 19:14:29.552984 2728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:14:29.554394 kubelet[2728]: E0620 19:14:29.552887 2728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.0-a-dde0712b19.184ad6275067085a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-dde0712b19,UID:ci-4344.1.0-a-dde0712b19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-dde0712b19,},FirstTimestamp:2025-06-20 19:14:29.527857242 +0000 UTC m=+0.888975982,LastTimestamp:2025-06-20 19:14:29.527857242 +0000 UTC m=+0.888975982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-dde0712b19,}" Jun 20 19:14:29.555834 kubelet[2728]: E0620 19:14:29.555665 2728 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:14:29.555834 kubelet[2728]: I0620 19:14:29.555749 2728 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:14:29.556694 kubelet[2728]: I0620 19:14:29.556489 2728 server.go:317] "Adding debug handlers to kubelet server" Jun 20 19:14:29.561561 kubelet[2728]: I0620 19:14:29.557991 2728 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:14:29.561561 kubelet[2728]: I0620 19:14:29.559839 2728 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:14:29.561561 kubelet[2728]: E0620 19:14:29.560060 2728 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-dde0712b19\" not found" Jun 20 19:14:29.561561 kubelet[2728]: I0620 19:14:29.561152 2728 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:14:29.561902 kubelet[2728]: I0620 19:14:29.561758 2728 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:14:29.563393 kubelet[2728]: I0620 19:14:29.562004 2728 factory.go:223] Registration of the systemd container factory successfully Jun 20 19:14:29.563393 kubelet[2728]: I0620 19:14:29.562084 2728 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:14:29.563525 kubelet[2728]: E0620 19:14:29.563490 2728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-dde0712b19?timeout=10s\": dial tcp 10.200.4.43:6443: connect: connection refused" interval="200ms" Jun 20 19:14:29.564109 kubelet[2728]: E0620 19:14:29.564086 2728 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 19:14:29.567370 kubelet[2728]: I0620 19:14:29.564997 2728 factory.go:223] Registration of the containerd container factory successfully Jun 20 19:14:29.579235 kubelet[2728]: I0620 19:14:29.579200 2728 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:14:29.579235 kubelet[2728]: I0620 19:14:29.579232 2728 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:14:29.579316 kubelet[2728]: I0620 19:14:29.579246 2728 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:14:29.588002 kubelet[2728]: I0620 19:14:29.587988 2728 policy_none.go:49] "None policy: Start" Jun 20 19:14:29.588057 kubelet[2728]: I0620 19:14:29.588007 2728 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:14:29.588057 kubelet[2728]: I0620 19:14:29.588017 2728 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:14:29.612461 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:14:29.613207 kubelet[2728]: I0620 19:14:29.613179 2728 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 19:14:29.617772 kubelet[2728]: I0620 19:14:29.617756 2728 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:14:29.617829 kubelet[2728]: I0620 19:14:29.617783 2728 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 19:14:29.617829 kubelet[2728]: I0620 19:14:29.617797 2728 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:14:29.617829 kubelet[2728]: I0620 19:14:29.617803 2728 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 19:14:29.617889 kubelet[2728]: E0620 19:14:29.617830 2728 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:14:29.619940 kubelet[2728]: E0620 19:14:29.619169 2728 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 19:14:29.631441 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:14:29.636410 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:14:29.643977 kubelet[2728]: E0620 19:14:29.643956 2728 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 19:14:29.644109 kubelet[2728]: I0620 19:14:29.644099 2728 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:14:29.644136 kubelet[2728]: I0620 19:14:29.644112 2728 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:14:29.645802 kubelet[2728]: I0620 19:14:29.645792 2728 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:14:29.650371 kubelet[2728]: E0620 19:14:29.647437 2728 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:14:29.650371 kubelet[2728]: E0620 19:14:29.647480 2728 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.0-a-dde0712b19\" not found" Jun 20 19:14:29.728007 systemd[1]: Created slice kubepods-burstable-pod9972871aaa99d9682a3f6ca206c99a91.slice - libcontainer container kubepods-burstable-pod9972871aaa99d9682a3f6ca206c99a91.slice. Jun 20 19:14:29.737818 kubelet[2728]: E0620 19:14:29.737779 2728 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-dde0712b19\" not found" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.740720 systemd[1]: Created slice kubepods-burstable-pod44bc5c4f618e33393a9ed21ee01c6d9d.slice - libcontainer container kubepods-burstable-pod44bc5c4f618e33393a9ed21ee01c6d9d.slice. 
Jun 20 19:14:29.745251 kubelet[2728]: I0620 19:14:29.745237 2728 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.745442 kubelet[2728]: E0620 19:14:29.745426 2728 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.43:6443/api/v1/nodes\": dial tcp 10.200.4.43:6443: connect: connection refused" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.750331 kubelet[2728]: E0620 19:14:29.750205 2728 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-dde0712b19\" not found" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.752012 systemd[1]: Created slice kubepods-burstable-pod166124d5e574396de0941c60dfb73eb7.slice - libcontainer container kubepods-burstable-pod166124d5e574396de0941c60dfb73eb7.slice. Jun 20 19:14:29.753286 kubelet[2728]: E0620 19:14:29.753271 2728 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-dde0712b19\" not found" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.763532 kubelet[2728]: I0620 19:14:29.763504 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44bc5c4f618e33393a9ed21ee01c6d9d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-dde0712b19\" (UID: \"44bc5c4f618e33393a9ed21ee01c6d9d\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.763532 kubelet[2728]: I0620 19:14:29.763531 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9972871aaa99d9682a3f6ca206c99a91-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-dde0712b19\" (UID: \"9972871aaa99d9682a3f6ca206c99a91\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.763618 kubelet[2728]: I0620 19:14:29.763549 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9972871aaa99d9682a3f6ca206c99a91-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-dde0712b19\" (UID: \"9972871aaa99d9682a3f6ca206c99a91\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.763618 kubelet[2728]: I0620 19:14:29.763588 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9972871aaa99d9682a3f6ca206c99a91-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-dde0712b19\" (UID: \"9972871aaa99d9682a3f6ca206c99a91\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.763618 kubelet[2728]: I0620 19:14:29.763607 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44bc5c4f618e33393a9ed21ee01c6d9d-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-dde0712b19\" (UID: \"44bc5c4f618e33393a9ed21ee01c6d9d\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.763686 kubelet[2728]: I0620 19:14:29.763624 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/166124d5e574396de0941c60dfb73eb7-kubeconfig\") pod 
\"kube-scheduler-ci-4344.1.0-a-dde0712b19\" (UID: \"166124d5e574396de0941c60dfb73eb7\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.763686 kubelet[2728]: I0620 19:14:29.763641 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/44bc5c4f618e33393a9ed21ee01c6d9d-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-dde0712b19\" (UID: \"44bc5c4f618e33393a9ed21ee01c6d9d\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.763686 kubelet[2728]: I0620 19:14:29.763658 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44bc5c4f618e33393a9ed21ee01c6d9d-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-dde0712b19\" (UID: \"44bc5c4f618e33393a9ed21ee01c6d9d\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.763686 kubelet[2728]: I0620 19:14:29.763676 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44bc5c4f618e33393a9ed21ee01c6d9d-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-dde0712b19\" (UID: \"44bc5c4f618e33393a9ed21ee01c6d9d\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.763923 kubelet[2728]: E0620 19:14:29.763903 2728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-dde0712b19?timeout=10s\": dial tcp 10.200.4.43:6443: connect: connection refused" interval="400ms" Jun 20 19:14:29.947646 kubelet[2728]: I0620 19:14:29.947605 2728 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:29.947927 kubelet[2728]: E0620 19:14:29.947890 2728 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.43:6443/api/v1/nodes\": dial tcp 10.200.4.43:6443: connect: connection refused" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:30.039501 containerd[1756]: time="2025-06-20T19:14:30.039413102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-dde0712b19,Uid:9972871aaa99d9682a3f6ca206c99a91,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:30.050817 containerd[1756]: time="2025-06-20T19:14:30.050791790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-dde0712b19,Uid:44bc5c4f618e33393a9ed21ee01c6d9d,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:30.057516 containerd[1756]: time="2025-06-20T19:14:30.057485842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-dde0712b19,Uid:166124d5e574396de0941c60dfb73eb7,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:30.116468 containerd[1756]: time="2025-06-20T19:14:30.116398256Z" level=info msg="connecting to shim dcc09ce22ae5f32571f5a98fb890a01bba7eaafb8b8ca2f642435e2d19c54316" address="unix:///run/containerd/s/ea4c70a9cb80662a9dd3a12df2499b15e760a312c33e4483f7d41f186089f7b7" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:30.139901 containerd[1756]: time="2025-06-20T19:14:30.139601257Z" level=info msg="connecting to shim c3f1dd43b66efb2d1802c2d69b15f8516429d8a75d1352ca62d98c5d6eb5b77c" 
address="unix:///run/containerd/s/ce9c5db112b7fb3cf18a187c61db3513dc3e18abdf5b042f35c8b543b4d780a6" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:30.141668 systemd[1]: Started cri-containerd-dcc09ce22ae5f32571f5a98fb890a01bba7eaafb8b8ca2f642435e2d19c54316.scope - libcontainer container dcc09ce22ae5f32571f5a98fb890a01bba7eaafb8b8ca2f642435e2d19c54316. Jun 20 19:14:30.153670 containerd[1756]: time="2025-06-20T19:14:30.153562860Z" level=info msg="connecting to shim 061d635e29fd9e7bf4fbb4cb2f62955e0af03590692ae591412c8a33ff94d6f1" address="unix:///run/containerd/s/008a12f3faf94b9a7abf892b257afc8e6a6ac0ca661f06022e3889f9dff819be" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:30.164900 kubelet[2728]: E0620 19:14:30.164879 2728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-dde0712b19?timeout=10s\": dial tcp 10.200.4.43:6443: connect: connection refused" interval="800ms" Jun 20 19:14:30.170497 systemd[1]: Started cri-containerd-c3f1dd43b66efb2d1802c2d69b15f8516429d8a75d1352ca62d98c5d6eb5b77c.scope - libcontainer container c3f1dd43b66efb2d1802c2d69b15f8516429d8a75d1352ca62d98c5d6eb5b77c. Jun 20 19:14:30.188456 systemd[1]: Started cri-containerd-061d635e29fd9e7bf4fbb4cb2f62955e0af03590692ae591412c8a33ff94d6f1.scope - libcontainer container 061d635e29fd9e7bf4fbb4cb2f62955e0af03590692ae591412c8a33ff94d6f1. Jun 20 19:14:30.207343 containerd[1756]: time="2025-06-20T19:14:30.207121800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-dde0712b19,Uid:9972871aaa99d9682a3f6ca206c99a91,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcc09ce22ae5f32571f5a98fb890a01bba7eaafb8b8ca2f642435e2d19c54316\"" Jun 20 19:14:30.215602 containerd[1756]: time="2025-06-20T19:14:30.215578702Z" level=info msg="CreateContainer within sandbox \"dcc09ce22ae5f32571f5a98fb890a01bba7eaafb8b8ca2f642435e2d19c54316\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:14:30.235114 containerd[1756]: time="2025-06-20T19:14:30.235086146Z" level=info msg="Container d87103231e1eb91bd45c8172983a2695cd9d2a29c46410a5172044f9c3f8304d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:30.239971 containerd[1756]: time="2025-06-20T19:14:30.239937592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-dde0712b19,Uid:44bc5c4f618e33393a9ed21ee01c6d9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3f1dd43b66efb2d1802c2d69b15f8516429d8a75d1352ca62d98c5d6eb5b77c\"" Jun 20 19:14:30.247625 containerd[1756]: time="2025-06-20T19:14:30.247593637Z" level=info msg="CreateContainer within sandbox \"c3f1dd43b66efb2d1802c2d69b15f8516429d8a75d1352ca62d98c5d6eb5b77c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:14:30.265379 containerd[1756]: time="2025-06-20T19:14:30.265346334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-dde0712b19,Uid:166124d5e574396de0941c60dfb73eb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"061d635e29fd9e7bf4fbb4cb2f62955e0af03590692ae591412c8a33ff94d6f1\"" Jun 20 19:14:30.267809 containerd[1756]: time="2025-06-20T19:14:30.267786420Z" level=info msg="Container d67618bea90985debd87d5a2c45167662b64fde2a416c07d44e9dfb99eb3009f: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:30.269409 containerd[1756]: time="2025-06-20T19:14:30.269386724Z" level=info 
msg="CreateContainer within sandbox \"dcc09ce22ae5f32571f5a98fb890a01bba7eaafb8b8ca2f642435e2d19c54316\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d87103231e1eb91bd45c8172983a2695cd9d2a29c46410a5172044f9c3f8304d\"" Jun 20 19:14:30.270864 containerd[1756]: time="2025-06-20T19:14:30.269782994Z" level=info msg="StartContainer for \"d87103231e1eb91bd45c8172983a2695cd9d2a29c46410a5172044f9c3f8304d\"" Jun 20 19:14:30.271469 containerd[1756]: time="2025-06-20T19:14:30.271448377Z" level=info msg="connecting to shim d87103231e1eb91bd45c8172983a2695cd9d2a29c46410a5172044f9c3f8304d" address="unix:///run/containerd/s/ea4c70a9cb80662a9dd3a12df2499b15e760a312c33e4483f7d41f186089f7b7" protocol=ttrpc version=3 Jun 20 19:14:30.275062 containerd[1756]: time="2025-06-20T19:14:30.274628909Z" level=info msg="CreateContainer within sandbox \"061d635e29fd9e7bf4fbb4cb2f62955e0af03590692ae591412c8a33ff94d6f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:14:30.284470 systemd[1]: Started cri-containerd-d87103231e1eb91bd45c8172983a2695cd9d2a29c46410a5172044f9c3f8304d.scope - libcontainer container d87103231e1eb91bd45c8172983a2695cd9d2a29c46410a5172044f9c3f8304d. Jun 20 19:14:30.292421 containerd[1756]: time="2025-06-20T19:14:30.291857913Z" level=info msg="CreateContainer within sandbox \"c3f1dd43b66efb2d1802c2d69b15f8516429d8a75d1352ca62d98c5d6eb5b77c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d67618bea90985debd87d5a2c45167662b64fde2a416c07d44e9dfb99eb3009f\"" Jun 20 19:14:30.293606 containerd[1756]: time="2025-06-20T19:14:30.292590936Z" level=info msg="StartContainer for \"d67618bea90985debd87d5a2c45167662b64fde2a416c07d44e9dfb99eb3009f\"" Jun 20 19:14:30.293606 containerd[1756]: time="2025-06-20T19:14:30.293555443Z" level=info msg="connecting to shim d67618bea90985debd87d5a2c45167662b64fde2a416c07d44e9dfb99eb3009f" address="unix:///run/containerd/s/ce9c5db112b7fb3cf18a187c61db3513dc3e18abdf5b042f35c8b543b4d780a6" protocol=ttrpc version=3 Jun 20 19:14:30.303734 containerd[1756]: time="2025-06-20T19:14:30.303226412Z" level=info msg="Container a743d5ad9a164760f720cfe8ad8fd0845306f6e4e0c1b55bc5963eefe985af68: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:30.309526 systemd[1]: Started cri-containerd-d67618bea90985debd87d5a2c45167662b64fde2a416c07d44e9dfb99eb3009f.scope - libcontainer container d67618bea90985debd87d5a2c45167662b64fde2a416c07d44e9dfb99eb3009f. 
Jun 20 19:14:30.327138 containerd[1756]: time="2025-06-20T19:14:30.327115324Z" level=info msg="CreateContainer within sandbox \"061d635e29fd9e7bf4fbb4cb2f62955e0af03590692ae591412c8a33ff94d6f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a743d5ad9a164760f720cfe8ad8fd0845306f6e4e0c1b55bc5963eefe985af68\"" Jun 20 19:14:30.328030 containerd[1756]: time="2025-06-20T19:14:30.328009073Z" level=info msg="StartContainer for \"a743d5ad9a164760f720cfe8ad8fd0845306f6e4e0c1b55bc5963eefe985af68\"" Jun 20 19:14:30.332735 containerd[1756]: time="2025-06-20T19:14:30.332709951Z" level=info msg="connecting to shim a743d5ad9a164760f720cfe8ad8fd0845306f6e4e0c1b55bc5963eefe985af68" address="unix:///run/containerd/s/008a12f3faf94b9a7abf892b257afc8e6a6ac0ca661f06022e3889f9dff819be" protocol=ttrpc version=3 Jun 20 19:14:30.341620 containerd[1756]: time="2025-06-20T19:14:30.341598472Z" level=info msg="StartContainer for \"d87103231e1eb91bd45c8172983a2695cd9d2a29c46410a5172044f9c3f8304d\" returns successfully" Jun 20 19:14:30.350574 kubelet[2728]: I0620 19:14:30.350216 2728 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:30.350702 kubelet[2728]: E0620 19:14:30.350673 2728 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.43:6443/api/v1/nodes\": dial tcp 10.200.4.43:6443: connect: connection refused" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:30.355480 systemd[1]: Started cri-containerd-a743d5ad9a164760f720cfe8ad8fd0845306f6e4e0c1b55bc5963eefe985af68.scope - libcontainer container a743d5ad9a164760f720cfe8ad8fd0845306f6e4e0c1b55bc5963eefe985af68. Jun 20 19:14:30.369784 containerd[1756]: time="2025-06-20T19:14:30.369758427Z" level=info msg="StartContainer for \"d67618bea90985debd87d5a2c45167662b64fde2a416c07d44e9dfb99eb3009f\" returns successfully" Jun 20 19:14:30.420470 containerd[1756]: time="2025-06-20T19:14:30.420449662Z" level=info msg="StartContainer for \"a743d5ad9a164760f720cfe8ad8fd0845306f6e4e0c1b55bc5963eefe985af68\" returns successfully" Jun 20 19:14:30.628385 kubelet[2728]: E0620 19:14:30.628158 2728 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-dde0712b19\" not found" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:30.633667 kubelet[2728]: E0620 19:14:30.633647 2728 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-dde0712b19\" not found" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:30.635345 kubelet[2728]: E0620 19:14:30.635226 2728 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-dde0712b19\" not found" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:31.153909 kubelet[2728]: I0620 19:14:31.153833 2728 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:31.639339 kubelet[2728]: E0620 19:14:31.639314 2728 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-dde0712b19\" not found" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:31.639664 kubelet[2728]: E0620 19:14:31.639645 2728 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-dde0712b19\" not found" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:31.983984 kubelet[2728]: E0620 19:14:31.983954 2728 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.0-a-dde0712b19\" not found" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:32.128813 kubelet[2728]: I0620 19:14:32.128774 2728 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:32.161933 kubelet[2728]: I0620 19:14:32.161697 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:32.169158 kubelet[2728]: E0620 19:14:32.169133 2728 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-dde0712b19\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:32.169158 kubelet[2728]: I0620 19:14:32.169157 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:32.170289 kubelet[2728]: E0620 19:14:32.170270 2728 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-dde0712b19\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:32.170289 kubelet[2728]: I0620 19:14:32.170287 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:32.171369 kubelet[2728]: E0620 19:14:32.171336 2728 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.0-a-dde0712b19\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:32.521768 kubelet[2728]: I0620 19:14:32.521744 2728 apiserver.go:52] "Watching apiserver" Jun 20 19:14:32.561414 kubelet[2728]: I0620 19:14:32.561398 2728 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:14:34.086030 systemd[1]: Reload requested from client PID 3035 ('systemctl') (unit session-9.scope)... Jun 20 19:14:34.086044 systemd[1]: Reloading... Jun 20 19:14:34.172392 zram_generator::config[3081]: No configuration found. Jun 20 19:14:34.260982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:14:34.357628 systemd[1]: Reloading finished in 271 ms. Jun 20 19:14:34.379579 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:34.401020 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:14:34.401213 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:34.401259 systemd[1]: kubelet.service: Consumed 1.154s CPU time, 130.7M memory peak. Jun 20 19:14:34.402493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:14:34.807449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:14:34.814640 (kubelet)[3148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:14:34.846044 kubelet[3148]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:14:34.846379 kubelet[3148]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:14:34.846379 kubelet[3148]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:14:34.846379 kubelet[3148]: I0620 19:14:34.846321 3148 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:14:34.853292 kubelet[3148]: I0620 19:14:34.853248 3148 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 19:14:34.853292 kubelet[3148]: I0620 19:14:34.853275 3148 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:14:34.853920 kubelet[3148]: I0620 19:14:34.853903 3148 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 19:14:34.855862 kubelet[3148]: I0620 19:14:34.855836 3148 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 19:14:34.858868 kubelet[3148]: I0620 19:14:34.858391 3148 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:14:34.861185 kubelet[3148]: I0620 19:14:34.861172 3148 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:14:34.863576 kubelet[3148]: I0620 19:14:34.863560 3148 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:14:34.863712 kubelet[3148]: I0620 19:14:34.863688 3148 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:14:34.863843 kubelet[3148]: I0620 19:14:34.863710 3148 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-dde0712b19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:14:34.863927 kubelet[3148]: I0620 19:14:34.863843 3148 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:14:34.863927 kubelet[3148]: I0620 19:14:34.863851 3148 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 19:14:34.863927 kubelet[3148]: I0620 19:14:34.863884 3148 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:14:34.863999 kubelet[3148]: I0620 19:14:34.863992 3148 kubelet.go:480] "Attempting to sync node with API server" Jun 20 19:14:34.864020 kubelet[3148]: I0620 19:14:34.864006 3148 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:14:34.864041 kubelet[3148]: I0620 19:14:34.864026 3148 kubelet.go:386] "Adding apiserver pod source" Jun 20 19:14:34.864041 kubelet[3148]: I0620 19:14:34.864037 3148 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:14:34.867368 kubelet[3148]: I0620 19:14:34.865895 3148 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:14:34.867368 kubelet[3148]: I0620 19:14:34.866281 3148 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 19:14:34.872267 kubelet[3148]: I0620 19:14:34.872253 3148 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:14:34.872328 kubelet[3148]: I0620 19:14:34.872288 3148 server.go:1289] "Started kubelet" Jun 20 19:14:34.874392 kubelet[3148]: I0620 19:14:34.874247 3148 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:14:34.879463 kubelet[3148]: I0620 
19:14:34.879437 3148 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:14:34.880811 kubelet[3148]: I0620 19:14:34.880417 3148 server.go:317] "Adding debug handlers to kubelet server" Jun 20 19:14:34.883646 kubelet[3148]: I0620 19:14:34.883609 3148 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:14:34.883842 kubelet[3148]: I0620 19:14:34.883833 3148 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:14:34.887849 kubelet[3148]: I0620 19:14:34.887835 3148 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:14:34.889106 kubelet[3148]: I0620 19:14:34.889096 3148 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:14:34.889256 kubelet[3148]: E0620 19:14:34.889240 3148 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-dde0712b19\" not found" Jun 20 19:14:34.890691 kubelet[3148]: I0620 19:14:34.890507 3148 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:14:34.891039 kubelet[3148]: I0620 19:14:34.890933 3148 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:14:34.893955 kubelet[3148]: I0620 19:14:34.893916 3148 factory.go:223] Registration of the systemd container factory successfully Jun 20 19:14:34.894260 kubelet[3148]: I0620 19:14:34.894239 3148 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 19:14:34.894505 kubelet[3148]: I0620 19:14:34.894460 3148 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:14:34.898231 kubelet[3148]: I0620 19:14:34.898208 3148 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 19:14:34.898231 kubelet[3148]: I0620 19:14:34.898227 3148 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 19:14:34.898307 kubelet[3148]: I0620 19:14:34.898241 3148 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:14:34.898307 kubelet[3148]: I0620 19:14:34.898253 3148 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 19:14:34.898307 kubelet[3148]: E0620 19:14:34.898280 3148 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:14:34.899805 kubelet[3148]: E0620 19:14:34.899791 3148 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:14:34.901771 kubelet[3148]: I0620 19:14:34.901752 3148 factory.go:223] Registration of the containerd container factory successfully Jun 20 19:14:34.940380 kubelet[3148]: I0620 19:14:34.940366 3148 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:14:34.940380 kubelet[3148]: I0620 19:14:34.940379 3148 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:14:34.940470 kubelet[3148]: I0620 19:14:34.940395 3148 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:14:34.941370 kubelet[3148]: I0620 19:14:34.940534 3148 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:14:34.941370 kubelet[3148]: I0620 19:14:34.940544 3148 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:14:34.941370 kubelet[3148]: I0620 19:14:34.940563 3148 policy_none.go:49] "None policy: Start" Jun 20 19:14:34.941370 kubelet[3148]: I0620 19:14:34.940571 3148 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:14:34.941370 kubelet[3148]: I0620 19:14:34.940580 3148 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:14:34.941370 kubelet[3148]: I0620 19:14:34.940705 3148 state_mem.go:75] "Updated machine memory state" Jun 20 19:14:34.943818 kubelet[3148]: E0620 19:14:34.943809 3148 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 19:14:34.944422 kubelet[3148]: I0620 19:14:34.944413 3148 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:14:34.944504 kubelet[3148]: I0620 19:14:34.944487 3148 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:14:34.945156 kubelet[3148]: I0620 19:14:34.944715 3148 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:14:34.947483 kubelet[3148]: E0620 19:14:34.946503 3148 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 19:14:34.998882 kubelet[3148]: I0620 19:14:34.998866 3148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:34.999010 kubelet[3148]: I0620 19:14:34.998868 3148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:34.999082 kubelet[3148]: I0620 19:14:34.998944 3148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.014425 kubelet[3148]: I0620 19:14:35.014410 3148 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:14:35.017281 kubelet[3148]: I0620 19:14:35.017268 3148 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:14:35.018576 kubelet[3148]: I0620 19:14:35.018430 3148 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:14:35.048070 kubelet[3148]: I0620 19:14:35.048058 3148 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.065509 kubelet[3148]: I0620 19:14:35.065453 3148 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.065509 kubelet[3148]: I0620 19:14:35.065503 3148 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.094399 kubelet[3148]: I0620 19:14:35.094145 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/44bc5c4f618e33393a9ed21ee01c6d9d-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-dde0712b19\" (UID: \"44bc5c4f618e33393a9ed21ee01c6d9d\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.094399 kubelet[3148]: I0620 19:14:35.094176 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9972871aaa99d9682a3f6ca206c99a91-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-dde0712b19\" (UID: \"9972871aaa99d9682a3f6ca206c99a91\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.094399 kubelet[3148]: I0620 19:14:35.094195 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9972871aaa99d9682a3f6ca206c99a91-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-dde0712b19\" (UID: \"9972871aaa99d9682a3f6ca206c99a91\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.094399 kubelet[3148]: I0620 19:14:35.094213 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44bc5c4f618e33393a9ed21ee01c6d9d-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-dde0712b19\" (UID: \"44bc5c4f618e33393a9ed21ee01c6d9d\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.094399 kubelet[3148]: I0620 19:14:35.094231 3148 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44bc5c4f618e33393a9ed21ee01c6d9d-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-dde0712b19\" (UID: \"44bc5c4f618e33393a9ed21ee01c6d9d\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.094557 kubelet[3148]: I0620 19:14:35.094249 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44bc5c4f618e33393a9ed21ee01c6d9d-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-dde0712b19\" (UID: \"44bc5c4f618e33393a9ed21ee01c6d9d\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.094557 kubelet[3148]: I0620 19:14:35.094267 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44bc5c4f618e33393a9ed21ee01c6d9d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-dde0712b19\" (UID: \"44bc5c4f618e33393a9ed21ee01c6d9d\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.094557 kubelet[3148]: I0620 19:14:35.094284 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/166124d5e574396de0941c60dfb73eb7-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-dde0712b19\" (UID: \"166124d5e574396de0941c60dfb73eb7\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.094557 kubelet[3148]: I0620 19:14:35.094298 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9972871aaa99d9682a3f6ca206c99a91-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-dde0712b19\" (UID: \"9972871aaa99d9682a3f6ca206c99a91\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.864650 kubelet[3148]: I0620 19:14:35.864624 3148 apiserver.go:52] "Watching apiserver" Jun 20 19:14:35.890893 kubelet[3148]: I0620 19:14:35.890872 3148 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:14:35.928369 kubelet[3148]: I0620 19:14:35.928326 3148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.928784 kubelet[3148]: I0620 19:14:35.928705 3148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.938565 kubelet[3148]: I0620 19:14:35.938485 3148 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:14:35.938653 kubelet[3148]: E0620 19:14:35.938593 3148 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-dde0712b19\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.939134 kubelet[3148]: I0620 19:14:35.939110 3148 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 19:14:35.939197 kubelet[3148]: E0620 19:14:35.939150 3148 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4344.1.0-a-dde0712b19\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" Jun 20 19:14:35.946622 kubelet[3148]: I0620 19:14:35.946571 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.0-a-dde0712b19" podStartSLOduration=0.946560629 podStartE2EDuration="946.560629ms" podCreationTimestamp="2025-06-20 19:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:35.946262443 +0000 UTC m=+1.128169770" watchObservedRunningTime="2025-06-20 19:14:35.946560629 +0000 UTC m=+1.128467933" Jun 20 19:14:35.964158 kubelet[3148]: I0620 19:14:35.964034 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-dde0712b19" podStartSLOduration=0.964022769 podStartE2EDuration="964.022769ms" podCreationTimestamp="2025-06-20 19:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:35.955846896 +0000 UTC m=+1.137754202" watchObservedRunningTime="2025-06-20 19:14:35.964022769 +0000 UTC m=+1.145930072" Jun 20 19:14:39.455270 kubelet[3148]: I0620 19:14:39.455228 3148 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:14:39.455633 containerd[1756]: time="2025-06-20T19:14:39.455608142Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:14:39.455850 kubelet[3148]: I0620 19:14:39.455827 3148 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:14:39.987288 kubelet[3148]: I0620 19:14:39.986702 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.0-a-dde0712b19" podStartSLOduration=4.986662611 podStartE2EDuration="4.986662611s" podCreationTimestamp="2025-06-20 19:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:35.964125258 +0000 UTC m=+1.146032564" watchObservedRunningTime="2025-06-20 19:14:39.986662611 +0000 UTC m=+5.168569918" Jun 20 19:14:40.000594 systemd[1]: Created slice kubepods-besteffort-pod88d7fce0_2c41_459e_b97f_2d9b475adaae.slice - libcontainer container kubepods-besteffort-pod88d7fce0_2c41_459e_b97f_2d9b475adaae.slice. 
Jun 20 19:14:40.025409 kubelet[3148]: I0620 19:14:40.025380 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88d7fce0-2c41-459e-b97f-2d9b475adaae-kube-proxy\") pod \"kube-proxy-9dmz2\" (UID: \"88d7fce0-2c41-459e-b97f-2d9b475adaae\") " pod="kube-system/kube-proxy-9dmz2" Jun 20 19:14:40.025522 kubelet[3148]: I0620 19:14:40.025416 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88d7fce0-2c41-459e-b97f-2d9b475adaae-xtables-lock\") pod \"kube-proxy-9dmz2\" (UID: \"88d7fce0-2c41-459e-b97f-2d9b475adaae\") " pod="kube-system/kube-proxy-9dmz2" Jun 20 19:14:40.025522 kubelet[3148]: I0620 19:14:40.025432 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcmfn\" (UniqueName: \"kubernetes.io/projected/88d7fce0-2c41-459e-b97f-2d9b475adaae-kube-api-access-mcmfn\") pod \"kube-proxy-9dmz2\" (UID: \"88d7fce0-2c41-459e-b97f-2d9b475adaae\") " pod="kube-system/kube-proxy-9dmz2" Jun 20 19:14:40.025522 kubelet[3148]: I0620 19:14:40.025450 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88d7fce0-2c41-459e-b97f-2d9b475adaae-lib-modules\") pod \"kube-proxy-9dmz2\" (UID: \"88d7fce0-2c41-459e-b97f-2d9b475adaae\") " pod="kube-system/kube-proxy-9dmz2" Jun 20 19:14:40.129047 kubelet[3148]: E0620 19:14:40.129026 3148 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 19:14:40.129047 kubelet[3148]: E0620 19:14:40.129046 3148 projected.go:194] Error preparing data for projected volume kube-api-access-mcmfn for pod kube-system/kube-proxy-9dmz2: configmap "kube-root-ca.crt" not found Jun 20 19:14:40.129168 kubelet[3148]: E0620 19:14:40.129103 3148 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88d7fce0-2c41-459e-b97f-2d9b475adaae-kube-api-access-mcmfn podName:88d7fce0-2c41-459e-b97f-2d9b475adaae nodeName:}" failed. No retries permitted until 2025-06-20 19:14:40.629084965 +0000 UTC m=+5.810992267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mcmfn" (UniqueName: "kubernetes.io/projected/88d7fce0-2c41-459e-b97f-2d9b475adaae-kube-api-access-mcmfn") pod "kube-proxy-9dmz2" (UID: "88d7fce0-2c41-459e-b97f-2d9b475adaae") : configmap "kube-root-ca.crt" not found Jun 20 19:14:40.669701 systemd[1]: Created slice kubepods-besteffort-pod4d56b9b6_4efd_4579_9677_abdcbdb73a2a.slice - libcontainer container kubepods-besteffort-pod4d56b9b6_4efd_4579_9677_abdcbdb73a2a.slice. 
Jun 20 19:14:40.729530 kubelet[3148]: I0620 19:14:40.729502 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4d56b9b6-4efd-4579-9677-abdcbdb73a2a-var-lib-calico\") pod \"tigera-operator-68f7c7984d-lv6sx\" (UID: \"4d56b9b6-4efd-4579-9677-abdcbdb73a2a\") " pod="tigera-operator/tigera-operator-68f7c7984d-lv6sx" Jun 20 19:14:40.729530 kubelet[3148]: I0620 19:14:40.729534 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7zcr\" (UniqueName: \"kubernetes.io/projected/4d56b9b6-4efd-4579-9677-abdcbdb73a2a-kube-api-access-s7zcr\") pod \"tigera-operator-68f7c7984d-lv6sx\" (UID: \"4d56b9b6-4efd-4579-9677-abdcbdb73a2a\") " pod="tigera-operator/tigera-operator-68f7c7984d-lv6sx" Jun 20 19:14:40.908875 containerd[1756]: time="2025-06-20T19:14:40.908840921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9dmz2,Uid:88d7fce0-2c41-459e-b97f-2d9b475adaae,Namespace:kube-system,Attempt:0,}" Jun 20 19:14:40.946291 containerd[1756]: time="2025-06-20T19:14:40.946051556Z" level=info msg="connecting to shim fac797a181150f0704715163f47eaabcfb85d7e6234ffec1ebf6d822db302722" address="unix:///run/containerd/s/a2872300ef0203cf713c7e3ea22863101f047a6e4e483f974d967230c65e574c" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:40.965511 systemd[1]: Started cri-containerd-fac797a181150f0704715163f47eaabcfb85d7e6234ffec1ebf6d822db302722.scope - libcontainer container fac797a181150f0704715163f47eaabcfb85d7e6234ffec1ebf6d822db302722. Jun 20 19:14:40.975721 containerd[1756]: time="2025-06-20T19:14:40.975484498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-lv6sx,Uid:4d56b9b6-4efd-4579-9677-abdcbdb73a2a,Namespace:tigera-operator,Attempt:0,}" Jun 20 19:14:40.984908 containerd[1756]: time="2025-06-20T19:14:40.984878614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9dmz2,Uid:88d7fce0-2c41-459e-b97f-2d9b475adaae,Namespace:kube-system,Attempt:0,} returns sandbox id \"fac797a181150f0704715163f47eaabcfb85d7e6234ffec1ebf6d822db302722\"" Jun 20 19:14:40.991910 containerd[1756]: time="2025-06-20T19:14:40.991888523Z" level=info msg="CreateContainer within sandbox \"fac797a181150f0704715163f47eaabcfb85d7e6234ffec1ebf6d822db302722\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:14:41.021186 containerd[1756]: time="2025-06-20T19:14:41.021076102Z" level=info msg="connecting to shim 1b512c2ad023cc92aa3a4cb4ffc92ef2e469a984aa6a469d8af6ca451d9885a1" address="unix:///run/containerd/s/2da2b8ef98fb62cff1dbe067ce13ab398f7cf7a374e4b5cef5734c85862ca97b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:41.030307 containerd[1756]: time="2025-06-20T19:14:41.030271731Z" level=info msg="Container 8db47c1dc07a958c0ff8000aba9bfe1a7da7a71a8181c1f864bc18d943d12e5c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:41.039567 systemd[1]: Started cri-containerd-1b512c2ad023cc92aa3a4cb4ffc92ef2e469a984aa6a469d8af6ca451d9885a1.scope - libcontainer container 1b512c2ad023cc92aa3a4cb4ffc92ef2e469a984aa6a469d8af6ca451d9885a1. 
Jun 20 19:14:41.047186 containerd[1756]: time="2025-06-20T19:14:41.047154781Z" level=info msg="CreateContainer within sandbox \"fac797a181150f0704715163f47eaabcfb85d7e6234ffec1ebf6d822db302722\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8db47c1dc07a958c0ff8000aba9bfe1a7da7a71a8181c1f864bc18d943d12e5c\"" Jun 20 19:14:41.047939 containerd[1756]: time="2025-06-20T19:14:41.047919413Z" level=info msg="StartContainer for \"8db47c1dc07a958c0ff8000aba9bfe1a7da7a71a8181c1f864bc18d943d12e5c\"" Jun 20 19:14:41.049497 containerd[1756]: time="2025-06-20T19:14:41.049469399Z" level=info msg="connecting to shim 8db47c1dc07a958c0ff8000aba9bfe1a7da7a71a8181c1f864bc18d943d12e5c" address="unix:///run/containerd/s/a2872300ef0203cf713c7e3ea22863101f047a6e4e483f974d967230c65e574c" protocol=ttrpc version=3 Jun 20 19:14:41.067650 systemd[1]: Started cri-containerd-8db47c1dc07a958c0ff8000aba9bfe1a7da7a71a8181c1f864bc18d943d12e5c.scope - libcontainer container 8db47c1dc07a958c0ff8000aba9bfe1a7da7a71a8181c1f864bc18d943d12e5c. Jun 20 19:14:41.082869 containerd[1756]: time="2025-06-20T19:14:41.082844572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-lv6sx,Uid:4d56b9b6-4efd-4579-9677-abdcbdb73a2a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1b512c2ad023cc92aa3a4cb4ffc92ef2e469a984aa6a469d8af6ca451d9885a1\"" Jun 20 19:14:41.084190 containerd[1756]: time="2025-06-20T19:14:41.084034137Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 20 19:14:41.100942 containerd[1756]: time="2025-06-20T19:14:41.100918196Z" level=info msg="StartContainer for \"8db47c1dc07a958c0ff8000aba9bfe1a7da7a71a8181c1f864bc18d943d12e5c\" returns successfully" Jun 20 19:14:41.968302 kubelet[3148]: I0620 19:14:41.968243 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9dmz2" podStartSLOduration=2.968227611 podStartE2EDuration="2.968227611s" podCreationTimestamp="2025-06-20 19:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:14:41.948165293 +0000 UTC m=+7.130072596" watchObservedRunningTime="2025-06-20 19:14:41.968227611 +0000 UTC m=+7.150134917" Jun 20 19:14:42.587217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1708239045.mount: Deactivated successfully. 
Jun 20 19:14:43.085549 containerd[1756]: time="2025-06-20T19:14:43.085514918Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:43.087897 containerd[1756]: time="2025-06-20T19:14:43.087875428Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 20 19:14:43.090621 containerd[1756]: time="2025-06-20T19:14:43.090570691Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:43.094854 containerd[1756]: time="2025-06-20T19:14:43.094129502Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:43.094854 containerd[1756]: time="2025-06-20T19:14:43.094701708Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 2.010643463s" Jun 20 19:14:43.094854 containerd[1756]: time="2025-06-20T19:14:43.094725628Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 20 19:14:43.105714 containerd[1756]: time="2025-06-20T19:14:43.105692387Z" level=info msg="CreateContainer within sandbox \"1b512c2ad023cc92aa3a4cb4ffc92ef2e469a984aa6a469d8af6ca451d9885a1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 20 19:14:43.124370 containerd[1756]: time="2025-06-20T19:14:43.123904985Z" level=info msg="Container da24a93e668cb7dd625eb4c94163e7098f6ffdcfe21b94720277c1d0250c4803: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:43.136786 containerd[1756]: time="2025-06-20T19:14:43.136762484Z" level=info msg="CreateContainer within sandbox \"1b512c2ad023cc92aa3a4cb4ffc92ef2e469a984aa6a469d8af6ca451d9885a1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"da24a93e668cb7dd625eb4c94163e7098f6ffdcfe21b94720277c1d0250c4803\"" Jun 20 19:14:43.137163 containerd[1756]: time="2025-06-20T19:14:43.137090138Z" level=info msg="StartContainer for \"da24a93e668cb7dd625eb4c94163e7098f6ffdcfe21b94720277c1d0250c4803\"" Jun 20 19:14:43.137878 containerd[1756]: time="2025-06-20T19:14:43.137849768Z" level=info msg="connecting to shim da24a93e668cb7dd625eb4c94163e7098f6ffdcfe21b94720277c1d0250c4803" address="unix:///run/containerd/s/2da2b8ef98fb62cff1dbe067ce13ab398f7cf7a374e4b5cef5734c85862ca97b" protocol=ttrpc version=3 Jun 20 19:14:43.160504 systemd[1]: Started cri-containerd-da24a93e668cb7dd625eb4c94163e7098f6ffdcfe21b94720277c1d0250c4803.scope - libcontainer container da24a93e668cb7dd625eb4c94163e7098f6ffdcfe21b94720277c1d0250c4803. 
Jun 20 19:14:43.183020 containerd[1756]: time="2025-06-20T19:14:43.182996444Z" level=info msg="StartContainer for \"da24a93e668cb7dd625eb4c94163e7098f6ffdcfe21b94720277c1d0250c4803\" returns successfully" Jun 20 19:14:45.785231 kubelet[3148]: I0620 19:14:45.785177 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-lv6sx" podStartSLOduration=3.77353293 podStartE2EDuration="5.785161982s" podCreationTimestamp="2025-06-20 19:14:40 +0000 UTC" firstStartedPulling="2025-06-20 19:14:41.083701162 +0000 UTC m=+6.265608457" lastFinishedPulling="2025-06-20 19:14:43.095330201 +0000 UTC m=+8.277237509" observedRunningTime="2025-06-20 19:14:43.962138927 +0000 UTC m=+9.144046230" watchObservedRunningTime="2025-06-20 19:14:45.785161982 +0000 UTC m=+10.967069289" Jun 20 19:14:48.439523 sudo[2173]: pam_unix(sudo:session): session closed for user root Jun 20 19:14:48.543684 sshd[2172]: Connection closed by 10.200.16.10 port 35692 Jun 20 19:14:48.544514 sshd-session[2170]: pam_unix(sshd:session): session closed for user core Jun 20 19:14:48.549688 systemd-logind[1712]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:14:48.550857 systemd[1]: sshd@6-10.200.4.43:22-10.200.16.10:35692.service: Deactivated successfully. Jun 20 19:14:48.554256 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:14:48.555507 systemd[1]: session-9.scope: Consumed 3.465s CPU time, 229.4M memory peak. Jun 20 19:14:48.559808 systemd-logind[1712]: Removed session 9. Jun 20 19:14:51.713986 systemd[1]: Created slice kubepods-besteffort-pod7704d81b_f4ca_49c7_8600_3b1fa36b43f1.slice - libcontainer container kubepods-besteffort-pod7704d81b_f4ca_49c7_8600_3b1fa36b43f1.slice. Jun 20 19:14:51.799074 kubelet[3148]: I0620 19:14:51.799046 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7704d81b-f4ca-49c7-8600-3b1fa36b43f1-tigera-ca-bundle\") pod \"calico-typha-568d4d4c9-rm7sx\" (UID: \"7704d81b-f4ca-49c7-8600-3b1fa36b43f1\") " pod="calico-system/calico-typha-568d4d4c9-rm7sx" Jun 20 19:14:51.799074 kubelet[3148]: I0620 19:14:51.799077 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjlk\" (UniqueName: \"kubernetes.io/projected/7704d81b-f4ca-49c7-8600-3b1fa36b43f1-kube-api-access-bwjlk\") pod \"calico-typha-568d4d4c9-rm7sx\" (UID: \"7704d81b-f4ca-49c7-8600-3b1fa36b43f1\") " pod="calico-system/calico-typha-568d4d4c9-rm7sx" Jun 20 19:14:51.799399 kubelet[3148]: I0620 19:14:51.799097 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7704d81b-f4ca-49c7-8600-3b1fa36b43f1-typha-certs\") pod \"calico-typha-568d4d4c9-rm7sx\" (UID: \"7704d81b-f4ca-49c7-8600-3b1fa36b43f1\") " pod="calico-system/calico-typha-568d4d4c9-rm7sx" Jun 20 19:14:52.020878 containerd[1756]: time="2025-06-20T19:14:52.020795749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-568d4d4c9-rm7sx,Uid:7704d81b-f4ca-49c7-8600-3b1fa36b43f1,Namespace:calico-system,Attempt:0,}" Jun 20 19:14:52.066701 containerd[1756]: time="2025-06-20T19:14:52.066651454Z" level=info msg="connecting to shim b3a3b7572de8a87ee531d196d520cc752ed3822b3ead03e62bbf51e1e69ba570" address="unix:///run/containerd/s/a195c1dfb983e6d0e62c127f7ce6e47340059097c7e12b9a3764849c61c59118" namespace=k8s.io protocol=ttrpc version=3 Jun 
20 19:14:52.094717 systemd[1]: Started cri-containerd-b3a3b7572de8a87ee531d196d520cc752ed3822b3ead03e62bbf51e1e69ba570.scope - libcontainer container b3a3b7572de8a87ee531d196d520cc752ed3822b3ead03e62bbf51e1e69ba570. Jun 20 19:14:52.101494 systemd[1]: Created slice kubepods-besteffort-poda4d57dbd_ff0e_47c2_9793_f162ae705778.slice - libcontainer container kubepods-besteffort-poda4d57dbd_ff0e_47c2_9793_f162ae705778.slice. Jun 20 19:14:52.150408 containerd[1756]: time="2025-06-20T19:14:52.150348199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-568d4d4c9-rm7sx,Uid:7704d81b-f4ca-49c7-8600-3b1fa36b43f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3a3b7572de8a87ee531d196d520cc752ed3822b3ead03e62bbf51e1e69ba570\"" Jun 20 19:14:52.151827 containerd[1756]: time="2025-06-20T19:14:52.151632557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 19:14:52.201302 kubelet[3148]: I0620 19:14:52.201278 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4d57dbd-ff0e-47c2-9793-f162ae705778-tigera-ca-bundle\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.201389 kubelet[3148]: I0620 19:14:52.201312 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4d57dbd-ff0e-47c2-9793-f162ae705778-lib-modules\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.201389 kubelet[3148]: I0620 19:14:52.201329 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a4d57dbd-ff0e-47c2-9793-f162ae705778-var-run-calico\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.201389 kubelet[3148]: I0620 19:14:52.201344 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a4d57dbd-ff0e-47c2-9793-f162ae705778-cni-net-dir\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.201389 kubelet[3148]: I0620 19:14:52.201370 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a4d57dbd-ff0e-47c2-9793-f162ae705778-cni-bin-dir\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.201389 kubelet[3148]: I0620 19:14:52.201386 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a4d57dbd-ff0e-47c2-9793-f162ae705778-node-certs\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.201597 kubelet[3148]: I0620 19:14:52.201401 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a4d57dbd-ff0e-47c2-9793-f162ae705778-var-lib-calico\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " 
pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.201597 kubelet[3148]: I0620 19:14:52.201416 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a4d57dbd-ff0e-47c2-9793-f162ae705778-cni-log-dir\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.201597 kubelet[3148]: I0620 19:14:52.201432 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a4d57dbd-ff0e-47c2-9793-f162ae705778-flexvol-driver-host\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.201597 kubelet[3148]: I0620 19:14:52.201449 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a4d57dbd-ff0e-47c2-9793-f162ae705778-policysync\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.201597 kubelet[3148]: I0620 19:14:52.201464 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7wnc\" (UniqueName: \"kubernetes.io/projected/a4d57dbd-ff0e-47c2-9793-f162ae705778-kube-api-access-g7wnc\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.201691 kubelet[3148]: I0620 19:14:52.201481 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4d57dbd-ff0e-47c2-9793-f162ae705778-xtables-lock\") pod \"calico-node-5r8q4\" (UID: \"a4d57dbd-ff0e-47c2-9793-f162ae705778\") " pod="calico-system/calico-node-5r8q4" Jun 20 19:14:52.308116 kubelet[3148]: E0620 19:14:52.308054 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.308116 kubelet[3148]: W0620 19:14:52.308076 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.308116 kubelet[3148]: E0620 19:14:52.308099 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.315301 kubelet[3148]: E0620 19:14:52.315266 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.315301 kubelet[3148]: W0620 19:14:52.315280 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.315301 kubelet[3148]: E0620 19:14:52.315295 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.334595 kubelet[3148]: E0620 19:14:52.334558 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp8bp" podUID="c247f1b6-5251-4f90-8193-b5662a7bfa74" Jun 20 19:14:52.392813 kubelet[3148]: E0620 19:14:52.392794 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.392813 kubelet[3148]: W0620 19:14:52.392811 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.392937 kubelet[3148]: E0620 19:14:52.392825 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.392937 kubelet[3148]: E0620 19:14:52.392934 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.393060 kubelet[3148]: W0620 19:14:52.392940 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.393060 kubelet[3148]: E0620 19:14:52.392947 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.393060 kubelet[3148]: E0620 19:14:52.393042 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.393060 kubelet[3148]: W0620 19:14:52.393047 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.393060 kubelet[3148]: E0620 19:14:52.393054 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.393199 kubelet[3148]: E0620 19:14:52.393179 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.393199 kubelet[3148]: W0620 19:14:52.393184 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.393199 kubelet[3148]: E0620 19:14:52.393190 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.393286 kubelet[3148]: E0620 19:14:52.393285 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.393312 kubelet[3148]: W0620 19:14:52.393289 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.393312 kubelet[3148]: E0620 19:14:52.393295 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.393402 kubelet[3148]: E0620 19:14:52.393395 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.393402 kubelet[3148]: W0620 19:14:52.393401 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.393469 kubelet[3148]: E0620 19:14:52.393420 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.393517 kubelet[3148]: E0620 19:14:52.393507 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.393517 kubelet[3148]: W0620 19:14:52.393515 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.393565 kubelet[3148]: E0620 19:14:52.393521 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.393609 kubelet[3148]: E0620 19:14:52.393604 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.393640 kubelet[3148]: W0620 19:14:52.393609 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.393640 kubelet[3148]: E0620 19:14:52.393615 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.393701 kubelet[3148]: E0620 19:14:52.393699 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.393738 kubelet[3148]: W0620 19:14:52.393703 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.393738 kubelet[3148]: E0620 19:14:52.393708 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.393785 kubelet[3148]: E0620 19:14:52.393781 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.393823 kubelet[3148]: W0620 19:14:52.393785 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.393823 kubelet[3148]: E0620 19:14:52.393790 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.393888 kubelet[3148]: E0620 19:14:52.393877 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.393888 kubelet[3148]: W0620 19:14:52.393886 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.393952 kubelet[3148]: E0620 19:14:52.393892 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.393977 kubelet[3148]: E0620 19:14:52.393972 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.393977 kubelet[3148]: W0620 19:14:52.393976 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.394028 kubelet[3148]: E0620 19:14:52.393982 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.394065 kubelet[3148]: E0620 19:14:52.394061 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.394091 kubelet[3148]: W0620 19:14:52.394066 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.394091 kubelet[3148]: E0620 19:14:52.394071 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.394164 kubelet[3148]: E0620 19:14:52.394147 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.394164 kubelet[3148]: W0620 19:14:52.394160 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.394217 kubelet[3148]: E0620 19:14:52.394166 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.394248 kubelet[3148]: E0620 19:14:52.394243 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.394284 kubelet[3148]: W0620 19:14:52.394248 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.394284 kubelet[3148]: E0620 19:14:52.394254 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.394338 kubelet[3148]: E0620 19:14:52.394329 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.394338 kubelet[3148]: W0620 19:14:52.394333 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.394403 kubelet[3148]: E0620 19:14:52.394338 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.394442 kubelet[3148]: E0620 19:14:52.394431 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.394442 kubelet[3148]: W0620 19:14:52.394437 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.394491 kubelet[3148]: E0620 19:14:52.394443 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.394531 kubelet[3148]: E0620 19:14:52.394521 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.394531 kubelet[3148]: W0620 19:14:52.394527 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.394588 kubelet[3148]: E0620 19:14:52.394532 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.394618 kubelet[3148]: E0620 19:14:52.394606 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.394618 kubelet[3148]: W0620 19:14:52.394610 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.394709 kubelet[3148]: E0620 19:14:52.394617 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.394709 kubelet[3148]: E0620 19:14:52.394698 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.394709 kubelet[3148]: W0620 19:14:52.394703 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.394773 kubelet[3148]: E0620 19:14:52.394708 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.402981 kubelet[3148]: E0620 19:14:52.402965 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.402981 kubelet[3148]: W0620 19:14:52.402978 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.403087 kubelet[3148]: E0620 19:14:52.402990 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.403087 kubelet[3148]: I0620 19:14:52.403010 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c247f1b6-5251-4f90-8193-b5662a7bfa74-socket-dir\") pod \"csi-node-driver-gp8bp\" (UID: \"c247f1b6-5251-4f90-8193-b5662a7bfa74\") " pod="calico-system/csi-node-driver-gp8bp" Jun 20 19:14:52.403211 kubelet[3148]: E0620 19:14:52.403117 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.403211 kubelet[3148]: W0620 19:14:52.403123 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.403211 kubelet[3148]: E0620 19:14:52.403131 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.403211 kubelet[3148]: I0620 19:14:52.403149 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9dtt\" (UniqueName: \"kubernetes.io/projected/c247f1b6-5251-4f90-8193-b5662a7bfa74-kube-api-access-v9dtt\") pod \"csi-node-driver-gp8bp\" (UID: \"c247f1b6-5251-4f90-8193-b5662a7bfa74\") " pod="calico-system/csi-node-driver-gp8bp" Jun 20 19:14:52.403302 kubelet[3148]: E0620 19:14:52.403246 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.403302 kubelet[3148]: W0620 19:14:52.403256 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.403302 kubelet[3148]: E0620 19:14:52.403264 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.403394 kubelet[3148]: E0620 19:14:52.403375 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.403394 kubelet[3148]: W0620 19:14:52.403379 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.403394 kubelet[3148]: E0620 19:14:52.403385 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.403523 kubelet[3148]: E0620 19:14:52.403497 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.403523 kubelet[3148]: W0620 19:14:52.403520 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.403593 kubelet[3148]: E0620 19:14:52.403527 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.403593 kubelet[3148]: I0620 19:14:52.403545 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c247f1b6-5251-4f90-8193-b5662a7bfa74-registration-dir\") pod \"csi-node-driver-gp8bp\" (UID: \"c247f1b6-5251-4f90-8193-b5662a7bfa74\") " pod="calico-system/csi-node-driver-gp8bp" Jun 20 19:14:52.403712 kubelet[3148]: E0620 19:14:52.403633 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.403712 kubelet[3148]: W0620 19:14:52.403638 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.403712 kubelet[3148]: E0620 19:14:52.403643 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.403712 kubelet[3148]: I0620 19:14:52.403655 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c247f1b6-5251-4f90-8193-b5662a7bfa74-varrun\") pod \"csi-node-driver-gp8bp\" (UID: \"c247f1b6-5251-4f90-8193-b5662a7bfa74\") " pod="calico-system/csi-node-driver-gp8bp" Jun 20 19:14:52.403898 kubelet[3148]: E0620 19:14:52.403812 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.403898 kubelet[3148]: W0620 19:14:52.403822 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.403898 kubelet[3148]: E0620 19:14:52.403829 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.403898 kubelet[3148]: I0620 19:14:52.403841 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c247f1b6-5251-4f90-8193-b5662a7bfa74-kubelet-dir\") pod \"csi-node-driver-gp8bp\" (UID: \"c247f1b6-5251-4f90-8193-b5662a7bfa74\") " pod="calico-system/csi-node-driver-gp8bp" Jun 20 19:14:52.404024 kubelet[3148]: E0620 19:14:52.404012 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.404047 kubelet[3148]: W0620 19:14:52.404023 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.404047 kubelet[3148]: E0620 19:14:52.404033 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.404161 kubelet[3148]: E0620 19:14:52.404116 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.404161 kubelet[3148]: W0620 19:14:52.404122 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.404161 kubelet[3148]: E0620 19:14:52.404127 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.404226 kubelet[3148]: E0620 19:14:52.404221 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.404250 kubelet[3148]: W0620 19:14:52.404227 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.404250 kubelet[3148]: E0620 19:14:52.404233 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.404580 kubelet[3148]: E0620 19:14:52.404339 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.404580 kubelet[3148]: W0620 19:14:52.404344 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.404580 kubelet[3148]: E0620 19:14:52.404365 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.404580 kubelet[3148]: E0620 19:14:52.404462 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.404580 kubelet[3148]: W0620 19:14:52.404466 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.404580 kubelet[3148]: E0620 19:14:52.404472 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.404580 kubelet[3148]: E0620 19:14:52.404553 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.404580 kubelet[3148]: W0620 19:14:52.404558 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.404580 kubelet[3148]: E0620 19:14:52.404564 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.404788 containerd[1756]: time="2025-06-20T19:14:52.404371152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5r8q4,Uid:a4d57dbd-ff0e-47c2-9793-f162ae705778,Namespace:calico-system,Attempt:0,}" Jun 20 19:14:52.404810 kubelet[3148]: E0620 19:14:52.404645 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.404810 kubelet[3148]: W0620 19:14:52.404649 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.404810 kubelet[3148]: E0620 19:14:52.404655 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.404810 kubelet[3148]: E0620 19:14:52.404719 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.404810 kubelet[3148]: W0620 19:14:52.404722 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.404810 kubelet[3148]: E0620 19:14:52.404727 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.446617 containerd[1756]: time="2025-06-20T19:14:52.446475349Z" level=info msg="connecting to shim 7c0b3f03e92848f3e2687e0b3693c86d151fbf54faee23becee9f55074215ca5" address="unix:///run/containerd/s/247af3280cea8a76a0799009936a8ee0b6a7e97f4b1417f99650c15258788e62" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:14:52.468497 systemd[1]: Started cri-containerd-7c0b3f03e92848f3e2687e0b3693c86d151fbf54faee23becee9f55074215ca5.scope - libcontainer container 7c0b3f03e92848f3e2687e0b3693c86d151fbf54faee23becee9f55074215ca5. 
Jun 20 19:14:52.486716 containerd[1756]: time="2025-06-20T19:14:52.486689982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5r8q4,Uid:a4d57dbd-ff0e-47c2-9793-f162ae705778,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c0b3f03e92848f3e2687e0b3693c86d151fbf54faee23becee9f55074215ca5\"" Jun 20 19:14:52.504771 kubelet[3148]: E0620 19:14:52.504753 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.504832 kubelet[3148]: W0620 19:14:52.504773 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.504832 kubelet[3148]: E0620 19:14:52.504787 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.504972 kubelet[3148]: E0620 19:14:52.504962 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.504972 kubelet[3148]: W0620 19:14:52.504971 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.505034 kubelet[3148]: E0620 19:14:52.504979 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.505166 kubelet[3148]: E0620 19:14:52.505154 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.505207 kubelet[3148]: W0620 19:14:52.505166 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.505207 kubelet[3148]: E0620 19:14:52.505176 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.505305 kubelet[3148]: E0620 19:14:52.505297 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.505305 kubelet[3148]: W0620 19:14:52.505302 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.505417 kubelet[3148]: E0620 19:14:52.505310 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.505452 kubelet[3148]: E0620 19:14:52.505445 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.505476 kubelet[3148]: W0620 19:14:52.505453 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.505476 kubelet[3148]: E0620 19:14:52.505461 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.505563 kubelet[3148]: E0620 19:14:52.505551 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.505563 kubelet[3148]: W0620 19:14:52.505560 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.505620 kubelet[3148]: E0620 19:14:52.505567 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.505679 kubelet[3148]: E0620 19:14:52.505673 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.505679 kubelet[3148]: W0620 19:14:52.505679 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.505732 kubelet[3148]: E0620 19:14:52.505685 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.505853 kubelet[3148]: E0620 19:14:52.505843 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.505853 kubelet[3148]: W0620 19:14:52.505851 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.505920 kubelet[3148]: E0620 19:14:52.505859 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.505980 kubelet[3148]: E0620 19:14:52.505970 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.505980 kubelet[3148]: W0620 19:14:52.505977 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.506042 kubelet[3148]: E0620 19:14:52.505983 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.506093 kubelet[3148]: E0620 19:14:52.506083 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.506093 kubelet[3148]: W0620 19:14:52.506089 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.506148 kubelet[3148]: E0620 19:14:52.506096 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.506198 kubelet[3148]: E0620 19:14:52.506189 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.506198 kubelet[3148]: W0620 19:14:52.506195 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.506257 kubelet[3148]: E0620 19:14:52.506201 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.506322 kubelet[3148]: E0620 19:14:52.506313 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.506322 kubelet[3148]: W0620 19:14:52.506321 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.506387 kubelet[3148]: E0620 19:14:52.506327 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.506445 kubelet[3148]: E0620 19:14:52.506436 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.506445 kubelet[3148]: W0620 19:14:52.506442 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.506512 kubelet[3148]: E0620 19:14:52.506448 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.506555 kubelet[3148]: E0620 19:14:52.506546 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.506555 kubelet[3148]: W0620 19:14:52.506552 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.506617 kubelet[3148]: E0620 19:14:52.506558 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.506659 kubelet[3148]: E0620 19:14:52.506651 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.506659 kubelet[3148]: W0620 19:14:52.506657 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.506712 kubelet[3148]: E0620 19:14:52.506663 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.506775 kubelet[3148]: E0620 19:14:52.506766 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.506775 kubelet[3148]: W0620 19:14:52.506772 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.506831 kubelet[3148]: E0620 19:14:52.506777 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.506882 kubelet[3148]: E0620 19:14:52.506874 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.506882 kubelet[3148]: W0620 19:14:52.506881 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.506943 kubelet[3148]: E0620 19:14:52.506887 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.507033 kubelet[3148]: E0620 19:14:52.507022 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.507033 kubelet[3148]: W0620 19:14:52.507030 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.507136 kubelet[3148]: E0620 19:14:52.507038 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.507174 kubelet[3148]: E0620 19:14:52.507141 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.507174 kubelet[3148]: W0620 19:14:52.507146 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.507174 kubelet[3148]: E0620 19:14:52.507153 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.507287 kubelet[3148]: E0620 19:14:52.507232 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.507287 kubelet[3148]: W0620 19:14:52.507238 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.507287 kubelet[3148]: E0620 19:14:52.507243 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.507407 kubelet[3148]: E0620 19:14:52.507372 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.507407 kubelet[3148]: W0620 19:14:52.507377 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.507407 kubelet[3148]: E0620 19:14:52.507383 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.507539 kubelet[3148]: E0620 19:14:52.507528 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.507539 kubelet[3148]: W0620 19:14:52.507536 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.507597 kubelet[3148]: E0620 19:14:52.507544 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.507706 kubelet[3148]: E0620 19:14:52.507697 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.507706 kubelet[3148]: W0620 19:14:52.507704 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.507765 kubelet[3148]: E0620 19:14:52.507710 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.507864 kubelet[3148]: E0620 19:14:52.507855 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.507864 kubelet[3148]: W0620 19:14:52.507864 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.507916 kubelet[3148]: E0620 19:14:52.507870 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:52.508094 kubelet[3148]: E0620 19:14:52.508084 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.508094 kubelet[3148]: W0620 19:14:52.508094 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.508184 kubelet[3148]: E0620 19:14:52.508105 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:52.512425 kubelet[3148]: E0620 19:14:52.512410 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:52.512425 kubelet[3148]: W0620 19:14:52.512422 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:52.512501 kubelet[3148]: E0620 19:14:52.512433 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:53.493390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760504877.mount: Deactivated successfully. Jun 20 19:14:53.877098 containerd[1756]: time="2025-06-20T19:14:53.876820845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:53.879398 containerd[1756]: time="2025-06-20T19:14:53.879369506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888" Jun 20 19:14:53.881996 containerd[1756]: time="2025-06-20T19:14:53.881953043Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:53.885112 containerd[1756]: time="2025-06-20T19:14:53.885063814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:53.885599 containerd[1756]: time="2025-06-20T19:14:53.885424694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 1.733410922s" Jun 20 19:14:53.885599 containerd[1756]: time="2025-06-20T19:14:53.885449862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 20 19:14:53.887045 containerd[1756]: time="2025-06-20T19:14:53.887027036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 19:14:53.899446 kubelet[3148]: E0620 19:14:53.898523 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp8bp" podUID="c247f1b6-5251-4f90-8193-b5662a7bfa74" Jun 20 19:14:53.905586 containerd[1756]: time="2025-06-20T19:14:53.904813500Z" level=info msg="CreateContainer within sandbox \"b3a3b7572de8a87ee531d196d520cc752ed3822b3ead03e62bbf51e1e69ba570\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 19:14:53.933774 containerd[1756]: time="2025-06-20T19:14:53.933749889Z" level=info msg="Container 74c2c50f2fe01c37b870e3b036ad56325000e72890107317f3ff5c334f544e65: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:53.939578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3967459332.mount: Deactivated successfully. Jun 20 19:14:53.954002 containerd[1756]: time="2025-06-20T19:14:53.953977344Z" level=info msg="CreateContainer within sandbox \"b3a3b7572de8a87ee531d196d520cc752ed3822b3ead03e62bbf51e1e69ba570\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"74c2c50f2fe01c37b870e3b036ad56325000e72890107317f3ff5c334f544e65\"" Jun 20 19:14:53.954508 containerd[1756]: time="2025-06-20T19:14:53.954472821Z" level=info msg="StartContainer for \"74c2c50f2fe01c37b870e3b036ad56325000e72890107317f3ff5c334f544e65\"" Jun 20 19:14:53.957057 containerd[1756]: time="2025-06-20T19:14:53.957018777Z" level=info msg="connecting to shim 74c2c50f2fe01c37b870e3b036ad56325000e72890107317f3ff5c334f544e65" address="unix:///run/containerd/s/a195c1dfb983e6d0e62c127f7ce6e47340059097c7e12b9a3764849c61c59118" protocol=ttrpc version=3 Jun 20 19:14:53.982503 systemd[1]: Started cri-containerd-74c2c50f2fe01c37b870e3b036ad56325000e72890107317f3ff5c334f544e65.scope - libcontainer container 74c2c50f2fe01c37b870e3b036ad56325000e72890107317f3ff5c334f544e65. Jun 20 19:14:54.073198 containerd[1756]: time="2025-06-20T19:14:54.073138964Z" level=info msg="StartContainer for \"74c2c50f2fe01c37b870e3b036ad56325000e72890107317f3ff5c334f544e65\" returns successfully" Jun 20 19:14:55.007814 kubelet[3148]: E0620 19:14:55.007790 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.007814 kubelet[3148]: W0620 19:14:55.007806 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.008164 kubelet[3148]: E0620 19:14:55.007821 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.008164 kubelet[3148]: E0620 19:14:55.007921 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.008164 kubelet[3148]: W0620 19:14:55.007925 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.008164 kubelet[3148]: E0620 19:14:55.007932 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:55.008164 kubelet[3148]: E0620 19:14:55.008017 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.008164 kubelet[3148]: W0620 19:14:55.008021 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.008164 kubelet[3148]: E0620 19:14:55.008027 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.008164 kubelet[3148]: E0620 19:14:55.008148 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.008164 kubelet[3148]: W0620 19:14:55.008153 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.008164 kubelet[3148]: E0620 19:14:55.008160 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.008454 kubelet[3148]: E0620 19:14:55.008250 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.008454 kubelet[3148]: W0620 19:14:55.008255 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.008454 kubelet[3148]: E0620 19:14:55.008261 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.008454 kubelet[3148]: E0620 19:14:55.008333 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.008454 kubelet[3148]: W0620 19:14:55.008338 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.008454 kubelet[3148]: E0620 19:14:55.008343 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.008454 kubelet[3148]: E0620 19:14:55.008434 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.008454 kubelet[3148]: W0620 19:14:55.008439 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.008454 kubelet[3148]: E0620 19:14:55.008443 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:55.008686 kubelet[3148]: E0620 19:14:55.008521 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.008686 kubelet[3148]: W0620 19:14:55.008526 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.008686 kubelet[3148]: E0620 19:14:55.008531 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.008686 kubelet[3148]: E0620 19:14:55.008610 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.008686 kubelet[3148]: W0620 19:14:55.008614 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.008686 kubelet[3148]: E0620 19:14:55.008619 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.008853 kubelet[3148]: E0620 19:14:55.008689 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.008853 kubelet[3148]: W0620 19:14:55.008693 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.008853 kubelet[3148]: E0620 19:14:55.008698 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.008853 kubelet[3148]: E0620 19:14:55.008769 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.008853 kubelet[3148]: W0620 19:14:55.008773 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.008853 kubelet[3148]: E0620 19:14:55.008778 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.008853 kubelet[3148]: E0620 19:14:55.008853 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.009023 kubelet[3148]: W0620 19:14:55.008857 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.009023 kubelet[3148]: E0620 19:14:55.008862 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:55.009023 kubelet[3148]: E0620 19:14:55.008952 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.009023 kubelet[3148]: W0620 19:14:55.008956 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.009023 kubelet[3148]: E0620 19:14:55.008966 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.009158 kubelet[3148]: E0620 19:14:55.009046 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.009158 kubelet[3148]: W0620 19:14:55.009050 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.009158 kubelet[3148]: E0620 19:14:55.009055 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.009158 kubelet[3148]: E0620 19:14:55.009146 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.009158 kubelet[3148]: W0620 19:14:55.009150 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.009158 kubelet[3148]: E0620 19:14:55.009155 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.022504 kubelet[3148]: E0620 19:14:55.022482 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.022504 kubelet[3148]: W0620 19:14:55.022497 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.022659 kubelet[3148]: E0620 19:14:55.022511 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.022659 kubelet[3148]: E0620 19:14:55.022625 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.022659 kubelet[3148]: W0620 19:14:55.022630 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.022659 kubelet[3148]: E0620 19:14:55.022636 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:55.022787 kubelet[3148]: E0620 19:14:55.022748 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.022787 kubelet[3148]: W0620 19:14:55.022769 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.022787 kubelet[3148]: E0620 19:14:55.022778 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.022924 kubelet[3148]: E0620 19:14:55.022905 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.022924 kubelet[3148]: W0620 19:14:55.022921 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.022977 kubelet[3148]: E0620 19:14:55.022928 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.023052 kubelet[3148]: E0620 19:14:55.023037 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.023052 kubelet[3148]: W0620 19:14:55.023047 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.023114 kubelet[3148]: E0620 19:14:55.023055 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.023154 kubelet[3148]: E0620 19:14:55.023148 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.023154 kubelet[3148]: W0620 19:14:55.023152 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.023209 kubelet[3148]: E0620 19:14:55.023158 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.023273 kubelet[3148]: E0620 19:14:55.023261 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.023308 kubelet[3148]: W0620 19:14:55.023274 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.023308 kubelet[3148]: E0620 19:14:55.023282 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:55.023472 kubelet[3148]: E0620 19:14:55.023455 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.023472 kubelet[3148]: W0620 19:14:55.023469 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.023539 kubelet[3148]: E0620 19:14:55.023478 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.023564 kubelet[3148]: E0620 19:14:55.023562 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.023591 kubelet[3148]: W0620 19:14:55.023567 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.023591 kubelet[3148]: E0620 19:14:55.023573 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.023674 kubelet[3148]: E0620 19:14:55.023654 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.023674 kubelet[3148]: W0620 19:14:55.023672 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.023724 kubelet[3148]: E0620 19:14:55.023677 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.023822 kubelet[3148]: E0620 19:14:55.023796 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.023853 kubelet[3148]: W0620 19:14:55.023819 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.023853 kubelet[3148]: E0620 19:14:55.023830 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.024014 kubelet[3148]: E0620 19:14:55.023975 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.024014 kubelet[3148]: W0620 19:14:55.023982 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.024014 kubelet[3148]: E0620 19:14:55.023992 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:55.024134 kubelet[3148]: E0620 19:14:55.024078 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.024134 kubelet[3148]: W0620 19:14:55.024083 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.024134 kubelet[3148]: E0620 19:14:55.024088 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.024215 kubelet[3148]: E0620 19:14:55.024177 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.024215 kubelet[3148]: W0620 19:14:55.024181 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.024215 kubelet[3148]: E0620 19:14:55.024187 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.024306 kubelet[3148]: E0620 19:14:55.024283 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.024306 kubelet[3148]: W0620 19:14:55.024288 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.024306 kubelet[3148]: E0620 19:14:55.024294 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.024442 kubelet[3148]: E0620 19:14:55.024436 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.024442 kubelet[3148]: W0620 19:14:55.024441 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.024489 kubelet[3148]: E0620 19:14:55.024447 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.024575 kubelet[3148]: E0620 19:14:55.024548 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.024575 kubelet[3148]: W0620 19:14:55.024554 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.024575 kubelet[3148]: E0620 19:14:55.024561 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:14:55.024845 kubelet[3148]: E0620 19:14:55.024815 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:14:55.024845 kubelet[3148]: W0620 19:14:55.024844 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:14:55.024892 kubelet[3148]: E0620 19:14:55.024853 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:14:55.312049 containerd[1756]: time="2025-06-20T19:14:55.311971789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:55.314402 containerd[1756]: time="2025-06-20T19:14:55.314372013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627" Jun 20 19:14:55.317006 containerd[1756]: time="2025-06-20T19:14:55.316965277Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:55.324534 containerd[1756]: time="2025-06-20T19:14:55.324485451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:14:55.324980 containerd[1756]: time="2025-06-20T19:14:55.324842245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 1.437618564s" Jun 20 19:14:55.324980 containerd[1756]: time="2025-06-20T19:14:55.324867970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 20 19:14:55.332042 containerd[1756]: time="2025-06-20T19:14:55.332007628Z" level=info msg="CreateContainer within sandbox \"7c0b3f03e92848f3e2687e0b3693c86d151fbf54faee23becee9f55074215ca5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 19:14:55.348373 containerd[1756]: time="2025-06-20T19:14:55.347978004Z" level=info msg="Container bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:14:55.363107 containerd[1756]: time="2025-06-20T19:14:55.363083975Z" level=info msg="CreateContainer within sandbox \"7c0b3f03e92848f3e2687e0b3693c86d151fbf54faee23becee9f55074215ca5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a\"" Jun 20 19:14:55.363647 containerd[1756]: time="2025-06-20T19:14:55.363437471Z" level=info msg="StartContainer for \"bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a\"" Jun 20 19:14:55.364894 containerd[1756]: time="2025-06-20T19:14:55.364871403Z" level=info msg="connecting to 
shim bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a" address="unix:///run/containerd/s/247af3280cea8a76a0799009936a8ee0b6a7e97f4b1417f99650c15258788e62" protocol=ttrpc version=3 Jun 20 19:14:55.382487 systemd[1]: Started cri-containerd-bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a.scope - libcontainer container bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a. Jun 20 19:14:55.418064 containerd[1756]: time="2025-06-20T19:14:55.418039480Z" level=info msg="StartContainer for \"bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a\" returns successfully" Jun 20 19:14:55.419017 systemd[1]: cri-containerd-bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a.scope: Deactivated successfully. Jun 20 19:14:55.420923 containerd[1756]: time="2025-06-20T19:14:55.420870587Z" level=info msg="received exit event container_id:\"bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a\" id:\"bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a\" pid:3815 exited_at:{seconds:1750446895 nanos:420517665}" Jun 20 19:14:55.420923 containerd[1756]: time="2025-06-20T19:14:55.420905767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a\" id:\"bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a\" pid:3815 exited_at:{seconds:1750446895 nanos:420517665}" Jun 20 19:14:55.435545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb3a0712736bdd95e97261b3747d5b4692d134681129e93661659707712a579a-rootfs.mount: Deactivated successfully. Jun 20 19:14:55.899479 kubelet[3148]: E0620 19:14:55.899452 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp8bp" podUID="c247f1b6-5251-4f90-8193-b5662a7bfa74" Jun 20 19:14:55.971414 kubelet[3148]: I0620 19:14:55.971393 3148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:14:55.987363 kubelet[3148]: I0620 19:14:55.987305 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-568d4d4c9-rm7sx" podStartSLOduration=3.252428177 podStartE2EDuration="4.987290176s" podCreationTimestamp="2025-06-20 19:14:51 +0000 UTC" firstStartedPulling="2025-06-20 19:14:52.15122571 +0000 UTC m=+17.333133022" lastFinishedPulling="2025-06-20 19:14:53.886087719 +0000 UTC m=+19.067995021" observedRunningTime="2025-06-20 19:14:54.980096922 +0000 UTC m=+20.162004242" watchObservedRunningTime="2025-06-20 19:14:55.987290176 +0000 UTC m=+21.169197483" Jun 20 19:14:57.898783 kubelet[3148]: E0620 19:14:57.898748 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp8bp" podUID="c247f1b6-5251-4f90-8193-b5662a7bfa74" Jun 20 19:14:57.977328 containerd[1756]: time="2025-06-20T19:14:57.977097589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 19:14:59.898977 kubelet[3148]: E0620 19:14:59.898863 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp8bp" podUID="c247f1b6-5251-4f90-8193-b5662a7bfa74" Jun 20 19:15:01.533447 containerd[1756]: time="2025-06-20T19:15:01.533410940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:01.535881 containerd[1756]: time="2025-06-20T19:15:01.535848189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 20 19:15:01.538675 containerd[1756]: time="2025-06-20T19:15:01.538634340Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:01.543405 containerd[1756]: time="2025-06-20T19:15:01.543211686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:01.543822 containerd[1756]: time="2025-06-20T19:15:01.543798757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 3.566666891s" Jun 20 19:15:01.543859 containerd[1756]: time="2025-06-20T19:15:01.543830389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 20 19:15:01.551915 containerd[1756]: time="2025-06-20T19:15:01.551892175Z" level=info msg="CreateContainer within sandbox \"7c0b3f03e92848f3e2687e0b3693c86d151fbf54faee23becee9f55074215ca5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:15:01.583379 containerd[1756]: time="2025-06-20T19:15:01.581530192Z" level=info msg="Container 9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:01.587004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212886896.mount: Deactivated successfully. Jun 20 19:15:01.599579 containerd[1756]: time="2025-06-20T19:15:01.599555266Z" level=info msg="CreateContainer within sandbox \"7c0b3f03e92848f3e2687e0b3693c86d151fbf54faee23becee9f55074215ca5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e\"" Jun 20 19:15:01.600093 containerd[1756]: time="2025-06-20T19:15:01.599900052Z" level=info msg="StartContainer for \"9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e\"" Jun 20 19:15:01.601158 containerd[1756]: time="2025-06-20T19:15:01.601129000Z" level=info msg="connecting to shim 9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e" address="unix:///run/containerd/s/247af3280cea8a76a0799009936a8ee0b6a7e97f4b1417f99650c15258788e62" protocol=ttrpc version=3 Jun 20 19:15:01.619503 systemd[1]: Started cri-containerd-9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e.scope - libcontainer container 9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e. 
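The pod_startup_latency_tracker entry above for calico-typha-568d4d4c9-rm7sx is internally consistent: podStartE2EDuration is the span from podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration appears to be that same span minus the image-pull window (lastFinishedPulling - firstStartedPulling). A minimal check in Python, using only the monotonic m=+ offsets and durations quoted in that log line:

    # Reproduce the kubelet pod_startup_latency_tracker figures for
    # calico-typha-568d4d4c9-rm7sx from the values quoted in the log line above.
    first_started_pulling = 17.333133022   # firstStartedPulling, m=+ offset
    last_finished_pulling = 19.067995021   # lastFinishedPulling,  m=+ offset
    pod_start_e2e         = 4.987290176    # podStartE2EDuration in seconds

    image_pull_window = last_finished_pulling - first_started_pulling
    slo_duration = pod_start_e2e - image_pull_window
    print(f"image pull window:   {image_pull_window:.9f}s")  # ~1.734861999s
    print(f"podStartSLOduration: {slo_duration:.9f}s")       # ~3.252428177s, matching the log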
Jun 20 19:15:01.650369 containerd[1756]: time="2025-06-20T19:15:01.650320903Z" level=info msg="StartContainer for \"9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e\" returns successfully" Jun 20 19:15:01.899628 kubelet[3148]: E0620 19:15:01.899278 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gp8bp" podUID="c247f1b6-5251-4f90-8193-b5662a7bfa74" Jun 20 19:15:02.613902 kubelet[3148]: I0620 19:15:02.613871 3148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:15:02.787868 containerd[1756]: time="2025-06-20T19:15:02.787826155Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:15:02.789406 systemd[1]: cri-containerd-9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e.scope: Deactivated successfully. Jun 20 19:15:02.789654 systemd[1]: cri-containerd-9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e.scope: Consumed 368ms CPU time, 192.8M memory peak, 171.2M written to disk. Jun 20 19:15:02.790731 containerd[1756]: time="2025-06-20T19:15:02.790686235Z" level=info msg="received exit event container_id:\"9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e\" id:\"9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e\" pid:3876 exited_at:{seconds:1750446902 nanos:790459863}" Jun 20 19:15:02.790731 containerd[1756]: time="2025-06-20T19:15:02.790714143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e\" id:\"9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e\" pid:3876 exited_at:{seconds:1750446902 nanos:790459863}" Jun 20 19:15:02.806138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f312c20d54c8cfddc83fed9137b76fbab2f04b15eb791c516f785845657f21e-rootfs.mount: Deactivated successfully. Jun 20 19:15:02.853072 kubelet[3148]: I0620 19:15:02.852970 3148 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:15:03.049860 systemd[1]: Created slice kubepods-besteffort-poddf054429_1b56_4c94_93cb_890bfe97c219.slice - libcontainer container kubepods-besteffort-poddf054429_1b56_4c94_93cb_890bfe97c219.slice. 
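The reload error above ("no network config found in /etc/cni/net.d") is an ordering artifact rather than a new failure: the install-cni container writes its kubeconfig helper file before it writes the actual network config, and containerd reloads on every write to the directory but only treats *.conf, *.conflist and *.json files as CNI configs. A rough sketch of that discovery rule, assuming the default conf dir named in the log; the real containerd/libcni matching may differ in detail:

    import os

    CNI_CONF_DIR = "/etc/cni/net.d"                    # directory named in the error above
    CNI_EXTENSIONS = (".conf", ".conflist", ".json")   # what libcni counts as a network config

    def find_cni_configs(conf_dir: str = CNI_CONF_DIR) -> list:
        """List candidate CNI config files, skipping helpers such as calico-kubeconfig."""
        try:
            entries = sorted(os.listdir(conf_dir))
        except FileNotFoundError:
            return []
        return [os.path.join(conf_dir, name) for name in entries
                if name.endswith(CNI_EXTENSIONS)]

    configs = find_cni_configs()
    if not configs:
        # The state containerd reports above: calico-kubeconfig exists,
        # but no *.conf/*.conflist has been written yet.
        print("no network config found in", CNI_CONF_DIR)
    else:
        print("would load:", configs[0])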
Jun 20 19:15:03.076456 kubelet[3148]: I0620 19:15:03.076428 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df054429-1b56-4c94-93cb-890bfe97c219-tigera-ca-bundle\") pod \"calico-kube-controllers-7f9d94d467-vmsw8\" (UID: \"df054429-1b56-4c94-93cb-890bfe97c219\") " pod="calico-system/calico-kube-controllers-7f9d94d467-vmsw8" Jun 20 19:15:03.076988 kubelet[3148]: I0620 19:15:03.076480 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6nsf\" (UniqueName: \"kubernetes.io/projected/df054429-1b56-4c94-93cb-890bfe97c219-kube-api-access-f6nsf\") pod \"calico-kube-controllers-7f9d94d467-vmsw8\" (UID: \"df054429-1b56-4c94-93cb-890bfe97c219\") " pod="calico-system/calico-kube-controllers-7f9d94d467-vmsw8" Jun 20 19:15:03.340763 systemd[1]: Created slice kubepods-burstable-pod59e5d8d4_094b_4a27_8aa7_d10b49152c4d.slice - libcontainer container kubepods-burstable-pod59e5d8d4_094b_4a27_8aa7_d10b49152c4d.slice. Jun 20 19:15:03.451668 kubelet[3148]: I0620 19:15:03.378708 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59e5d8d4-094b-4a27-8aa7-d10b49152c4d-config-volume\") pod \"coredns-674b8bbfcf-mt5w6\" (UID: \"59e5d8d4-094b-4a27-8aa7-d10b49152c4d\") " pod="kube-system/coredns-674b8bbfcf-mt5w6" Jun 20 19:15:03.451668 kubelet[3148]: I0620 19:15:03.378756 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcgmf\" (UniqueName: \"kubernetes.io/projected/59e5d8d4-094b-4a27-8aa7-d10b49152c4d-kube-api-access-tcgmf\") pod \"coredns-674b8bbfcf-mt5w6\" (UID: \"59e5d8d4-094b-4a27-8aa7-d10b49152c4d\") " pod="kube-system/coredns-674b8bbfcf-mt5w6" Jun 20 19:15:03.452442 containerd[1756]: time="2025-06-20T19:15:03.452175610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9d94d467-vmsw8,Uid:df054429-1b56-4c94-93cb-890bfe97c219,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:03.495164 systemd[1]: Created slice kubepods-burstable-pod4dbd8824_5687_4981_a20a_4e1f5b33a154.slice - libcontainer container kubepods-burstable-pod4dbd8824_5687_4981_a20a_4e1f5b33a154.slice. 
Jun 20 19:15:03.580043 kubelet[3148]: I0620 19:15:03.580021 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcxp7\" (UniqueName: \"kubernetes.io/projected/4dbd8824-5687-4981-a20a-4e1f5b33a154-kube-api-access-zcxp7\") pod \"coredns-674b8bbfcf-72j62\" (UID: \"4dbd8824-5687-4981-a20a-4e1f5b33a154\") " pod="kube-system/coredns-674b8bbfcf-72j62" Jun 20 19:15:03.580043 kubelet[3148]: I0620 19:15:03.580050 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dbd8824-5687-4981-a20a-4e1f5b33a154-config-volume\") pod \"coredns-674b8bbfcf-72j62\" (UID: \"4dbd8824-5687-4981-a20a-4e1f5b33a154\") " pod="kube-system/coredns-674b8bbfcf-72j62" Jun 20 19:15:03.752610 containerd[1756]: time="2025-06-20T19:15:03.752540963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mt5w6,Uid:59e5d8d4-094b-4a27-8aa7-d10b49152c4d,Namespace:kube-system,Attempt:0,}" Jun 20 19:15:03.798164 containerd[1756]: time="2025-06-20T19:15:03.798133856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-72j62,Uid:4dbd8824-5687-4981-a20a-4e1f5b33a154,Namespace:kube-system,Attempt:0,}" Jun 20 19:15:04.506580 systemd[1]: Created slice kubepods-besteffort-pod27d669ba_30c6_4a02_8055_e0a275e427d6.slice - libcontainer container kubepods-besteffort-pod27d669ba_30c6_4a02_8055_e0a275e427d6.slice. Jun 20 19:15:04.562992 systemd[1]: Created slice kubepods-besteffort-podd505d167_e14e_4e9c_96db_fe9e2153cb77.slice - libcontainer container kubepods-besteffort-podd505d167_e14e_4e9c_96db_fe9e2153cb77.slice. Jun 20 19:15:04.572307 systemd[1]: Created slice kubepods-besteffort-pod27b54376_ecb2_44c4_9b0c_859b0b014285.slice - libcontainer container kubepods-besteffort-pod27b54376_ecb2_44c4_9b0c_859b0b014285.slice. Jun 20 19:15:04.578113 systemd[1]: Created slice kubepods-besteffort-podc247f1b6_5251_4f90_8193_b5662a7bfa74.slice - libcontainer container kubepods-besteffort-podc247f1b6_5251_4f90_8193_b5662a7bfa74.slice. 
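Each VerifyControllerAttachedVolume line above ties a volume to the pod UID that mounts it, and once the kubelet mounts it the volume appears on disk under that pod's directory; the systemd mount units later in this log use exactly that layout. A small sketch of the path convention, with the default /var/lib/kubelet root assumed:

    # Map a volume from the reconciler lines above (plugin, pod UID, volume name)
    # to the host directory the kubelet mounts it at. /var/lib/kubelet is the
    # default kubelet root-dir and is assumed here.
    KUBELET_ROOT = "/var/lib/kubelet"

    def volume_host_path(plugin: str, pod_uid: str, volume_name: str) -> str:
        plugin_dir = plugin.replace("/", "~")   # kubernetes.io/configmap -> kubernetes.io~configmap
        return f"{KUBELET_ROOT}/pods/{pod_uid}/volumes/{plugin_dir}/{volume_name}"

    print(volume_host_path("kubernetes.io/configmap",
                           "59e5d8d4-094b-4a27-8aa7-d10b49152c4d", "config-volume"))
    # /var/lib/kubelet/pods/59e5d8d4-094b-4a27-8aa7-d10b49152c4d/volumes/kubernetes.io~configmap/config-volume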
Jun 20 19:15:04.581635 containerd[1756]: time="2025-06-20T19:15:04.581537563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp8bp,Uid:c247f1b6-5251-4f90-8193-b5662a7bfa74,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:04.585948 kubelet[3148]: I0620 19:15:04.585922 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/27b54376-ecb2-44c4-9b0c-859b0b014285-whisker-backend-key-pair\") pod \"whisker-747cc78885-l8jvx\" (UID: \"27b54376-ecb2-44c4-9b0c-859b0b014285\") " pod="calico-system/whisker-747cc78885-l8jvx" Jun 20 19:15:04.586166 kubelet[3148]: I0620 19:15:04.585963 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj5b6\" (UniqueName: \"kubernetes.io/projected/d505d167-e14e-4e9c-96db-fe9e2153cb77-kube-api-access-gj5b6\") pod \"calico-apiserver-6d9b4ccdb4-szcct\" (UID: \"d505d167-e14e-4e9c-96db-fe9e2153cb77\") " pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-szcct" Jun 20 19:15:04.586166 kubelet[3148]: I0620 19:15:04.585980 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27b54376-ecb2-44c4-9b0c-859b0b014285-whisker-ca-bundle\") pod \"whisker-747cc78885-l8jvx\" (UID: \"27b54376-ecb2-44c4-9b0c-859b0b014285\") " pod="calico-system/whisker-747cc78885-l8jvx" Jun 20 19:15:04.586166 kubelet[3148]: I0620 19:15:04.585997 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgmvg\" (UniqueName: \"kubernetes.io/projected/27b54376-ecb2-44c4-9b0c-859b0b014285-kube-api-access-pgmvg\") pod \"whisker-747cc78885-l8jvx\" (UID: \"27b54376-ecb2-44c4-9b0c-859b0b014285\") " pod="calico-system/whisker-747cc78885-l8jvx" Jun 20 19:15:04.586166 kubelet[3148]: I0620 19:15:04.586015 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/27d669ba-30c6-4a02-8055-e0a275e427d6-calico-apiserver-certs\") pod \"calico-apiserver-6d9b4ccdb4-bv6fw\" (UID: \"27d669ba-30c6-4a02-8055-e0a275e427d6\") " pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-bv6fw" Jun 20 19:15:04.586166 kubelet[3148]: I0620 19:15:04.586031 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwb7q\" (UniqueName: \"kubernetes.io/projected/27d669ba-30c6-4a02-8055-e0a275e427d6-kube-api-access-nwb7q\") pod \"calico-apiserver-6d9b4ccdb4-bv6fw\" (UID: \"27d669ba-30c6-4a02-8055-e0a275e427d6\") " pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-bv6fw" Jun 20 19:15:04.586282 kubelet[3148]: I0620 19:15:04.586050 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1c89f0f-ffeb-41ca-bd71-31c366413c04-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-6pzql\" (UID: \"d1c89f0f-ffeb-41ca-bd71-31c366413c04\") " pod="calico-system/goldmane-5bd85449d4-6pzql" Jun 20 19:15:04.586282 kubelet[3148]: I0620 19:15:04.586067 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d1c89f0f-ffeb-41ca-bd71-31c366413c04-goldmane-key-pair\") pod \"goldmane-5bd85449d4-6pzql\" (UID: \"d1c89f0f-ffeb-41ca-bd71-31c366413c04\") 
" pod="calico-system/goldmane-5bd85449d4-6pzql" Jun 20 19:15:04.586282 kubelet[3148]: I0620 19:15:04.586087 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8tqn\" (UniqueName: \"kubernetes.io/projected/d1c89f0f-ffeb-41ca-bd71-31c366413c04-kube-api-access-z8tqn\") pod \"goldmane-5bd85449d4-6pzql\" (UID: \"d1c89f0f-ffeb-41ca-bd71-31c366413c04\") " pod="calico-system/goldmane-5bd85449d4-6pzql" Jun 20 19:15:04.586282 kubelet[3148]: I0620 19:15:04.586106 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d505d167-e14e-4e9c-96db-fe9e2153cb77-calico-apiserver-certs\") pod \"calico-apiserver-6d9b4ccdb4-szcct\" (UID: \"d505d167-e14e-4e9c-96db-fe9e2153cb77\") " pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-szcct" Jun 20 19:15:04.586282 kubelet[3148]: I0620 19:15:04.586123 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1c89f0f-ffeb-41ca-bd71-31c366413c04-config\") pod \"goldmane-5bd85449d4-6pzql\" (UID: \"d1c89f0f-ffeb-41ca-bd71-31c366413c04\") " pod="calico-system/goldmane-5bd85449d4-6pzql" Jun 20 19:15:04.586950 systemd[1]: Created slice kubepods-besteffort-podd1c89f0f_ffeb_41ca_bd71_31c366413c04.slice - libcontainer container kubepods-besteffort-podd1c89f0f_ffeb_41ca_bd71_31c366413c04.slice. Jun 20 19:15:04.659862 containerd[1756]: time="2025-06-20T19:15:04.659831889Z" level=error msg="Failed to destroy network for sandbox \"c37c00275d13611f2761b92f47dfa2c917574a903f1b0500b15ef1b078806b8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.664153 systemd[1]: run-netns-cni\x2d93a1bd41\x2dae81\x2d55af\x2dd20f\x2d26adde705f02.mount: Deactivated successfully. 
Jun 20 19:15:04.668046 containerd[1756]: time="2025-06-20T19:15:04.667948804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mt5w6,Uid:59e5d8d4-094b-4a27-8aa7-d10b49152c4d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c37c00275d13611f2761b92f47dfa2c917574a903f1b0500b15ef1b078806b8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.668155 kubelet[3148]: E0620 19:15:04.668104 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c37c00275d13611f2761b92f47dfa2c917574a903f1b0500b15ef1b078806b8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.668199 kubelet[3148]: E0620 19:15:04.668160 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c37c00275d13611f2761b92f47dfa2c917574a903f1b0500b15ef1b078806b8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mt5w6" Jun 20 19:15:04.668199 kubelet[3148]: E0620 19:15:04.668179 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c37c00275d13611f2761b92f47dfa2c917574a903f1b0500b15ef1b078806b8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mt5w6" Jun 20 19:15:04.668244 kubelet[3148]: E0620 19:15:04.668222 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mt5w6_kube-system(59e5d8d4-094b-4a27-8aa7-d10b49152c4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mt5w6_kube-system(59e5d8d4-094b-4a27-8aa7-d10b49152c4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c37c00275d13611f2761b92f47dfa2c917574a903f1b0500b15ef1b078806b8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mt5w6" podUID="59e5d8d4-094b-4a27-8aa7-d10b49152c4d" Jun 20 19:15:04.675774 containerd[1756]: time="2025-06-20T19:15:04.675639607Z" level=error msg="Failed to destroy network for sandbox \"f02277b4d81aa117aa337feee9445d0cf2b177df2e92c840322e49b4dae2948c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.678460 containerd[1756]: time="2025-06-20T19:15:04.678430678Z" level=error msg="Failed to destroy network for sandbox \"532c151c17cd269ca94fb6d296c68280cac60fd8b2e261f94d5e572dcd6c05ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jun 20 19:15:04.679234 containerd[1756]: time="2025-06-20T19:15:04.679201172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9d94d467-vmsw8,Uid:df054429-1b56-4c94-93cb-890bfe97c219,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f02277b4d81aa117aa337feee9445d0cf2b177df2e92c840322e49b4dae2948c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.679918 kubelet[3148]: E0620 19:15:04.679557 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f02277b4d81aa117aa337feee9445d0cf2b177df2e92c840322e49b4dae2948c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.679918 kubelet[3148]: E0620 19:15:04.679595 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f02277b4d81aa117aa337feee9445d0cf2b177df2e92c840322e49b4dae2948c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9d94d467-vmsw8" Jun 20 19:15:04.679918 kubelet[3148]: E0620 19:15:04.679613 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f02277b4d81aa117aa337feee9445d0cf2b177df2e92c840322e49b4dae2948c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9d94d467-vmsw8" Jun 20 19:15:04.680133 kubelet[3148]: E0620 19:15:04.679651 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9d94d467-vmsw8_calico-system(df054429-1b56-4c94-93cb-890bfe97c219)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9d94d467-vmsw8_calico-system(df054429-1b56-4c94-93cb-890bfe97c219)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f02277b4d81aa117aa337feee9445d0cf2b177df2e92c840322e49b4dae2948c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9d94d467-vmsw8" podUID="df054429-1b56-4c94-93cb-890bfe97c219" Jun 20 19:15:04.682286 containerd[1756]: time="2025-06-20T19:15:04.682247328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-72j62,Uid:4dbd8824-5687-4981-a20a-4e1f5b33a154,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"532c151c17cd269ca94fb6d296c68280cac60fd8b2e261f94d5e572dcd6c05ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.682458 kubelet[3148]: E0620 
19:15:04.682429 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532c151c17cd269ca94fb6d296c68280cac60fd8b2e261f94d5e572dcd6c05ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.682501 kubelet[3148]: E0620 19:15:04.682478 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532c151c17cd269ca94fb6d296c68280cac60fd8b2e261f94d5e572dcd6c05ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-72j62" Jun 20 19:15:04.682501 kubelet[3148]: E0620 19:15:04.682495 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532c151c17cd269ca94fb6d296c68280cac60fd8b2e261f94d5e572dcd6c05ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-72j62" Jun 20 19:15:04.682601 kubelet[3148]: E0620 19:15:04.682560 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-72j62_kube-system(4dbd8824-5687-4981-a20a-4e1f5b33a154)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-72j62_kube-system(4dbd8824-5687-4981-a20a-4e1f5b33a154)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"532c151c17cd269ca94fb6d296c68280cac60fd8b2e261f94d5e572dcd6c05ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-72j62" podUID="4dbd8824-5687-4981-a20a-4e1f5b33a154" Jun 20 19:15:04.684423 containerd[1756]: time="2025-06-20T19:15:04.684390719Z" level=error msg="Failed to destroy network for sandbox \"66a5197769a1bdb9c8927734fa2813587d726e56651cb8387d357125fea35c1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.687739 containerd[1756]: time="2025-06-20T19:15:04.687703489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp8bp,Uid:c247f1b6-5251-4f90-8193-b5662a7bfa74,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66a5197769a1bdb9c8927734fa2813587d726e56651cb8387d357125fea35c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.697302 kubelet[3148]: E0620 19:15:04.697236 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66a5197769a1bdb9c8927734fa2813587d726e56651cb8387d357125fea35c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jun 20 19:15:04.697302 kubelet[3148]: E0620 19:15:04.697271 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66a5197769a1bdb9c8927734fa2813587d726e56651cb8387d357125fea35c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gp8bp" Jun 20 19:15:04.697302 kubelet[3148]: E0620 19:15:04.697298 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66a5197769a1bdb9c8927734fa2813587d726e56651cb8387d357125fea35c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gp8bp" Jun 20 19:15:04.697546 kubelet[3148]: E0620 19:15:04.697330 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gp8bp_calico-system(c247f1b6-5251-4f90-8193-b5662a7bfa74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gp8bp_calico-system(c247f1b6-5251-4f90-8193-b5662a7bfa74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66a5197769a1bdb9c8927734fa2813587d726e56651cb8387d357125fea35c1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gp8bp" podUID="c247f1b6-5251-4f90-8193-b5662a7bfa74" Jun 20 19:15:04.809335 containerd[1756]: time="2025-06-20T19:15:04.809266612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9b4ccdb4-bv6fw,Uid:27d669ba-30c6-4a02-8055-e0a275e427d6,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:15:04.848006 containerd[1756]: time="2025-06-20T19:15:04.847983783Z" level=error msg="Failed to destroy network for sandbox \"94daf0fcaa7eb6a381d87a78da224f5d6a92f9b2946d1a648190fc421aa768ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.851476 containerd[1756]: time="2025-06-20T19:15:04.851442928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9b4ccdb4-bv6fw,Uid:27d669ba-30c6-4a02-8055-e0a275e427d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94daf0fcaa7eb6a381d87a78da224f5d6a92f9b2946d1a648190fc421aa768ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.851620 kubelet[3148]: E0620 19:15:04.851599 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94daf0fcaa7eb6a381d87a78da224f5d6a92f9b2946d1a648190fc421aa768ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.851665 kubelet[3148]: E0620 19:15:04.851636 3148 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94daf0fcaa7eb6a381d87a78da224f5d6a92f9b2946d1a648190fc421aa768ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-bv6fw" Jun 20 19:15:04.851665 kubelet[3148]: E0620 19:15:04.851653 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94daf0fcaa7eb6a381d87a78da224f5d6a92f9b2946d1a648190fc421aa768ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-bv6fw" Jun 20 19:15:04.851727 kubelet[3148]: E0620 19:15:04.851702 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d9b4ccdb4-bv6fw_calico-apiserver(27d669ba-30c6-4a02-8055-e0a275e427d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d9b4ccdb4-bv6fw_calico-apiserver(27d669ba-30c6-4a02-8055-e0a275e427d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94daf0fcaa7eb6a381d87a78da224f5d6a92f9b2946d1a648190fc421aa768ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-bv6fw" podUID="27d669ba-30c6-4a02-8055-e0a275e427d6" Jun 20 19:15:04.867972 containerd[1756]: time="2025-06-20T19:15:04.867949642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9b4ccdb4-szcct,Uid:d505d167-e14e-4e9c-96db-fe9e2153cb77,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:15:04.876666 containerd[1756]: time="2025-06-20T19:15:04.876505298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-747cc78885-l8jvx,Uid:27b54376-ecb2-44c4-9b0c-859b0b014285,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:04.890791 containerd[1756]: time="2025-06-20T19:15:04.890768110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-6pzql,Uid:d1c89f0f-ffeb-41ca-bd71-31c366413c04,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:04.935993 containerd[1756]: time="2025-06-20T19:15:04.935965863Z" level=error msg="Failed to destroy network for sandbox \"3631e0d634c6f67672feafe6e71832b0d9a81ae005d87d900c7278c02b9a1134\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.939984 containerd[1756]: time="2025-06-20T19:15:04.939858665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9b4ccdb4-szcct,Uid:d505d167-e14e-4e9c-96db-fe9e2153cb77,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3631e0d634c6f67672feafe6e71832b0d9a81ae005d87d900c7278c02b9a1134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.940878 kubelet[3148]: E0620 
19:15:04.940405 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3631e0d634c6f67672feafe6e71832b0d9a81ae005d87d900c7278c02b9a1134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.940878 kubelet[3148]: E0620 19:15:04.940445 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3631e0d634c6f67672feafe6e71832b0d9a81ae005d87d900c7278c02b9a1134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-szcct" Jun 20 19:15:04.940878 kubelet[3148]: E0620 19:15:04.940550 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3631e0d634c6f67672feafe6e71832b0d9a81ae005d87d900c7278c02b9a1134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-szcct" Jun 20 19:15:04.941001 kubelet[3148]: E0620 19:15:04.940600 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d9b4ccdb4-szcct_calico-apiserver(d505d167-e14e-4e9c-96db-fe9e2153cb77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d9b4ccdb4-szcct_calico-apiserver(d505d167-e14e-4e9c-96db-fe9e2153cb77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3631e0d634c6f67672feafe6e71832b0d9a81ae005d87d900c7278c02b9a1134\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-szcct" podUID="d505d167-e14e-4e9c-96db-fe9e2153cb77" Jun 20 19:15:04.946707 containerd[1756]: time="2025-06-20T19:15:04.946677750Z" level=error msg="Failed to destroy network for sandbox \"fdb1b3c14ceae1bb3ea64fe7cebbac6e12619f0715d2ea895deb9442afa16368\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.951609 containerd[1756]: time="2025-06-20T19:15:04.951300054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-747cc78885-l8jvx,Uid:27b54376-ecb2-44c4-9b0c-859b0b014285,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb1b3c14ceae1bb3ea64fe7cebbac6e12619f0715d2ea895deb9442afa16368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.952370 kubelet[3148]: E0620 19:15:04.952265 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb1b3c14ceae1bb3ea64fe7cebbac6e12619f0715d2ea895deb9442afa16368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.952370 kubelet[3148]: E0620 19:15:04.952301 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb1b3c14ceae1bb3ea64fe7cebbac6e12619f0715d2ea895deb9442afa16368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-747cc78885-l8jvx" Jun 20 19:15:04.952370 kubelet[3148]: E0620 19:15:04.952318 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb1b3c14ceae1bb3ea64fe7cebbac6e12619f0715d2ea895deb9442afa16368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-747cc78885-l8jvx" Jun 20 19:15:04.954011 kubelet[3148]: E0620 19:15:04.953499 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-747cc78885-l8jvx_calico-system(27b54376-ecb2-44c4-9b0c-859b0b014285)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-747cc78885-l8jvx_calico-system(27b54376-ecb2-44c4-9b0c-859b0b014285)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdb1b3c14ceae1bb3ea64fe7cebbac6e12619f0715d2ea895deb9442afa16368\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-747cc78885-l8jvx" podUID="27b54376-ecb2-44c4-9b0c-859b0b014285" Jun 20 19:15:04.956464 containerd[1756]: time="2025-06-20T19:15:04.956439391Z" level=error msg="Failed to destroy network for sandbox \"4a40b6eec11dc7e180f207216e9210a191a8aa0c244890288c4fe9eb85c234ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.961743 containerd[1756]: time="2025-06-20T19:15:04.961714733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-6pzql,Uid:d1c89f0f-ffeb-41ca-bd71-31c366413c04,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a40b6eec11dc7e180f207216e9210a191a8aa0c244890288c4fe9eb85c234ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.961882 kubelet[3148]: E0620 19:15:04.961860 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a40b6eec11dc7e180f207216e9210a191a8aa0c244890288c4fe9eb85c234ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:04.961931 kubelet[3148]: E0620 19:15:04.961894 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4a40b6eec11dc7e180f207216e9210a191a8aa0c244890288c4fe9eb85c234ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-6pzql" Jun 20 19:15:04.961931 kubelet[3148]: E0620 19:15:04.961911 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a40b6eec11dc7e180f207216e9210a191a8aa0c244890288c4fe9eb85c234ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-6pzql" Jun 20 19:15:04.961977 kubelet[3148]: E0620 19:15:04.961950 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-6pzql_calico-system(d1c89f0f-ffeb-41ca-bd71-31c366413c04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-6pzql_calico-system(d1c89f0f-ffeb-41ca-bd71-31c366413c04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a40b6eec11dc7e180f207216e9210a191a8aa0c244890288c4fe9eb85c234ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-6pzql" podUID="d1c89f0f-ffeb-41ca-bd71-31c366413c04" Jun 20 19:15:04.994078 containerd[1756]: time="2025-06-20T19:15:04.994018652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 19:15:05.500813 systemd[1]: run-netns-cni\x2dddd14451\x2daa65\x2d0296\x2d5061\x2d1c80eea5d3be.mount: Deactivated successfully. Jun 20 19:15:05.500904 systemd[1]: run-netns-cni\x2dc8332552\x2d2111\x2dee0a\x2da6e6\x2dda390c1eec5f.mount: Deactivated successfully. Jun 20 19:15:05.500952 systemd[1]: run-netns-cni\x2de0bfd40c\x2ddb5b\x2d87e0\x2d3d04\x2ddf3ab07b3c03.mount: Deactivated successfully. 
Jun 20 19:15:15.902373 containerd[1756]: time="2025-06-20T19:15:15.901479323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-72j62,Uid:4dbd8824-5687-4981-a20a-4e1f5b33a154,Namespace:kube-system,Attempt:0,}" Jun 20 19:15:15.902896 containerd[1756]: time="2025-06-20T19:15:15.902873759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9d94d467-vmsw8,Uid:df054429-1b56-4c94-93cb-890bfe97c219,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:15.903015 containerd[1756]: time="2025-06-20T19:15:15.903002144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9b4ccdb4-szcct,Uid:d505d167-e14e-4e9c-96db-fe9e2153cb77,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:15:15.903098 containerd[1756]: time="2025-06-20T19:15:15.903088048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mt5w6,Uid:59e5d8d4-094b-4a27-8aa7-d10b49152c4d,Namespace:kube-system,Attempt:0,}" Jun 20 19:15:16.051382 containerd[1756]: time="2025-06-20T19:15:16.051325253Z" level=error msg="Failed to destroy network for sandbox \"efe68f46bcd52a2032c154c1f97f6063fc8d4de4b4add7557fdf0f5bbcf8a65d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.052707 containerd[1756]: time="2025-06-20T19:15:16.052673540Z" level=error msg="Failed to destroy network for sandbox \"44ece4bcdc75a4f6ffc4aa7aeb9cdc165caaf67aef9164445a0ebeb0c33a6089\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.056974 containerd[1756]: time="2025-06-20T19:15:16.056926778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9d94d467-vmsw8,Uid:df054429-1b56-4c94-93cb-890bfe97c219,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"efe68f46bcd52a2032c154c1f97f6063fc8d4de4b4add7557fdf0f5bbcf8a65d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.057228 kubelet[3148]: E0620 19:15:16.057135 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efe68f46bcd52a2032c154c1f97f6063fc8d4de4b4add7557fdf0f5bbcf8a65d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.057228 kubelet[3148]: E0620 19:15:16.057183 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efe68f46bcd52a2032c154c1f97f6063fc8d4de4b4add7557fdf0f5bbcf8a65d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9d94d467-vmsw8" Jun 20 19:15:16.057228 kubelet[3148]: E0620 19:15:16.057203 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"efe68f46bcd52a2032c154c1f97f6063fc8d4de4b4add7557fdf0f5bbcf8a65d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9d94d467-vmsw8" Jun 20 19:15:16.057643 kubelet[3148]: E0620 19:15:16.057254 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9d94d467-vmsw8_calico-system(df054429-1b56-4c94-93cb-890bfe97c219)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9d94d467-vmsw8_calico-system(df054429-1b56-4c94-93cb-890bfe97c219)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efe68f46bcd52a2032c154c1f97f6063fc8d4de4b4add7557fdf0f5bbcf8a65d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9d94d467-vmsw8" podUID="df054429-1b56-4c94-93cb-890bfe97c219" Jun 20 19:15:16.060198 containerd[1756]: time="2025-06-20T19:15:16.060123738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9b4ccdb4-szcct,Uid:d505d167-e14e-4e9c-96db-fe9e2153cb77,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ece4bcdc75a4f6ffc4aa7aeb9cdc165caaf67aef9164445a0ebeb0c33a6089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.060296 kubelet[3148]: E0620 19:15:16.060261 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ece4bcdc75a4f6ffc4aa7aeb9cdc165caaf67aef9164445a0ebeb0c33a6089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.060335 kubelet[3148]: E0620 19:15:16.060297 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ece4bcdc75a4f6ffc4aa7aeb9cdc165caaf67aef9164445a0ebeb0c33a6089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-szcct" Jun 20 19:15:16.060335 kubelet[3148]: E0620 19:15:16.060315 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ece4bcdc75a4f6ffc4aa7aeb9cdc165caaf67aef9164445a0ebeb0c33a6089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-szcct" Jun 20 19:15:16.060397 kubelet[3148]: E0620 19:15:16.060365 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d9b4ccdb4-szcct_calico-apiserver(d505d167-e14e-4e9c-96db-fe9e2153cb77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6d9b4ccdb4-szcct_calico-apiserver(d505d167-e14e-4e9c-96db-fe9e2153cb77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44ece4bcdc75a4f6ffc4aa7aeb9cdc165caaf67aef9164445a0ebeb0c33a6089\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-szcct" podUID="d505d167-e14e-4e9c-96db-fe9e2153cb77" Jun 20 19:15:16.066448 containerd[1756]: time="2025-06-20T19:15:16.066379441Z" level=error msg="Failed to destroy network for sandbox \"ef03a680f463dc8d87fbe8a01493a95854b26d4237008824012de3f2af26c777\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.069001 containerd[1756]: time="2025-06-20T19:15:16.068931159Z" level=error msg="Failed to destroy network for sandbox \"e95702b6b2485f3b7a515d3c71957fa9c4cc340ba64b0f85dbd3e35dec91684f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.072499 containerd[1756]: time="2025-06-20T19:15:16.072461820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-72j62,Uid:4dbd8824-5687-4981-a20a-4e1f5b33a154,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef03a680f463dc8d87fbe8a01493a95854b26d4237008824012de3f2af26c777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.072902 kubelet[3148]: E0620 19:15:16.072881 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef03a680f463dc8d87fbe8a01493a95854b26d4237008824012de3f2af26c777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.073029 kubelet[3148]: E0620 19:15:16.073016 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef03a680f463dc8d87fbe8a01493a95854b26d4237008824012de3f2af26c777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-72j62" Jun 20 19:15:16.073157 kubelet[3148]: E0620 19:15:16.073083 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef03a680f463dc8d87fbe8a01493a95854b26d4237008824012de3f2af26c777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-72j62" Jun 20 19:15:16.073157 kubelet[3148]: E0620 19:15:16.073131 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-72j62_kube-system(4dbd8824-5687-4981-a20a-4e1f5b33a154)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-72j62_kube-system(4dbd8824-5687-4981-a20a-4e1f5b33a154)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef03a680f463dc8d87fbe8a01493a95854b26d4237008824012de3f2af26c777\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-72j62" podUID="4dbd8824-5687-4981-a20a-4e1f5b33a154" Jun 20 19:15:16.077297 containerd[1756]: time="2025-06-20T19:15:16.077264604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mt5w6,Uid:59e5d8d4-094b-4a27-8aa7-d10b49152c4d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95702b6b2485f3b7a515d3c71957fa9c4cc340ba64b0f85dbd3e35dec91684f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.077668 kubelet[3148]: E0620 19:15:16.077639 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95702b6b2485f3b7a515d3c71957fa9c4cc340ba64b0f85dbd3e35dec91684f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:15:16.077721 kubelet[3148]: E0620 19:15:16.077683 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95702b6b2485f3b7a515d3c71957fa9c4cc340ba64b0f85dbd3e35dec91684f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mt5w6" Jun 20 19:15:16.077721 kubelet[3148]: E0620 19:15:16.077702 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95702b6b2485f3b7a515d3c71957fa9c4cc340ba64b0f85dbd3e35dec91684f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mt5w6" Jun 20 19:15:16.077857 kubelet[3148]: E0620 19:15:16.077802 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mt5w6_kube-system(59e5d8d4-094b-4a27-8aa7-d10b49152c4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mt5w6_kube-system(59e5d8d4-094b-4a27-8aa7-d10b49152c4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e95702b6b2485f3b7a515d3c71957fa9c4cc340ba64b0f85dbd3e35dec91684f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mt5w6" podUID="59e5d8d4-094b-4a27-8aa7-d10b49152c4d" Jun 20 19:15:16.348297 systemd[1]: run-netns-cni\x2d8ce43b1d\x2d9ac3\x2d6bff\x2d4df3\x2d0d78aa9f6ed9.mount: Deactivated successfully. 
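The failures in this block are the second attempt for the same pods: the first RunPodSandbox calls at 19:15:03-04 failed, kubelet backed the pods off, and it retried here at 19:15:15-16. The spacing can be read off the containerd timestamps; the roughly 10 s figure below is kubelet's default sync-error backoff and is an assumption about this cluster rather than something stated in the log:

    from datetime import datetime, timezone

    # RunPodSandbox timestamps for coredns-674b8bbfcf-mt5w6, copied from the
    # containerd entries above (first attempt, then the retry in this block).
    first_attempt  = datetime(2025, 6, 20, 19, 15, 3, 752540, tzinfo=timezone.utc)
    second_attempt = datetime(2025, 6, 20, 19, 15, 15, 901479, tzinfo=timezone.utc)

    gap = (second_attempt - first_attempt).total_seconds()
    print(f"retry gap: {gap:.3f}s")   # ~12.149s, consistent with a ~10s backoff
                                      # plus pod-worker/sync-loop latency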
Jun 20 19:15:16.348425 systemd[1]: run-netns-cni\x2df3d05934\x2dbb39\x2dbab9\x2d8d6e\x2df8ba8dee7443.mount: Deactivated successfully. Jun 20 19:15:16.348482 systemd[1]: run-netns-cni\x2d8f11c93d\x2d08eb\x2d47d2\x2d2db0\x2dd1128fed48e1.mount: Deactivated successfully. Jun 20 19:15:16.348532 systemd[1]: run-netns-cni\x2de0845dbc\x2db88d\x2de8b3\x2d4c27\x2d9f36ddf107be.mount: Deactivated successfully. Jun 20 19:15:17.139699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount111434204.mount: Deactivated successfully. Jun 20 19:15:17.165837 containerd[1756]: time="2025-06-20T19:15:17.165804199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:17.168476 containerd[1756]: time="2025-06-20T19:15:17.168455745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 20 19:15:17.171772 containerd[1756]: time="2025-06-20T19:15:17.171750090Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:17.175159 containerd[1756]: time="2025-06-20T19:15:17.175135940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:17.175442 containerd[1756]: time="2025-06-20T19:15:17.175418954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 12.18131143s" Jun 20 19:15:17.175484 containerd[1756]: time="2025-06-20T19:15:17.175449798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 20 19:15:17.194110 containerd[1756]: time="2025-06-20T19:15:17.194085787Z" level=info msg="CreateContainer within sandbox \"7c0b3f03e92848f3e2687e0b3693c86d151fbf54faee23becee9f55074215ca5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 19:15:17.214697 containerd[1756]: time="2025-06-20T19:15:17.214673617Z" level=info msg="Container 306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:17.233746 containerd[1756]: time="2025-06-20T19:15:17.233723130Z" level=info msg="CreateContainer within sandbox \"7c0b3f03e92848f3e2687e0b3693c86d151fbf54faee23becee9f55074215ca5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54\"" Jun 20 19:15:17.234102 containerd[1756]: time="2025-06-20T19:15:17.234085912Z" level=info msg="StartContainer for \"306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54\"" Jun 20 19:15:17.235677 containerd[1756]: time="2025-06-20T19:15:17.235653612Z" level=info msg="connecting to shim 306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54" address="unix:///run/containerd/s/247af3280cea8a76a0799009936a8ee0b6a7e97f4b1417f99650c15258788e62" protocol=ttrpc version=3 Jun 20 19:15:17.257466 systemd[1]: Started 
cri-containerd-306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54.scope - libcontainer container 306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54. Jun 20 19:15:17.286897 containerd[1756]: time="2025-06-20T19:15:17.286824395Z" level=info msg="StartContainer for \"306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54\" returns successfully" Jun 20 19:15:17.637964 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 19:15:17.638036 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 20 19:15:17.855748 kubelet[3148]: I0620 19:15:17.855384 3148 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/27b54376-ecb2-44c4-9b0c-859b0b014285-whisker-backend-key-pair\") pod \"27b54376-ecb2-44c4-9b0c-859b0b014285\" (UID: \"27b54376-ecb2-44c4-9b0c-859b0b014285\") " Jun 20 19:15:17.855748 kubelet[3148]: I0620 19:15:17.855423 3148 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27b54376-ecb2-44c4-9b0c-859b0b014285-whisker-ca-bundle\") pod \"27b54376-ecb2-44c4-9b0c-859b0b014285\" (UID: \"27b54376-ecb2-44c4-9b0c-859b0b014285\") " Jun 20 19:15:17.855748 kubelet[3148]: I0620 19:15:17.855443 3148 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgmvg\" (UniqueName: \"kubernetes.io/projected/27b54376-ecb2-44c4-9b0c-859b0b014285-kube-api-access-pgmvg\") pod \"27b54376-ecb2-44c4-9b0c-859b0b014285\" (UID: \"27b54376-ecb2-44c4-9b0c-859b0b014285\") " Jun 20 19:15:17.859833 kubelet[3148]: I0620 19:15:17.859806 3148 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27b54376-ecb2-44c4-9b0c-859b0b014285-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "27b54376-ecb2-44c4-9b0c-859b0b014285" (UID: "27b54376-ecb2-44c4-9b0c-859b0b014285"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:15:17.860282 systemd[1]: var-lib-kubelet-pods-27b54376\x2decb2\x2d44c4\x2d9b0c\x2d859b0b014285-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpgmvg.mount: Deactivated successfully. Jun 20 19:15:17.863532 kubelet[3148]: I0620 19:15:17.863495 3148 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b54376-ecb2-44c4-9b0c-859b0b014285-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "27b54376-ecb2-44c4-9b0c-859b0b014285" (UID: "27b54376-ecb2-44c4-9b0c-859b0b014285"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:15:17.864640 kubelet[3148]: I0620 19:15:17.863688 3148 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b54376-ecb2-44c4-9b0c-859b0b014285-kube-api-access-pgmvg" (OuterVolumeSpecName: "kube-api-access-pgmvg") pod "27b54376-ecb2-44c4-9b0c-859b0b014285" (UID: "27b54376-ecb2-44c4-9b0c-859b0b014285"). InnerVolumeSpecName "kube-api-access-pgmvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:15:17.864437 systemd[1]: var-lib-kubelet-pods-27b54376\x2decb2\x2d44c4\x2d9b0c\x2d859b0b014285-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
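
The mount unit names in the systemd entries above (run-netns-cni\x2d8ce43b1d..., var-lib-kubelet-pods-...\x7eprojected-...) follow systemd's unit-name escaping: '/' becomes '-', and bytes outside [A-Za-z0-9:_.] (including '-' itself and '~') become \xXX escapes, which is why the CNI netns IDs and kubelet volume paths look mangled. The Go sketch below approximates that rule for illustration; it is not systemd's own implementation, and it reproduces the kube-api-access-pgmvg unit name shown above from its on-disk path.

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeUnit approximates systemd unit-name escaping as seen in the log:
    // '/' maps to '-', any other byte outside [A-Za-z0-9:_.] maps to \xXX.
    func escapeUnit(path string) string {
        var b strings.Builder
        for i := 0; i < len(path); i++ {
            c := path[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        // Volume directory from the kubelet UnmountVolume entries above.
        p := "var/lib/kubelet/pods/27b54376-ecb2-44c4-9b0c-859b0b014285/volumes/kubernetes.io~projected/kube-api-access-pgmvg"
        fmt.Println(escapeUnit(p) + ".mount")
    }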
Jun 20 19:15:17.956472 kubelet[3148]: I0620 19:15:17.956451 3148 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/27b54376-ecb2-44c4-9b0c-859b0b014285-whisker-backend-key-pair\") on node \"ci-4344.1.0-a-dde0712b19\" DevicePath \"\"" Jun 20 19:15:17.956560 kubelet[3148]: I0620 19:15:17.956475 3148 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27b54376-ecb2-44c4-9b0c-859b0b014285-whisker-ca-bundle\") on node \"ci-4344.1.0-a-dde0712b19\" DevicePath \"\"" Jun 20 19:15:17.956560 kubelet[3148]: I0620 19:15:17.956487 3148 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgmvg\" (UniqueName: \"kubernetes.io/projected/27b54376-ecb2-44c4-9b0c-859b0b014285-kube-api-access-pgmvg\") on node \"ci-4344.1.0-a-dde0712b19\" DevicePath \"\"" Jun 20 19:15:18.021455 systemd[1]: Removed slice kubepods-besteffort-pod27b54376_ecb2_44c4_9b0c_859b0b014285.slice - libcontainer container kubepods-besteffort-pod27b54376_ecb2_44c4_9b0c_859b0b014285.slice. Jun 20 19:15:18.038622 kubelet[3148]: I0620 19:15:18.037249 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5r8q4" podStartSLOduration=1.349126344 podStartE2EDuration="26.037233304s" podCreationTimestamp="2025-06-20 19:14:52 +0000 UTC" firstStartedPulling="2025-06-20 19:14:52.487901585 +0000 UTC m=+17.669808890" lastFinishedPulling="2025-06-20 19:15:17.176008538 +0000 UTC m=+42.357915850" observedRunningTime="2025-06-20 19:15:18.036488882 +0000 UTC m=+43.218396186" watchObservedRunningTime="2025-06-20 19:15:18.037233304 +0000 UTC m=+43.219140615" Jun 20 19:15:18.092242 containerd[1756]: time="2025-06-20T19:15:18.092209931Z" level=info msg="TaskExit event in podsandbox handler container_id:\"306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54\" id:\"7d36d35a8c078db0d09426a8c6eec1030a562d5f46102b50085f919cf28ef945\" pid:4324 exit_status:1 exited_at:{seconds:1750446918 nanos:92016867}" Jun 20 19:15:18.107197 systemd[1]: Created slice kubepods-besteffort-pod0c110b17_58c8_4748_9962_f047cbbb9d65.slice - libcontainer container kubepods-besteffort-pod0c110b17_58c8_4748_9962_f047cbbb9d65.slice. 
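
The "Observed pod startup duration" entry for calico-node-5r8q4 above is internally consistent: the reported podStartSLOduration (1.349126344s) equals podStartE2EDuration (26.037233304s) minus the image-pull window, lastFinishedPulling minus firstStartedPulling (about 24.688106953s). Whether the kubelet tracker is defined exactly that way is an assumption here; the short Go check below only shows that the numbers quoted in the entry fit that relationship to within a few nanoseconds of rounding.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the "Observed pod startup duration" entry above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        firstPull, _ := time.Parse(layout, "2025-06-20 19:14:52.487901585 +0000 UTC")
        lastPull, _ := time.Parse(layout, "2025-06-20 19:15:17.176008538 +0000 UTC")

        e2e := 26.037233304 // podStartE2EDuration, seconds
        pull := lastPull.Sub(firstPull).Seconds()

        // 26.037233304 - 24.688106953 = 1.349126351, matching the reported
        // podStartSLOduration of 1.349126344s up to rounding.
        fmt.Printf("pull window: %.9fs\n", pull)
        fmt.Printf("e2e - pull:  %.9fs\n", e2e-pull)
    }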
Jun 20 19:15:18.157417 kubelet[3148]: I0620 19:15:18.157394 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0c110b17-58c8-4748-9962-f047cbbb9d65-whisker-backend-key-pair\") pod \"whisker-9869c7c5d-mg8rc\" (UID: \"0c110b17-58c8-4748-9962-f047cbbb9d65\") " pod="calico-system/whisker-9869c7c5d-mg8rc" Jun 20 19:15:18.157497 kubelet[3148]: I0620 19:15:18.157422 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfbz\" (UniqueName: \"kubernetes.io/projected/0c110b17-58c8-4748-9962-f047cbbb9d65-kube-api-access-4qfbz\") pod \"whisker-9869c7c5d-mg8rc\" (UID: \"0c110b17-58c8-4748-9962-f047cbbb9d65\") " pod="calico-system/whisker-9869c7c5d-mg8rc" Jun 20 19:15:18.157497 kubelet[3148]: I0620 19:15:18.157441 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c110b17-58c8-4748-9962-f047cbbb9d65-whisker-ca-bundle\") pod \"whisker-9869c7c5d-mg8rc\" (UID: \"0c110b17-58c8-4748-9962-f047cbbb9d65\") " pod="calico-system/whisker-9869c7c5d-mg8rc" Jun 20 19:15:18.410609 containerd[1756]: time="2025-06-20T19:15:18.410528095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9869c7c5d-mg8rc,Uid:0c110b17-58c8-4748-9962-f047cbbb9d65,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:18.495591 systemd-networkd[1360]: cali83224c30329: Link UP Jun 20 19:15:18.496498 systemd-networkd[1360]: cali83224c30329: Gained carrier Jun 20 19:15:18.510426 containerd[1756]: 2025-06-20 19:15:18.432 [INFO][4340] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:15:18.510426 containerd[1756]: 2025-06-20 19:15:18.439 [INFO][4340] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0 whisker-9869c7c5d- calico-system 0c110b17-58c8-4748-9962-f047cbbb9d65 932 0 2025-06-20 19:15:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9869c7c5d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.0-a-dde0712b19 whisker-9869c7c5d-mg8rc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali83224c30329 [] [] }} ContainerID="3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" Namespace="calico-system" Pod="whisker-9869c7c5d-mg8rc" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-" Jun 20 19:15:18.510426 containerd[1756]: 2025-06-20 19:15:18.439 [INFO][4340] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" Namespace="calico-system" Pod="whisker-9869c7c5d-mg8rc" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0" Jun 20 19:15:18.510426 containerd[1756]: 2025-06-20 19:15:18.458 [INFO][4351] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" HandleID="k8s-pod-network.3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" Workload="ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0" Jun 20 19:15:18.510633 containerd[1756]: 2025-06-20 19:15:18.458 [INFO][4351] ipam/ipam_plugin.go 265: Auto assigning 
IP ContainerID="3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" HandleID="k8s-pod-network.3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" Workload="ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-dde0712b19", "pod":"whisker-9869c7c5d-mg8rc", "timestamp":"2025-06-20 19:15:18.457998076 +0000 UTC"}, Hostname:"ci-4344.1.0-a-dde0712b19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:18.510633 containerd[1756]: 2025-06-20 19:15:18.458 [INFO][4351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:18.510633 containerd[1756]: 2025-06-20 19:15:18.458 [INFO][4351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:15:18.510633 containerd[1756]: 2025-06-20 19:15:18.458 [INFO][4351] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-dde0712b19' Jun 20 19:15:18.510633 containerd[1756]: 2025-06-20 19:15:18.466 [INFO][4351] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:18.510633 containerd[1756]: 2025-06-20 19:15:18.470 [INFO][4351] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:18.510633 containerd[1756]: 2025-06-20 19:15:18.473 [INFO][4351] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:18.510633 containerd[1756]: 2025-06-20 19:15:18.474 [INFO][4351] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:18.510633 containerd[1756]: 2025-06-20 19:15:18.475 [INFO][4351] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:18.510831 containerd[1756]: 2025-06-20 19:15:18.475 [INFO][4351] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:18.510831 containerd[1756]: 2025-06-20 19:15:18.477 [INFO][4351] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5 Jun 20 19:15:18.510831 containerd[1756]: 2025-06-20 19:15:18.483 [INFO][4351] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:18.510831 containerd[1756]: 2025-06-20 19:15:18.487 [INFO][4351] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.193/26] block=192.168.120.192/26 handle="k8s-pod-network.3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:18.510831 containerd[1756]: 2025-06-20 19:15:18.488 [INFO][4351] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.193/26] handle="k8s-pod-network.3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:18.510831 containerd[1756]: 2025-06-20 19:15:18.488 
[INFO][4351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:15:18.510831 containerd[1756]: 2025-06-20 19:15:18.488 [INFO][4351] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.193/26] IPv6=[] ContainerID="3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" HandleID="k8s-pod-network.3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" Workload="ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0" Jun 20 19:15:18.510963 containerd[1756]: 2025-06-20 19:15:18.490 [INFO][4340] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" Namespace="calico-system" Pod="whisker-9869c7c5d-mg8rc" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0", GenerateName:"whisker-9869c7c5d-", Namespace:"calico-system", SelfLink:"", UID:"0c110b17-58c8-4748-9962-f047cbbb9d65", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9869c7c5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"", Pod:"whisker-9869c7c5d-mg8rc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali83224c30329", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:18.510963 containerd[1756]: 2025-06-20 19:15:18.490 [INFO][4340] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.193/32] ContainerID="3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" Namespace="calico-system" Pod="whisker-9869c7c5d-mg8rc" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0" Jun 20 19:15:18.511043 containerd[1756]: 2025-06-20 19:15:18.490 [INFO][4340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83224c30329 ContainerID="3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" Namespace="calico-system" Pod="whisker-9869c7c5d-mg8rc" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0" Jun 20 19:15:18.511043 containerd[1756]: 2025-06-20 19:15:18.497 [INFO][4340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" Namespace="calico-system" Pod="whisker-9869c7c5d-mg8rc" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0" Jun 20 19:15:18.511088 containerd[1756]: 2025-06-20 19:15:18.497 [INFO][4340] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" Namespace="calico-system" Pod="whisker-9869c7c5d-mg8rc" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0", GenerateName:"whisker-9869c7c5d-", Namespace:"calico-system", SelfLink:"", UID:"0c110b17-58c8-4748-9962-f047cbbb9d65", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9869c7c5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5", Pod:"whisker-9869c7c5d-mg8rc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali83224c30329", MAC:"fe:64:aa:74:45:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:18.511147 containerd[1756]: 2025-06-20 19:15:18.509 [INFO][4340] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" Namespace="calico-system" Pod="whisker-9869c7c5d-mg8rc" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-whisker--9869c7c5d--mg8rc-eth0" Jun 20 19:15:18.545809 containerd[1756]: time="2025-06-20T19:15:18.545776022Z" level=info msg="connecting to shim 3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5" address="unix:///run/containerd/s/95b55270bd4b818a22abe5aeb06fa8945ac6595b295a7238958206f961bfd6b1" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:18.565485 systemd[1]: Started cri-containerd-3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5.scope - libcontainer container 3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5. 
Jun 20 19:15:18.605659 containerd[1756]: time="2025-06-20T19:15:18.605637590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9869c7c5d-mg8rc,Uid:0c110b17-58c8-4748-9962-f047cbbb9d65,Namespace:calico-system,Attempt:0,} returns sandbox id \"3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5\"" Jun 20 19:15:18.606628 containerd[1756]: time="2025-06-20T19:15:18.606589897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 19:15:18.899850 containerd[1756]: time="2025-06-20T19:15:18.899741935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-6pzql,Uid:d1c89f0f-ffeb-41ca-bd71-31c366413c04,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:18.899850 containerd[1756]: time="2025-06-20T19:15:18.899759996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9b4ccdb4-bv6fw,Uid:27d669ba-30c6-4a02-8055-e0a275e427d6,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:15:18.901240 kubelet[3148]: I0620 19:15:18.901212 3148 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27b54376-ecb2-44c4-9b0c-859b0b014285" path="/var/lib/kubelet/pods/27b54376-ecb2-44c4-9b0c-859b0b014285/volumes" Jun 20 19:15:19.044522 systemd-networkd[1360]: cali8ce35e2e942: Link UP Jun 20 19:15:19.046398 systemd-networkd[1360]: cali8ce35e2e942: Gained carrier Jun 20 19:15:19.066794 containerd[1756]: 2025-06-20 19:15:18.936 [INFO][4410] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:15:19.066794 containerd[1756]: 2025-06-20 19:15:18.949 [INFO][4410] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0 calico-apiserver-6d9b4ccdb4- calico-apiserver 27d669ba-30c6-4a02-8055-e0a275e427d6 850 0 2025-06-20 19:14:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d9b4ccdb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-dde0712b19 calico-apiserver-6d9b4ccdb4-bv6fw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8ce35e2e942 [] [] }} ContainerID="24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-bv6fw" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-" Jun 20 19:15:19.066794 containerd[1756]: 2025-06-20 19:15:18.949 [INFO][4410] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-bv6fw" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0" Jun 20 19:15:19.066794 containerd[1756]: 2025-06-20 19:15:18.985 [INFO][4435] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" HandleID="k8s-pod-network.24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" Workload="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0" Jun 20 19:15:19.067016 containerd[1756]: 2025-06-20 19:15:18.986 [INFO][4435] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" 
HandleID="k8s-pod-network.24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" Workload="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-dde0712b19", "pod":"calico-apiserver-6d9b4ccdb4-bv6fw", "timestamp":"2025-06-20 19:15:18.985018228 +0000 UTC"}, Hostname:"ci-4344.1.0-a-dde0712b19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:19.067016 containerd[1756]: 2025-06-20 19:15:18.986 [INFO][4435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:19.067016 containerd[1756]: 2025-06-20 19:15:18.986 [INFO][4435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:15:19.067016 containerd[1756]: 2025-06-20 19:15:18.986 [INFO][4435] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-dde0712b19' Jun 20 19:15:19.067016 containerd[1756]: 2025-06-20 19:15:18.996 [INFO][4435] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.067016 containerd[1756]: 2025-06-20 19:15:19.000 [INFO][4435] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.067016 containerd[1756]: 2025-06-20 19:15:19.008 [INFO][4435] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.067016 containerd[1756]: 2025-06-20 19:15:19.009 [INFO][4435] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.067016 containerd[1756]: 2025-06-20 19:15:19.011 [INFO][4435] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.067214 containerd[1756]: 2025-06-20 19:15:19.011 [INFO][4435] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.067214 containerd[1756]: 2025-06-20 19:15:19.013 [INFO][4435] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0 Jun 20 19:15:19.067214 containerd[1756]: 2025-06-20 19:15:19.019 [INFO][4435] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.067214 containerd[1756]: 2025-06-20 19:15:19.026 [INFO][4435] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.194/26] block=192.168.120.192/26 handle="k8s-pod-network.24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.067214 containerd[1756]: 2025-06-20 19:15:19.026 [INFO][4435] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.194/26] handle="k8s-pod-network.24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.067214 containerd[1756]: 2025-06-20 19:15:19.026 [INFO][4435] ipam/ipam_plugin.go 374: Released host-wide IPAM 
lock. Jun 20 19:15:19.067214 containerd[1756]: 2025-06-20 19:15:19.026 [INFO][4435] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.194/26] IPv6=[] ContainerID="24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" HandleID="k8s-pod-network.24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" Workload="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0" Jun 20 19:15:19.067344 containerd[1756]: 2025-06-20 19:15:19.029 [INFO][4410] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-bv6fw" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0", GenerateName:"calico-apiserver-6d9b4ccdb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"27d669ba-30c6-4a02-8055-e0a275e427d6", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d9b4ccdb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"", Pod:"calico-apiserver-6d9b4ccdb4-bv6fw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ce35e2e942", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:19.068548 containerd[1756]: 2025-06-20 19:15:19.029 [INFO][4410] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.194/32] ContainerID="24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-bv6fw" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0" Jun 20 19:15:19.068548 containerd[1756]: 2025-06-20 19:15:19.029 [INFO][4410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ce35e2e942 ContainerID="24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-bv6fw" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0" Jun 20 19:15:19.068548 containerd[1756]: 2025-06-20 19:15:19.048 [INFO][4410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-bv6fw" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0" Jun 20 19:15:19.068621 containerd[1756]: 2025-06-20 19:15:19.049 
[INFO][4410] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-bv6fw" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0", GenerateName:"calico-apiserver-6d9b4ccdb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"27d669ba-30c6-4a02-8055-e0a275e427d6", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d9b4ccdb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0", Pod:"calico-apiserver-6d9b4ccdb4-bv6fw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ce35e2e942", MAC:"e6:22:69:a5:94:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:19.068676 containerd[1756]: 2025-06-20 19:15:19.064 [INFO][4410] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-bv6fw" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--bv6fw-eth0" Jun 20 19:15:19.124171 containerd[1756]: time="2025-06-20T19:15:19.123982936Z" level=info msg="connecting to shim 24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0" address="unix:///run/containerd/s/37af6c7a2cd2940fb483b814a8e019f2c600f3278632f9ad7cfe40738dfc4f9b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:19.158498 systemd[1]: Started cri-containerd-24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0.scope - libcontainer container 24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0. 
Jun 20 19:15:19.161061 systemd-networkd[1360]: cali30c63536c46: Link UP Jun 20 19:15:19.161197 systemd-networkd[1360]: cali30c63536c46: Gained carrier Jun 20 19:15:19.183546 containerd[1756]: 2025-06-20 19:15:18.945 [INFO][4422] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:15:19.183546 containerd[1756]: 2025-06-20 19:15:18.953 [INFO][4422] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0 goldmane-5bd85449d4- calico-system d1c89f0f-ffeb-41ca-bd71-31c366413c04 853 0 2025-06-20 19:14:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.0-a-dde0712b19 goldmane-5bd85449d4-6pzql eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali30c63536c46 [] [] }} ContainerID="4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" Namespace="calico-system" Pod="goldmane-5bd85449d4-6pzql" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-" Jun 20 19:15:19.183546 containerd[1756]: 2025-06-20 19:15:18.953 [INFO][4422] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" Namespace="calico-system" Pod="goldmane-5bd85449d4-6pzql" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0" Jun 20 19:15:19.183546 containerd[1756]: 2025-06-20 19:15:18.991 [INFO][4440] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" HandleID="k8s-pod-network.4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" Workload="ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0" Jun 20 19:15:19.183801 containerd[1756]: 2025-06-20 19:15:18.994 [INFO][4440] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" HandleID="k8s-pod-network.4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" Workload="ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-dde0712b19", "pod":"goldmane-5bd85449d4-6pzql", "timestamp":"2025-06-20 19:15:18.991306027 +0000 UTC"}, Hostname:"ci-4344.1.0-a-dde0712b19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:19.183801 containerd[1756]: 2025-06-20 19:15:18.994 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:19.183801 containerd[1756]: 2025-06-20 19:15:19.026 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:15:19.183801 containerd[1756]: 2025-06-20 19:15:19.027 [INFO][4440] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-dde0712b19' Jun 20 19:15:19.183801 containerd[1756]: 2025-06-20 19:15:19.097 [INFO][4440] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.183801 containerd[1756]: 2025-06-20 19:15:19.102 [INFO][4440] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.183801 containerd[1756]: 2025-06-20 19:15:19.109 [INFO][4440] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.183801 containerd[1756]: 2025-06-20 19:15:19.112 [INFO][4440] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.183801 containerd[1756]: 2025-06-20 19:15:19.118 [INFO][4440] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.184203 containerd[1756]: 2025-06-20 19:15:19.118 [INFO][4440] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.184203 containerd[1756]: 2025-06-20 19:15:19.132 [INFO][4440] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d Jun 20 19:15:19.184203 containerd[1756]: 2025-06-20 19:15:19.140 [INFO][4440] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.184203 containerd[1756]: 2025-06-20 19:15:19.152 [INFO][4440] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.195/26] block=192.168.120.192/26 handle="k8s-pod-network.4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.184203 containerd[1756]: 2025-06-20 19:15:19.152 [INFO][4440] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.195/26] handle="k8s-pod-network.4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:19.184203 containerd[1756]: 2025-06-20 19:15:19.152 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:15:19.184203 containerd[1756]: 2025-06-20 19:15:19.152 [INFO][4440] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.195/26] IPv6=[] ContainerID="4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" HandleID="k8s-pod-network.4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" Workload="ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0" Jun 20 19:15:19.184470 containerd[1756]: 2025-06-20 19:15:19.156 [INFO][4422] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" Namespace="calico-system" Pod="goldmane-5bd85449d4-6pzql" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"d1c89f0f-ffeb-41ca-bd71-31c366413c04", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"", Pod:"goldmane-5bd85449d4-6pzql", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali30c63536c46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:19.184581 containerd[1756]: 2025-06-20 19:15:19.156 [INFO][4422] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.195/32] ContainerID="4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" Namespace="calico-system" Pod="goldmane-5bd85449d4-6pzql" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0" Jun 20 19:15:19.184581 containerd[1756]: 2025-06-20 19:15:19.156 [INFO][4422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30c63536c46 ContainerID="4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" Namespace="calico-system" Pod="goldmane-5bd85449d4-6pzql" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0" Jun 20 19:15:19.184581 containerd[1756]: 2025-06-20 19:15:19.163 [INFO][4422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" Namespace="calico-system" Pod="goldmane-5bd85449d4-6pzql" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0" Jun 20 19:15:19.184652 containerd[1756]: 2025-06-20 19:15:19.164 [INFO][4422] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" 
Namespace="calico-system" Pod="goldmane-5bd85449d4-6pzql" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"d1c89f0f-ffeb-41ca-bd71-31c366413c04", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d", Pod:"goldmane-5bd85449d4-6pzql", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali30c63536c46", MAC:"da:e1:d8:c5:f5:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:19.184705 containerd[1756]: 2025-06-20 19:15:19.180 [INFO][4422] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" Namespace="calico-system" Pod="goldmane-5bd85449d4-6pzql" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-goldmane--5bd85449d4--6pzql-eth0" Jun 20 19:15:19.232796 containerd[1756]: time="2025-06-20T19:15:19.232770007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54\" id:\"3ba88c1f31b6748c96580cf1aa9321ab58c493a9f618463fe98af193013ccbfb\" pid:4526 exit_status:1 exited_at:{seconds:1750446919 nanos:232183122}" Jun 20 19:15:19.235499 containerd[1756]: time="2025-06-20T19:15:19.235460974Z" level=info msg="connecting to shim 4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d" address="unix:///run/containerd/s/9b67637c47b425bcc0016e586cbb503791fb554c0c54ddb57b4533cff2af0f58" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:19.278621 systemd[1]: Started cri-containerd-4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d.scope - libcontainer container 4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d. 
Jun 20 19:15:19.285073 containerd[1756]: time="2025-06-20T19:15:19.285007536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9b4ccdb4-bv6fw,Uid:27d669ba-30c6-4a02-8055-e0a275e427d6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0\"" Jun 20 19:15:19.357669 containerd[1756]: time="2025-06-20T19:15:19.357531233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-6pzql,Uid:d1c89f0f-ffeb-41ca-bd71-31c366413c04,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d\"" Jun 20 19:15:19.684560 systemd-networkd[1360]: vxlan.calico: Link UP Jun 20 19:15:19.684567 systemd-networkd[1360]: vxlan.calico: Gained carrier Jun 20 19:15:19.899517 containerd[1756]: time="2025-06-20T19:15:19.899482367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp8bp,Uid:c247f1b6-5251-4f90-8193-b5662a7bfa74,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:20.002674 systemd-networkd[1360]: cali87d53fd852a: Link UP Jun 20 19:15:20.003183 systemd-networkd[1360]: cali87d53fd852a: Gained carrier Jun 20 19:15:20.017897 containerd[1756]: 2025-06-20 19:15:19.941 [INFO][4748] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0 csi-node-driver- calico-system c247f1b6-5251-4f90-8193-b5662a7bfa74 737 0 2025-06-20 19:14:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.0-a-dde0712b19 csi-node-driver-gp8bp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali87d53fd852a [] [] }} ContainerID="6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" Namespace="calico-system" Pod="csi-node-driver-gp8bp" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-" Jun 20 19:15:20.017897 containerd[1756]: 2025-06-20 19:15:19.941 [INFO][4748] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" Namespace="calico-system" Pod="csi-node-driver-gp8bp" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0" Jun 20 19:15:20.017897 containerd[1756]: 2025-06-20 19:15:19.968 [INFO][4772] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" HandleID="k8s-pod-network.6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" Workload="ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0" Jun 20 19:15:20.018476 containerd[1756]: 2025-06-20 19:15:19.968 [INFO][4772] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" HandleID="k8s-pod-network.6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" Workload="ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cc320), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-dde0712b19", "pod":"csi-node-driver-gp8bp", "timestamp":"2025-06-20 
19:15:19.968730235 +0000 UTC"}, Hostname:"ci-4344.1.0-a-dde0712b19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:20.018476 containerd[1756]: 2025-06-20 19:15:19.968 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:20.018476 containerd[1756]: 2025-06-20 19:15:19.968 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:15:20.018476 containerd[1756]: 2025-06-20 19:15:19.968 [INFO][4772] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-dde0712b19' Jun 20 19:15:20.018476 containerd[1756]: 2025-06-20 19:15:19.974 [INFO][4772] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:20.018476 containerd[1756]: 2025-06-20 19:15:19.979 [INFO][4772] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:20.018476 containerd[1756]: 2025-06-20 19:15:19.983 [INFO][4772] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:20.018476 containerd[1756]: 2025-06-20 19:15:19.985 [INFO][4772] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:20.018476 containerd[1756]: 2025-06-20 19:15:19.987 [INFO][4772] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:20.018704 containerd[1756]: 2025-06-20 19:15:19.987 [INFO][4772] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:20.018704 containerd[1756]: 2025-06-20 19:15:19.988 [INFO][4772] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b Jun 20 19:15:20.018704 containerd[1756]: 2025-06-20 19:15:19.994 [INFO][4772] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:20.018704 containerd[1756]: 2025-06-20 19:15:19.999 [INFO][4772] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.196/26] block=192.168.120.192/26 handle="k8s-pod-network.6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:20.018704 containerd[1756]: 2025-06-20 19:15:19.999 [INFO][4772] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.196/26] handle="k8s-pod-network.6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:20.018704 containerd[1756]: 2025-06-20 19:15:19.999 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:15:20.018704 containerd[1756]: 2025-06-20 19:15:19.999 [INFO][4772] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.196/26] IPv6=[] ContainerID="6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" HandleID="k8s-pod-network.6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" Workload="ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0" Jun 20 19:15:20.018848 containerd[1756]: 2025-06-20 19:15:20.000 [INFO][4748] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" Namespace="calico-system" Pod="csi-node-driver-gp8bp" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c247f1b6-5251-4f90-8193-b5662a7bfa74", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"", Pod:"csi-node-driver-gp8bp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87d53fd852a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:20.018907 containerd[1756]: 2025-06-20 19:15:20.001 [INFO][4748] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.196/32] ContainerID="6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" Namespace="calico-system" Pod="csi-node-driver-gp8bp" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0" Jun 20 19:15:20.018907 containerd[1756]: 2025-06-20 19:15:20.001 [INFO][4748] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87d53fd852a ContainerID="6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" Namespace="calico-system" Pod="csi-node-driver-gp8bp" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0" Jun 20 19:15:20.018907 containerd[1756]: 2025-06-20 19:15:20.003 [INFO][4748] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" Namespace="calico-system" Pod="csi-node-driver-gp8bp" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0" Jun 20 19:15:20.018977 containerd[1756]: 2025-06-20 19:15:20.003 [INFO][4748] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" Namespace="calico-system" Pod="csi-node-driver-gp8bp" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c247f1b6-5251-4f90-8193-b5662a7bfa74", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b", Pod:"csi-node-driver-gp8bp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87d53fd852a", MAC:"6e:0c:84:02:3e:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:20.019733 containerd[1756]: 2025-06-20 19:15:20.014 [INFO][4748] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" Namespace="calico-system" Pod="csi-node-driver-gp8bp" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-csi--node--driver--gp8bp-eth0" Jun 20 19:15:20.062244 containerd[1756]: time="2025-06-20T19:15:20.062205216Z" level=info msg="connecting to shim 6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b" address="unix:///run/containerd/s/a43e572b0afe3d49a2812a0fefe337ac08dec2d44593d6a68d517fe60e745365" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:20.082484 systemd[1]: Started cri-containerd-6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b.scope - libcontainer container 6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b. 
Jun 20 19:15:20.102165 containerd[1756]: time="2025-06-20T19:15:20.102144784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gp8bp,Uid:c247f1b6-5251-4f90-8193-b5662a7bfa74,Namespace:calico-system,Attempt:0,} returns sandbox id \"6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b\"" Jun 20 19:15:20.297482 systemd-networkd[1360]: cali83224c30329: Gained IPv6LL Jun 20 19:15:20.348821 containerd[1756]: time="2025-06-20T19:15:20.348793304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:20.351291 containerd[1756]: time="2025-06-20T19:15:20.351259482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 20 19:15:20.355110 containerd[1756]: time="2025-06-20T19:15:20.355079975Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:20.358782 containerd[1756]: time="2025-06-20T19:15:20.358729846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:20.359287 containerd[1756]: time="2025-06-20T19:15:20.359210411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 1.752574624s" Jun 20 19:15:20.359287 containerd[1756]: time="2025-06-20T19:15:20.359233279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 20 19:15:20.360033 containerd[1756]: time="2025-06-20T19:15:20.359986208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:15:20.365735 containerd[1756]: time="2025-06-20T19:15:20.365482307Z" level=info msg="CreateContainer within sandbox \"3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 19:15:20.384851 containerd[1756]: time="2025-06-20T19:15:20.384071968Z" level=info msg="Container f629052f12f6308c00d9e811124322448c5c28641f790ce6a63a9e7d211b7cd2: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:20.398341 containerd[1756]: time="2025-06-20T19:15:20.398316929Z" level=info msg="CreateContainer within sandbox \"3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f629052f12f6308c00d9e811124322448c5c28641f790ce6a63a9e7d211b7cd2\"" Jun 20 19:15:20.398768 containerd[1756]: time="2025-06-20T19:15:20.398724452Z" level=info msg="StartContainer for \"f629052f12f6308c00d9e811124322448c5c28641f790ce6a63a9e7d211b7cd2\"" Jun 20 19:15:20.400050 containerd[1756]: time="2025-06-20T19:15:20.400013547Z" level=info msg="connecting to shim f629052f12f6308c00d9e811124322448c5c28641f790ce6a63a9e7d211b7cd2" address="unix:///run/containerd/s/95b55270bd4b818a22abe5aeb06fa8945ac6595b295a7238958206f961bfd6b1" protocol=ttrpc version=3 Jun 20 19:15:20.414493 systemd[1]: Started 
cri-containerd-f629052f12f6308c00d9e811124322448c5c28641f790ce6a63a9e7d211b7cd2.scope - libcontainer container f629052f12f6308c00d9e811124322448c5c28641f790ce6a63a9e7d211b7cd2. Jun 20 19:15:20.460592 containerd[1756]: time="2025-06-20T19:15:20.460552656Z" level=info msg="StartContainer for \"f629052f12f6308c00d9e811124322448c5c28641f790ce6a63a9e7d211b7cd2\" returns successfully" Jun 20 19:15:20.809449 systemd-networkd[1360]: cali8ce35e2e942: Gained IPv6LL Jun 20 19:15:20.809740 systemd-networkd[1360]: cali30c63536c46: Gained IPv6LL Jun 20 19:15:21.449486 systemd-networkd[1360]: vxlan.calico: Gained IPv6LL Jun 20 19:15:21.577529 systemd-networkd[1360]: cali87d53fd852a: Gained IPv6LL Jun 20 19:15:23.190297 containerd[1756]: time="2025-06-20T19:15:23.190252282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:23.193046 containerd[1756]: time="2025-06-20T19:15:23.193009952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 20 19:15:23.198376 containerd[1756]: time="2025-06-20T19:15:23.198327705Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:23.203364 containerd[1756]: time="2025-06-20T19:15:23.203303433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:23.203858 containerd[1756]: time="2025-06-20T19:15:23.203763687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 2.84375233s" Jun 20 19:15:23.203858 containerd[1756]: time="2025-06-20T19:15:23.203790585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:15:23.205121 containerd[1756]: time="2025-06-20T19:15:23.204775335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 19:15:23.210201 containerd[1756]: time="2025-06-20T19:15:23.210166571Z" level=info msg="CreateContainer within sandbox \"24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:15:23.229450 containerd[1756]: time="2025-06-20T19:15:23.229429108Z" level=info msg="Container dcf0f0dcde46d5fb1a8c751c1a2cdc851d4cb99d4503837b2c727982021d990d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:23.246523 containerd[1756]: time="2025-06-20T19:15:23.246498512Z" level=info msg="CreateContainer within sandbox \"24f61458fbe5dbe87fb70d8edecae34858b4add7eb9d8d78fd14a219ccef42d0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dcf0f0dcde46d5fb1a8c751c1a2cdc851d4cb99d4503837b2c727982021d990d\"" Jun 20 19:15:23.247373 containerd[1756]: time="2025-06-20T19:15:23.246905837Z" level=info msg="StartContainer for \"dcf0f0dcde46d5fb1a8c751c1a2cdc851d4cb99d4503837b2c727982021d990d\"" Jun 20 19:15:23.248007 containerd[1756]: 
time="2025-06-20T19:15:23.247978400Z" level=info msg="connecting to shim dcf0f0dcde46d5fb1a8c751c1a2cdc851d4cb99d4503837b2c727982021d990d" address="unix:///run/containerd/s/37af6c7a2cd2940fb483b814a8e019f2c600f3278632f9ad7cfe40738dfc4f9b" protocol=ttrpc version=3 Jun 20 19:15:23.267499 systemd[1]: Started cri-containerd-dcf0f0dcde46d5fb1a8c751c1a2cdc851d4cb99d4503837b2c727982021d990d.scope - libcontainer container dcf0f0dcde46d5fb1a8c751c1a2cdc851d4cb99d4503837b2c727982021d990d. Jun 20 19:15:23.306660 containerd[1756]: time="2025-06-20T19:15:23.306412126Z" level=info msg="StartContainer for \"dcf0f0dcde46d5fb1a8c751c1a2cdc851d4cb99d4503837b2c727982021d990d\" returns successfully" Jun 20 19:15:25.034958 kubelet[3148]: I0620 19:15:25.034931 3148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:15:25.620253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430986978.mount: Deactivated successfully. Jun 20 19:15:26.324028 containerd[1756]: time="2025-06-20T19:15:26.323993226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:26.327500 containerd[1756]: time="2025-06-20T19:15:26.327473095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 20 19:15:26.330651 containerd[1756]: time="2025-06-20T19:15:26.330604246Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:26.334303 containerd[1756]: time="2025-06-20T19:15:26.334177815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:26.334670 containerd[1756]: time="2025-06-20T19:15:26.334651082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 3.129847876s" Jun 20 19:15:26.334720 containerd[1756]: time="2025-06-20T19:15:26.334679779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 20 19:15:26.335552 containerd[1756]: time="2025-06-20T19:15:26.335401197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 19:15:26.340865 containerd[1756]: time="2025-06-20T19:15:26.340842038Z" level=info msg="CreateContainer within sandbox \"4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 19:15:26.361153 containerd[1756]: time="2025-06-20T19:15:26.360442693Z" level=info msg="Container 4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:26.378974 containerd[1756]: time="2025-06-20T19:15:26.378950127Z" level=info msg="CreateContainer within sandbox \"4d12e1055e0b4ec169f3a5a6f9966f59bcb68aa356f80ec34045e9c40f4cc28d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id 
\"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\"" Jun 20 19:15:26.380159 containerd[1756]: time="2025-06-20T19:15:26.379339788Z" level=info msg="StartContainer for \"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\"" Jun 20 19:15:26.380591 containerd[1756]: time="2025-06-20T19:15:26.380568446Z" level=info msg="connecting to shim 4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0" address="unix:///run/containerd/s/9b67637c47b425bcc0016e586cbb503791fb554c0c54ddb57b4533cff2af0f58" protocol=ttrpc version=3 Jun 20 19:15:26.398507 systemd[1]: Started cri-containerd-4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0.scope - libcontainer container 4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0. Jun 20 19:15:26.439704 containerd[1756]: time="2025-06-20T19:15:26.439683123Z" level=info msg="StartContainer for \"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\" returns successfully" Jun 20 19:15:26.877319 kubelet[3148]: I0620 19:15:26.877295 3148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:15:26.894303 kubelet[3148]: I0620 19:15:26.894050 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-bv6fw" podStartSLOduration=33.981443082 podStartE2EDuration="37.894035824s" podCreationTimestamp="2025-06-20 19:14:49 +0000 UTC" firstStartedPulling="2025-06-20 19:15:19.291833219 +0000 UTC m=+44.473740513" lastFinishedPulling="2025-06-20 19:15:23.204425956 +0000 UTC m=+48.386333255" observedRunningTime="2025-06-20 19:15:24.045617201 +0000 UTC m=+49.227524509" watchObservedRunningTime="2025-06-20 19:15:26.894035824 +0000 UTC m=+52.075943133" Jun 20 19:15:26.900090 containerd[1756]: time="2025-06-20T19:15:26.899975278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mt5w6,Uid:59e5d8d4-094b-4a27-8aa7-d10b49152c4d,Namespace:kube-system,Attempt:0,}" Jun 20 19:15:27.019301 systemd-networkd[1360]: cali3a60392672d: Link UP Jun 20 19:15:27.020205 systemd-networkd[1360]: cali3a60392672d: Gained carrier Jun 20 19:15:27.035092 containerd[1756]: 2025-06-20 19:15:26.947 [INFO][4984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0 coredns-674b8bbfcf- kube-system 59e5d8d4-094b-4a27-8aa7-d10b49152c4d 846 0 2025-06-20 19:14:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-dde0712b19 coredns-674b8bbfcf-mt5w6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3a60392672d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt5w6" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-" Jun 20 19:15:27.035092 containerd[1756]: 2025-06-20 19:15:26.947 [INFO][4984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt5w6" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0" Jun 20 19:15:27.035092 containerd[1756]: 2025-06-20 19:15:26.977 [INFO][4996] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" HandleID="k8s-pod-network.2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" Workload="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0" Jun 20 19:15:27.035252 containerd[1756]: 2025-06-20 19:15:26.977 [INFO][4996] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" HandleID="k8s-pod-network.2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" Workload="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-dde0712b19", "pod":"coredns-674b8bbfcf-mt5w6", "timestamp":"2025-06-20 19:15:26.977065777 +0000 UTC"}, Hostname:"ci-4344.1.0-a-dde0712b19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:27.035252 containerd[1756]: 2025-06-20 19:15:26.977 [INFO][4996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:27.035252 containerd[1756]: 2025-06-20 19:15:26.977 [INFO][4996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:15:27.035252 containerd[1756]: 2025-06-20 19:15:26.977 [INFO][4996] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-dde0712b19' Jun 20 19:15:27.035252 containerd[1756]: 2025-06-20 19:15:26.983 [INFO][4996] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:27.035252 containerd[1756]: 2025-06-20 19:15:26.987 [INFO][4996] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:27.035252 containerd[1756]: 2025-06-20 19:15:26.991 [INFO][4996] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:27.035252 containerd[1756]: 2025-06-20 19:15:26.993 [INFO][4996] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:27.035252 containerd[1756]: 2025-06-20 19:15:26.995 [INFO][4996] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:27.035582 containerd[1756]: 2025-06-20 19:15:26.995 [INFO][4996] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:27.035582 containerd[1756]: 2025-06-20 19:15:26.996 [INFO][4996] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743 Jun 20 19:15:27.035582 containerd[1756]: 2025-06-20 19:15:27.002 [INFO][4996] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:27.035582 containerd[1756]: 2025-06-20 19:15:27.013 [INFO][4996] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.197/26] block=192.168.120.192/26 
handle="k8s-pod-network.2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:27.035582 containerd[1756]: 2025-06-20 19:15:27.013 [INFO][4996] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.197/26] handle="k8s-pod-network.2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:27.035582 containerd[1756]: 2025-06-20 19:15:27.013 [INFO][4996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:15:27.035582 containerd[1756]: 2025-06-20 19:15:27.013 [INFO][4996] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.197/26] IPv6=[] ContainerID="2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" HandleID="k8s-pod-network.2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" Workload="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0" Jun 20 19:15:27.035746 containerd[1756]: 2025-06-20 19:15:27.015 [INFO][4984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt5w6" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"59e5d8d4-094b-4a27-8aa7-d10b49152c4d", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"", Pod:"coredns-674b8bbfcf-mt5w6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a60392672d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:27.035746 containerd[1756]: 2025-06-20 19:15:27.016 [INFO][4984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.197/32] ContainerID="2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt5w6" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0" Jun 20 19:15:27.035746 containerd[1756]: 2025-06-20 19:15:27.016 [INFO][4984] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to cali3a60392672d ContainerID="2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt5w6" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0" Jun 20 19:15:27.035746 containerd[1756]: 2025-06-20 19:15:27.017 [INFO][4984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt5w6" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0" Jun 20 19:15:27.035746 containerd[1756]: 2025-06-20 19:15:27.017 [INFO][4984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt5w6" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"59e5d8d4-094b-4a27-8aa7-d10b49152c4d", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743", Pod:"coredns-674b8bbfcf-mt5w6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a60392672d", MAC:"2a:0e:73:83:eb:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:27.035746 containerd[1756]: 2025-06-20 19:15:27.029 [INFO][4984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt5w6" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--mt5w6-eth0" Jun 20 19:15:27.065426 kubelet[3148]: I0620 19:15:27.064910 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-6pzql" podStartSLOduration=29.089938669 podStartE2EDuration="36.064893523s" podCreationTimestamp="2025-06-20 19:14:51 +0000 UTC" 
firstStartedPulling="2025-06-20 19:15:19.360315999 +0000 UTC m=+44.542223302" lastFinishedPulling="2025-06-20 19:15:26.335270849 +0000 UTC m=+51.517178156" observedRunningTime="2025-06-20 19:15:27.064134936 +0000 UTC m=+52.246042240" watchObservedRunningTime="2025-06-20 19:15:27.064893523 +0000 UTC m=+52.246800835" Jun 20 19:15:27.085216 containerd[1756]: time="2025-06-20T19:15:27.084864727Z" level=info msg="connecting to shim 2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743" address="unix:///run/containerd/s/d3abb945ca69bacfaa467420d60fab79135780e96d73695bdd899249508e9fa9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:27.113742 systemd[1]: Started cri-containerd-2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743.scope - libcontainer container 2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743. Jun 20 19:15:27.140734 containerd[1756]: time="2025-06-20T19:15:27.140655081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\" id:\"4ce4c7ec99c21ec827c889168d729d7ac60680727961b2e6658215579ccdcb51\" pid:5028 exit_status:1 exited_at:{seconds:1750446927 nanos:140019935}" Jun 20 19:15:27.156337 containerd[1756]: time="2025-06-20T19:15:27.156302876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mt5w6,Uid:59e5d8d4-094b-4a27-8aa7-d10b49152c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743\"" Jun 20 19:15:27.163190 containerd[1756]: time="2025-06-20T19:15:27.163132021Z" level=info msg="CreateContainer within sandbox \"2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:15:27.179076 containerd[1756]: time="2025-06-20T19:15:27.179055150Z" level=info msg="Container cb77faef6bdfdfbd38876c7f829f3afc16ad9ae2d37c1f3799c3b3cfb988257b: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:27.189817 containerd[1756]: time="2025-06-20T19:15:27.189794015Z" level=info msg="CreateContainer within sandbox \"2e40d6be62085c92f0c81bb3b2ecef3d4d52cddb3c167cdb386fd1781e2ef743\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb77faef6bdfdfbd38876c7f829f3afc16ad9ae2d37c1f3799c3b3cfb988257b\"" Jun 20 19:15:27.190178 containerd[1756]: time="2025-06-20T19:15:27.190160756Z" level=info msg="StartContainer for \"cb77faef6bdfdfbd38876c7f829f3afc16ad9ae2d37c1f3799c3b3cfb988257b\"" Jun 20 19:15:27.191102 containerd[1756]: time="2025-06-20T19:15:27.191055577Z" level=info msg="connecting to shim cb77faef6bdfdfbd38876c7f829f3afc16ad9ae2d37c1f3799c3b3cfb988257b" address="unix:///run/containerd/s/d3abb945ca69bacfaa467420d60fab79135780e96d73695bdd899249508e9fa9" protocol=ttrpc version=3 Jun 20 19:15:27.204488 systemd[1]: Started cri-containerd-cb77faef6bdfdfbd38876c7f829f3afc16ad9ae2d37c1f3799c3b3cfb988257b.scope - libcontainer container cb77faef6bdfdfbd38876c7f829f3afc16ad9ae2d37c1f3799c3b3cfb988257b. 
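The pod_startup_latency_tracker entries above report two figures per pod: podStartE2EDuration, measured from the pod's creationTimestamp to the time the running pod was observed, and podStartSLOduration, which additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below re-derives both from the timestamps quoted in the calico-apiserver-6d9b4ccdb4-bv6fw entry and reproduces the logged 37.894035824s and ~33.981s values; the last few nanoseconds differ because the kubelet measures the pull window on its monotonic clock (the m=+ offsets in the log).

```go
// Re-derive podStartE2EDuration and podStartSLOduration from the timestamps
// in the calico-apiserver-6d9b4ccdb4-bv6fw startup-latency entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-06-20 19:14:49 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-06-20 19:15:19.291833219 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-06-20 19:15:23.204425956 +0000 UTC")  // lastFinishedPulling
	observed := parse("2025-06-20 19:15:26.894035824 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // 37.894035824s in the log
	slo := e2e - lastPull.Sub(firstPull) // ~33.981443s in the log
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```

For coredns-674b8bbfcf-mt5w6 further down, both pull timestamps are the zero time (no pull was needed), so the SLO and E2E durations coincide at 48.082479386s.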
Jun 20 19:15:27.226162 containerd[1756]: time="2025-06-20T19:15:27.226100202Z" level=info msg="StartContainer for \"cb77faef6bdfdfbd38876c7f829f3afc16ad9ae2d37c1f3799c3b3cfb988257b\" returns successfully" Jun 20 19:15:27.900268 containerd[1756]: time="2025-06-20T19:15:27.900209178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9b4ccdb4-szcct,Uid:d505d167-e14e-4e9c-96db-fe9e2153cb77,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:15:28.044204 systemd-networkd[1360]: cali3c0cce7d67c: Link UP Jun 20 19:15:28.044725 systemd-networkd[1360]: cali3c0cce7d67c: Gained carrier Jun 20 19:15:28.047297 containerd[1756]: time="2025-06-20T19:15:28.047164604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:28.050250 containerd[1756]: time="2025-06-20T19:15:28.049988211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 20 19:15:28.056155 containerd[1756]: time="2025-06-20T19:15:28.054934331Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:28.060733 containerd[1756]: time="2025-06-20T19:15:28.060710597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:28.061735 containerd[1756]: time="2025-06-20T19:15:28.061408526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 1.725981994s" Jun 20 19:15:28.061839 containerd[1756]: time="2025-06-20T19:15:28.061826349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" Jun 20 19:15:28.063612 containerd[1756]: time="2025-06-20T19:15:28.063583384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:27.954 [INFO][5122] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0 calico-apiserver-6d9b4ccdb4- calico-apiserver d505d167-e14e-4e9c-96db-fe9e2153cb77 851 0 2025-06-20 19:14:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d9b4ccdb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-dde0712b19 calico-apiserver-6d9b4ccdb4-szcct eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3c0cce7d67c [] [] }} ContainerID="ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-szcct" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:27.954 [INFO][5122] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-szcct" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:27.992 [INFO][5135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" HandleID="k8s-pod-network.ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" Workload="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:27.992 [INFO][5135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" HandleID="k8s-pod-network.ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" Workload="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-dde0712b19", "pod":"calico-apiserver-6d9b4ccdb4-szcct", "timestamp":"2025-06-20 19:15:27.992345043 +0000 UTC"}, Hostname:"ci-4344.1.0-a-dde0712b19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:27.992 [INFO][5135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:27.993 [INFO][5135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:27.993 [INFO][5135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-dde0712b19' Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.000 [INFO][5135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.005 [INFO][5135] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.014 [INFO][5135] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.016 [INFO][5135] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.019 [INFO][5135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.019 [INFO][5135] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.021 [INFO][5135] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2 Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.029 [INFO][5135] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.039 [INFO][5135] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.198/26] block=192.168.120.192/26 handle="k8s-pod-network.ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.039 [INFO][5135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.198/26] handle="k8s-pod-network.ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.039 [INFO][5135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:15:28.070274 containerd[1756]: 2025-06-20 19:15:28.039 [INFO][5135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.198/26] IPv6=[] ContainerID="ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" HandleID="k8s-pod-network.ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" Workload="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0" Jun 20 19:15:28.070786 containerd[1756]: 2025-06-20 19:15:28.040 [INFO][5122] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-szcct" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0", GenerateName:"calico-apiserver-6d9b4ccdb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d505d167-e14e-4e9c-96db-fe9e2153cb77", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d9b4ccdb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"", Pod:"calico-apiserver-6d9b4ccdb4-szcct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c0cce7d67c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:28.070786 containerd[1756]: 2025-06-20 19:15:28.040 [INFO][5122] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.198/32] ContainerID="ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-szcct" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0" Jun 20 19:15:28.070786 containerd[1756]: 2025-06-20 19:15:28.040 [INFO][5122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c0cce7d67c ContainerID="ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-szcct" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0" Jun 20 19:15:28.070786 containerd[1756]: 2025-06-20 19:15:28.045 [INFO][5122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-szcct" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0" Jun 20 19:15:28.070786 containerd[1756]: 2025-06-20 19:15:28.045 
[INFO][5122] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-szcct" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0", GenerateName:"calico-apiserver-6d9b4ccdb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d505d167-e14e-4e9c-96db-fe9e2153cb77", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d9b4ccdb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2", Pod:"calico-apiserver-6d9b4ccdb4-szcct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c0cce7d67c", MAC:"46:6a:08:6d:12:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:28.070786 containerd[1756]: 2025-06-20 19:15:28.065 [INFO][5122] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" Namespace="calico-apiserver" Pod="calico-apiserver-6d9b4ccdb4-szcct" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--apiserver--6d9b4ccdb4--szcct-eth0" Jun 20 19:15:28.071630 containerd[1756]: time="2025-06-20T19:15:28.071602853Z" level=info msg="CreateContainer within sandbox \"6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 19:15:28.082763 kubelet[3148]: I0620 19:15:28.082498 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mt5w6" podStartSLOduration=48.082479386 podStartE2EDuration="48.082479386s" podCreationTimestamp="2025-06-20 19:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:15:28.080961399 +0000 UTC m=+53.262868706" watchObservedRunningTime="2025-06-20 19:15:28.082479386 +0000 UTC m=+53.264386691" Jun 20 19:15:28.100643 containerd[1756]: time="2025-06-20T19:15:28.100611161Z" level=info msg="Container 35d299dca77d8406d0e26610d3cb682a1dec3fa2de323c76cd59d99ef97005e1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:28.110774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352886732.mount: Deactivated successfully. 
Jun 20 19:15:28.143045 containerd[1756]: time="2025-06-20T19:15:28.143015644Z" level=info msg="CreateContainer within sandbox \"6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"35d299dca77d8406d0e26610d3cb682a1dec3fa2de323c76cd59d99ef97005e1\"" Jun 20 19:15:28.144390 containerd[1756]: time="2025-06-20T19:15:28.143665660Z" level=info msg="StartContainer for \"35d299dca77d8406d0e26610d3cb682a1dec3fa2de323c76cd59d99ef97005e1\"" Jun 20 19:15:28.145205 containerd[1756]: time="2025-06-20T19:15:28.145179183Z" level=info msg="connecting to shim 35d299dca77d8406d0e26610d3cb682a1dec3fa2de323c76cd59d99ef97005e1" address="unix:///run/containerd/s/a43e572b0afe3d49a2812a0fefe337ac08dec2d44593d6a68d517fe60e745365" protocol=ttrpc version=3 Jun 20 19:15:28.157425 containerd[1756]: time="2025-06-20T19:15:28.156859981Z" level=info msg="connecting to shim ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2" address="unix:///run/containerd/s/6312c4d9d2e1e4bed6804d00a003b8bb920de6bddbd64e78bbffbf43a30adf7e" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:28.173518 systemd[1]: Started cri-containerd-35d299dca77d8406d0e26610d3cb682a1dec3fa2de323c76cd59d99ef97005e1.scope - libcontainer container 35d299dca77d8406d0e26610d3cb682a1dec3fa2de323c76cd59d99ef97005e1. Jun 20 19:15:28.197032 systemd[1]: Started cri-containerd-ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2.scope - libcontainer container ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2. Jun 20 19:15:28.229055 containerd[1756]: time="2025-06-20T19:15:28.228766042Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\" id:\"57585a0fc0e4d08ba6396533c47df20f9ea54b602c84496f885eb58d53454f42\" pid:5161 exit_status:1 exited_at:{seconds:1750446928 nanos:228565068}" Jun 20 19:15:28.256475 containerd[1756]: time="2025-06-20T19:15:28.256453416Z" level=info msg="StartContainer for \"35d299dca77d8406d0e26610d3cb682a1dec3fa2de323c76cd59d99ef97005e1\" returns successfully" Jun 20 19:15:28.275971 containerd[1756]: time="2025-06-20T19:15:28.275943630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d9b4ccdb4-szcct,Uid:d505d167-e14e-4e9c-96db-fe9e2153cb77,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2\"" Jun 20 19:15:28.283732 containerd[1756]: time="2025-06-20T19:15:28.283695198Z" level=info msg="CreateContainer within sandbox \"ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:15:28.301010 containerd[1756]: time="2025-06-20T19:15:28.300885053Z" level=info msg="Container 8c3380ffaeea686c0fec9eeeeb5e6d9ef6afd17963dcbd47a7788124903b8f5b: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:28.318659 containerd[1756]: time="2025-06-20T19:15:28.318636261Z" level=info msg="CreateContainer within sandbox \"ce7e75e24b7f8d895105dd2fd569d0d500f004d828cdc7a0219762c87f26c4b2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8c3380ffaeea686c0fec9eeeeb5e6d9ef6afd17963dcbd47a7788124903b8f5b\"" Jun 20 19:15:28.319246 containerd[1756]: time="2025-06-20T19:15:28.319222338Z" level=info msg="StartContainer for \"8c3380ffaeea686c0fec9eeeeb5e6d9ef6afd17963dcbd47a7788124903b8f5b\"" Jun 20 19:15:28.320317 containerd[1756]: 
time="2025-06-20T19:15:28.320292881Z" level=info msg="connecting to shim 8c3380ffaeea686c0fec9eeeeb5e6d9ef6afd17963dcbd47a7788124903b8f5b" address="unix:///run/containerd/s/6312c4d9d2e1e4bed6804d00a003b8bb920de6bddbd64e78bbffbf43a30adf7e" protocol=ttrpc version=3 Jun 20 19:15:28.335634 systemd[1]: Started cri-containerd-8c3380ffaeea686c0fec9eeeeb5e6d9ef6afd17963dcbd47a7788124903b8f5b.scope - libcontainer container 8c3380ffaeea686c0fec9eeeeb5e6d9ef6afd17963dcbd47a7788124903b8f5b. Jun 20 19:15:28.397107 containerd[1756]: time="2025-06-20T19:15:28.397077784Z" level=info msg="StartContainer for \"8c3380ffaeea686c0fec9eeeeb5e6d9ef6afd17963dcbd47a7788124903b8f5b\" returns successfully" Jun 20 19:15:28.809499 systemd-networkd[1360]: cali3a60392672d: Gained IPv6LL Jun 20 19:15:28.899531 containerd[1756]: time="2025-06-20T19:15:28.899221179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9d94d467-vmsw8,Uid:df054429-1b56-4c94-93cb-890bfe97c219,Namespace:calico-system,Attempt:0,}" Jun 20 19:15:29.017739 systemd-networkd[1360]: cali46c81d1d673: Link UP Jun 20 19:15:29.018527 systemd-networkd[1360]: cali46c81d1d673: Gained carrier Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.943 [INFO][5288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0 calico-kube-controllers-7f9d94d467- calico-system df054429-1b56-4c94-93cb-890bfe97c219 845 0 2025-06-20 19:14:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f9d94d467 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.0-a-dde0712b19 calico-kube-controllers-7f9d94d467-vmsw8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali46c81d1d673 [] [] }} ContainerID="341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" Namespace="calico-system" Pod="calico-kube-controllers-7f9d94d467-vmsw8" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.943 [INFO][5288] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" Namespace="calico-system" Pod="calico-kube-controllers-7f9d94d467-vmsw8" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.966 [INFO][5300] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" HandleID="k8s-pod-network.341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" Workload="ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.966 [INFO][5300] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" HandleID="k8s-pod-network.341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" Workload="ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f020), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-dde0712b19", "pod":"calico-kube-controllers-7f9d94d467-vmsw8", "timestamp":"2025-06-20 19:15:28.966310149 +0000 UTC"}, Hostname:"ci-4344.1.0-a-dde0712b19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.966 [INFO][5300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.966 [INFO][5300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.966 [INFO][5300] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-dde0712b19' Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.972 [INFO][5300] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.978 [INFO][5300] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.984 [INFO][5300] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.991 [INFO][5300] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.993 [INFO][5300] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.993 [INFO][5300] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.994 [INFO][5300] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699 Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:28.998 [INFO][5300] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:29.008 [INFO][5300] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.199/26] block=192.168.120.192/26 handle="k8s-pod-network.341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:29.008 [INFO][5300] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.199/26] handle="k8s-pod-network.341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:29.008 [INFO][5300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
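By this point the same block affinity has been reloaded four times in this section and has handed out four consecutive addresses. The short check below, using only values quoted in the log, confirms they all sit inside 192.168.120.192/26.

```go
// Verify that the addresses assigned across these IPAM runs fall inside the
// single block affinity the log keeps reloading, 192.168.120.192/26.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, _ := net.ParseCIDR("192.168.120.192/26")
	for _, a := range []string{
		"192.168.120.196", // csi-node-driver-gp8bp
		"192.168.120.197", // coredns-674b8bbfcf-mt5w6
		"192.168.120.198", // calico-apiserver-6d9b4ccdb4-szcct
		"192.168.120.199", // calico-kube-controllers-7f9d94d467-vmsw8
	} {
		fmt.Printf("%s in %s: %v\n", a, block, block.Contains(net.ParseIP(a)))
	}
}
```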
Jun 20 19:15:29.043542 containerd[1756]: 2025-06-20 19:15:29.008 [INFO][5300] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.199/26] IPv6=[] ContainerID="341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" HandleID="k8s-pod-network.341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" Workload="ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0" Jun 20 19:15:29.044257 containerd[1756]: 2025-06-20 19:15:29.011 [INFO][5288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" Namespace="calico-system" Pod="calico-kube-controllers-7f9d94d467-vmsw8" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0", GenerateName:"calico-kube-controllers-7f9d94d467-", Namespace:"calico-system", SelfLink:"", UID:"df054429-1b56-4c94-93cb-890bfe97c219", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f9d94d467", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"", Pod:"calico-kube-controllers-7f9d94d467-vmsw8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46c81d1d673", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:29.044257 containerd[1756]: 2025-06-20 19:15:29.012 [INFO][5288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.199/32] ContainerID="341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" Namespace="calico-system" Pod="calico-kube-controllers-7f9d94d467-vmsw8" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0" Jun 20 19:15:29.044257 containerd[1756]: 2025-06-20 19:15:29.012 [INFO][5288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46c81d1d673 ContainerID="341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" Namespace="calico-system" Pod="calico-kube-controllers-7f9d94d467-vmsw8" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0" Jun 20 19:15:29.044257 containerd[1756]: 2025-06-20 19:15:29.022 [INFO][5288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" Namespace="calico-system" Pod="calico-kube-controllers-7f9d94d467-vmsw8" 
WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0" Jun 20 19:15:29.044257 containerd[1756]: 2025-06-20 19:15:29.022 [INFO][5288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" Namespace="calico-system" Pod="calico-kube-controllers-7f9d94d467-vmsw8" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0", GenerateName:"calico-kube-controllers-7f9d94d467-", Namespace:"calico-system", SelfLink:"", UID:"df054429-1b56-4c94-93cb-890bfe97c219", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f9d94d467", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699", Pod:"calico-kube-controllers-7f9d94d467-vmsw8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46c81d1d673", MAC:"0a:f3:6f:92:e4:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:29.044257 containerd[1756]: 2025-06-20 19:15:29.039 [INFO][5288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" Namespace="calico-system" Pod="calico-kube-controllers-7f9d94d467-vmsw8" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-calico--kube--controllers--7f9d94d467--vmsw8-eth0" Jun 20 19:15:29.084996 kubelet[3148]: I0620 19:15:29.084388 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d9b4ccdb4-szcct" podStartSLOduration=40.084372413 podStartE2EDuration="40.084372413s" podCreationTimestamp="2025-06-20 19:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:15:29.081409433 +0000 UTC m=+54.263316738" watchObservedRunningTime="2025-06-20 19:15:29.084372413 +0000 UTC m=+54.266279721" Jun 20 19:15:29.111363 containerd[1756]: time="2025-06-20T19:15:29.111272525Z" level=info msg="connecting to shim 341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699" address="unix:///run/containerd/s/b5a32b3b9a5855a9ecfeccb8683b47a63e25e7852e06edb46135ae1aec27c49c" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:29.145792 systemd[1]: Started 
cri-containerd-341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699.scope - libcontainer container 341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699. Jun 20 19:15:29.194410 systemd-networkd[1360]: cali3c0cce7d67c: Gained IPv6LL Jun 20 19:15:29.254836 containerd[1756]: time="2025-06-20T19:15:29.254812471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\" id:\"721c257e5885aaf4449d0e1e5c4178f13acf139a88fe87cebb26a94ecb49fec1\" pid:5326 exit_status:1 exited_at:{seconds:1750446929 nanos:246115157}" Jun 20 19:15:29.594003 containerd[1756]: time="2025-06-20T19:15:29.593894747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9d94d467-vmsw8,Uid:df054429-1b56-4c94-93cb-890bfe97c219,Namespace:calico-system,Attempt:0,} returns sandbox id \"341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699\"" Jun 20 19:15:30.067280 kubelet[3148]: I0620 19:15:30.067204 3148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:15:30.458125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100121373.mount: Deactivated successfully. Jun 20 19:15:30.899965 containerd[1756]: time="2025-06-20T19:15:30.899834014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-72j62,Uid:4dbd8824-5687-4981-a20a-4e1f5b33a154,Namespace:kube-system,Attempt:0,}" Jun 20 19:15:30.921555 systemd-networkd[1360]: cali46c81d1d673: Gained IPv6LL Jun 20 19:15:30.981769 containerd[1756]: time="2025-06-20T19:15:30.981677599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:30.984291 containerd[1756]: time="2025-06-20T19:15:30.984244683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 20 19:15:30.988579 containerd[1756]: time="2025-06-20T19:15:30.988555710Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:30.993861 containerd[1756]: time="2025-06-20T19:15:30.993839473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:30.995052 containerd[1756]: time="2025-06-20T19:15:30.995029758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 2.931320132s" Jun 20 19:15:30.995188 containerd[1756]: time="2025-06-20T19:15:30.995129294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 20 19:15:30.996470 containerd[1756]: time="2025-06-20T19:15:30.995815806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 19:15:31.001949 containerd[1756]: time="2025-06-20T19:15:31.001930220Z" level=info msg="CreateContainer within sandbox 
\"3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 19:15:31.021997 containerd[1756]: time="2025-06-20T19:15:31.020547423Z" level=info msg="Container c52324b856dda30d4e1df2ea0d9a3268ed9660eac7f7474278134a75372e7ed3: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:31.035889 systemd-networkd[1360]: cali9fc2c848be6: Link UP Jun 20 19:15:31.036004 systemd-networkd[1360]: cali9fc2c848be6: Gained carrier Jun 20 19:15:31.051204 containerd[1756]: time="2025-06-20T19:15:31.050773018Z" level=info msg="CreateContainer within sandbox \"3008c68090e34f5c9443b72a73ee4080f753b4f5d4f239f82017cb963f641fd5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c52324b856dda30d4e1df2ea0d9a3268ed9660eac7f7474278134a75372e7ed3\"" Jun 20 19:15:31.051733 containerd[1756]: time="2025-06-20T19:15:31.051503978Z" level=info msg="StartContainer for \"c52324b856dda30d4e1df2ea0d9a3268ed9660eac7f7474278134a75372e7ed3\"" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:30.971 [INFO][5397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0 coredns-674b8bbfcf- kube-system 4dbd8824-5687-4981-a20a-4e1f5b33a154 848 0 2025-06-20 19:14:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-dde0712b19 coredns-674b8bbfcf-72j62 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9fc2c848be6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" Namespace="kube-system" Pod="coredns-674b8bbfcf-72j62" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:30.971 [INFO][5397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" Namespace="kube-system" Pod="coredns-674b8bbfcf-72j62" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:30.993 [INFO][5413] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" HandleID="k8s-pod-network.88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" Workload="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:30.993 [INFO][5413] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" HandleID="k8s-pod-network.88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" Workload="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-dde0712b19", "pod":"coredns-674b8bbfcf-72j62", "timestamp":"2025-06-20 19:15:30.993239624 +0000 UTC"}, Hostname:"ci-4344.1.0-a-dde0712b19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:30.993 [INFO][5413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:30.993 [INFO][5413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:30.993 [INFO][5413] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-dde0712b19' Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.001 [INFO][5413] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.005 [INFO][5413] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.008 [INFO][5413] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.009 [INFO][5413] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.011 [INFO][5413] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.011 [INFO][5413] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.013 [INFO][5413] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08 Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.020 [INFO][5413] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.031 [INFO][5413] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.200/26] block=192.168.120.192/26 handle="k8s-pod-network.88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.031 [INFO][5413] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.200/26] handle="k8s-pod-network.88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" host="ci-4344.1.0-a-dde0712b19" Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.032 [INFO][5413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:15:31.051844 containerd[1756]: 2025-06-20 19:15:31.032 [INFO][5413] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.200/26] IPv6=[] ContainerID="88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" HandleID="k8s-pod-network.88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" Workload="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0" Jun 20 19:15:31.053630 containerd[1756]: 2025-06-20 19:15:31.033 [INFO][5397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" Namespace="kube-system" Pod="coredns-674b8bbfcf-72j62" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4dbd8824-5687-4981-a20a-4e1f5b33a154", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"", Pod:"coredns-674b8bbfcf-72j62", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fc2c848be6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:31.053630 containerd[1756]: 2025-06-20 19:15:31.033 [INFO][5397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.200/32] ContainerID="88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" Namespace="kube-system" Pod="coredns-674b8bbfcf-72j62" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0" Jun 20 19:15:31.053630 containerd[1756]: 2025-06-20 19:15:31.033 [INFO][5397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fc2c848be6 ContainerID="88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" Namespace="kube-system" Pod="coredns-674b8bbfcf-72j62" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0" Jun 20 19:15:31.053630 containerd[1756]: 2025-06-20 19:15:31.035 [INFO][5397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-72j62" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0" Jun 20 19:15:31.053630 containerd[1756]: 2025-06-20 19:15:31.035 [INFO][5397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" Namespace="kube-system" Pod="coredns-674b8bbfcf-72j62" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4dbd8824-5687-4981-a20a-4e1f5b33a154", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-dde0712b19", ContainerID:"88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08", Pod:"coredns-674b8bbfcf-72j62", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fc2c848be6", MAC:"92:5a:79:a4:d9:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:15:31.053630 containerd[1756]: 2025-06-20 19:15:31.049 [INFO][5397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" Namespace="kube-system" Pod="coredns-674b8bbfcf-72j62" WorkloadEndpoint="ci--4344.1.0--a--dde0712b19-k8s-coredns--674b8bbfcf--72j62-eth0" Jun 20 19:15:31.054291 containerd[1756]: time="2025-06-20T19:15:31.054163974Z" level=info msg="connecting to shim c52324b856dda30d4e1df2ea0d9a3268ed9660eac7f7474278134a75372e7ed3" address="unix:///run/containerd/s/95b55270bd4b818a22abe5aeb06fa8945ac6595b295a7238958206f961bfd6b1" protocol=ttrpc version=3 Jun 20 19:15:31.076615 systemd[1]: Started cri-containerd-c52324b856dda30d4e1df2ea0d9a3268ed9660eac7f7474278134a75372e7ed3.scope - libcontainer container c52324b856dda30d4e1df2ea0d9a3268ed9660eac7f7474278134a75372e7ed3. 
Jun 20 19:15:31.100115 containerd[1756]: time="2025-06-20T19:15:31.100093110Z" level=info msg="connecting to shim 88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" address="unix:///run/containerd/s/8f6698bc6e6fbdb5cf2635b13c3aeadfa032d035078bddf4096d0603583baf95" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:15:31.121514 systemd[1]: Started cri-containerd-88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08.scope - libcontainer container 88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08. Jun 20 19:15:31.132809 containerd[1756]: time="2025-06-20T19:15:31.132769327Z" level=info msg="StartContainer for \"c52324b856dda30d4e1df2ea0d9a3268ed9660eac7f7474278134a75372e7ed3\" returns successfully" Jun 20 19:15:31.168666 containerd[1756]: time="2025-06-20T19:15:31.168593684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-72j62,Uid:4dbd8824-5687-4981-a20a-4e1f5b33a154,Namespace:kube-system,Attempt:0,} returns sandbox id \"88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08\"" Jun 20 19:15:31.177191 containerd[1756]: time="2025-06-20T19:15:31.177174261Z" level=info msg="CreateContainer within sandbox \"88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:15:31.199261 containerd[1756]: time="2025-06-20T19:15:31.196695162Z" level=info msg="Container 5d81724f688c1fa5aaa7b7273c73a153f6dc116cba74fd1536cd4907c1da698a: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:31.209989 containerd[1756]: time="2025-06-20T19:15:31.209965254Z" level=info msg="CreateContainer within sandbox \"88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d81724f688c1fa5aaa7b7273c73a153f6dc116cba74fd1536cd4907c1da698a\"" Jun 20 19:15:31.210405 containerd[1756]: time="2025-06-20T19:15:31.210385145Z" level=info msg="StartContainer for \"5d81724f688c1fa5aaa7b7273c73a153f6dc116cba74fd1536cd4907c1da698a\"" Jun 20 19:15:31.211269 containerd[1756]: time="2025-06-20T19:15:31.211235361Z" level=info msg="connecting to shim 5d81724f688c1fa5aaa7b7273c73a153f6dc116cba74fd1536cd4907c1da698a" address="unix:///run/containerd/s/8f6698bc6e6fbdb5cf2635b13c3aeadfa032d035078bddf4096d0603583baf95" protocol=ttrpc version=3 Jun 20 19:15:31.230472 systemd[1]: Started cri-containerd-5d81724f688c1fa5aaa7b7273c73a153f6dc116cba74fd1536cd4907c1da698a.scope - libcontainer container 5d81724f688c1fa5aaa7b7273c73a153f6dc116cba74fd1536cd4907c1da698a. 
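
Both sandboxes above are reached through a containerd shim over a ttrpc unix socket under /run/containerd/s/. A small parsing sketch (the line format is assumed to be exactly what the journal prints above; this is not a containerd API) that extracts the container ID and shim socket from such entries:

    import re

    # Matches the 'connecting to shim <64-hex id>" address="unix://..."' entries above.
    SHIM_RE = re.compile(
        r'connecting to shim (?P<cid>[0-9a-f]{64})" address="(?P<addr>unix://[^"]+)"'
    )

    sample = (
        'time="2025-06-20T19:15:31.100093110Z" level=info '
        'msg="connecting to shim 88e48bca2551f0c23d5951155ccf72c7f0586e6492c759dce3daebe184acbb08" '
        'address="unix:///run/containerd/s/8f6698bc6e6fbdb5cf2635b13c3aeadfa032d035078bddf4096d0603583baf95" '
        'namespace=k8s.io protocol=ttrpc version=3'
    )

    m = SHIM_RE.search(sample)
    if m:
        print(m.group("cid")[:12], "->", m.group("addr"))
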
Jun 20 19:15:32.783439 containerd[1756]: time="2025-06-20T19:15:32.783333288Z" level=info msg="StartContainer for \"5d81724f688c1fa5aaa7b7273c73a153f6dc116cba74fd1536cd4907c1da698a\" returns successfully" Jun 20 19:15:32.894099 kubelet[3148]: I0620 19:15:32.894037 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-9869c7c5d-mg8rc" podStartSLOduration=2.504582219 podStartE2EDuration="14.894020571s" podCreationTimestamp="2025-06-20 19:15:18 +0000 UTC" firstStartedPulling="2025-06-20 19:15:18.606389384 +0000 UTC m=+43.788296682" lastFinishedPulling="2025-06-20 19:15:30.995827726 +0000 UTC m=+56.177735034" observedRunningTime="2025-06-20 19:15:32.892622469 +0000 UTC m=+58.074529777" watchObservedRunningTime="2025-06-20 19:15:32.894020571 +0000 UTC m=+58.075927878" Jun 20 19:15:33.033474 systemd-networkd[1360]: cali9fc2c848be6: Gained IPv6LL Jun 20 19:15:34.853037 containerd[1756]: time="2025-06-20T19:15:34.852996911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:34.855504 containerd[1756]: time="2025-06-20T19:15:34.855470353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633" Jun 20 19:15:34.879615 containerd[1756]: time="2025-06-20T19:15:34.879569926Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:34.946568 containerd[1756]: time="2025-06-20T19:15:34.944749578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:34.946568 containerd[1756]: time="2025-06-20T19:15:34.946339723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 3.950032114s" Jun 20 19:15:34.946568 containerd[1756]: time="2025-06-20T19:15:34.946513282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\"" Jun 20 19:15:34.949812 containerd[1756]: time="2025-06-20T19:15:34.949791131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 19:15:34.996434 containerd[1756]: time="2025-06-20T19:15:34.996411771Z" level=info msg="CreateContainer within sandbox \"6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 20 19:15:35.195831 containerd[1756]: time="2025-06-20T19:15:35.195756495Z" level=info msg="Container 436419504cc2a0388ba4119bd4cfb46d07447437fcd9046f612340006abe2b12: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:35.338111 containerd[1756]: time="2025-06-20T19:15:35.338088891Z" level=info msg="CreateContainer within sandbox \"6738b60babbfaf153e42c7c9d40fdd67ea79e550ecf7b2dda907b976ae3d7f3b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} 
returns container id \"436419504cc2a0388ba4119bd4cfb46d07447437fcd9046f612340006abe2b12\"" Jun 20 19:15:35.339388 containerd[1756]: time="2025-06-20T19:15:35.338542849Z" level=info msg="StartContainer for \"436419504cc2a0388ba4119bd4cfb46d07447437fcd9046f612340006abe2b12\"" Jun 20 19:15:35.339861 containerd[1756]: time="2025-06-20T19:15:35.339839050Z" level=info msg="connecting to shim 436419504cc2a0388ba4119bd4cfb46d07447437fcd9046f612340006abe2b12" address="unix:///run/containerd/s/a43e572b0afe3d49a2812a0fefe337ac08dec2d44593d6a68d517fe60e745365" protocol=ttrpc version=3 Jun 20 19:15:35.358493 systemd[1]: Started cri-containerd-436419504cc2a0388ba4119bd4cfb46d07447437fcd9046f612340006abe2b12.scope - libcontainer container 436419504cc2a0388ba4119bd4cfb46d07447437fcd9046f612340006abe2b12. Jun 20 19:15:35.445554 containerd[1756]: time="2025-06-20T19:15:35.445525967Z" level=info msg="StartContainer for \"436419504cc2a0388ba4119bd4cfb46d07447437fcd9046f612340006abe2b12\" returns successfully" Jun 20 19:15:35.806096 kubelet[3148]: I0620 19:15:35.806048 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-72j62" podStartSLOduration=55.80603476 podStartE2EDuration="55.80603476s" podCreationTimestamp="2025-06-20 19:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:15:33.800428215 +0000 UTC m=+58.982335522" watchObservedRunningTime="2025-06-20 19:15:35.80603476 +0000 UTC m=+60.987942062" Jun 20 19:15:35.806407 kubelet[3148]: I0620 19:15:35.806377 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gp8bp" podStartSLOduration=28.959617823 podStartE2EDuration="43.806369934s" podCreationTimestamp="2025-06-20 19:14:52 +0000 UTC" firstStartedPulling="2025-06-20 19:15:20.103084915 +0000 UTC m=+45.284992218" lastFinishedPulling="2025-06-20 19:15:34.949837028 +0000 UTC m=+60.131744329" observedRunningTime="2025-06-20 19:15:35.805781399 +0000 UTC m=+60.987688705" watchObservedRunningTime="2025-06-20 19:15:35.806369934 +0000 UTC m=+60.988277232" Jun 20 19:15:35.980304 kubelet[3148]: I0620 19:15:35.980286 3148 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 20 19:15:35.980436 kubelet[3148]: I0620 19:15:35.980318 3148 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 20 19:15:38.009374 kubelet[3148]: I0620 19:15:38.009219 3148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:15:40.650315 containerd[1756]: time="2025-06-20T19:15:40.650274749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:40.653483 containerd[1756]: time="2025-06-20T19:15:40.653444686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 20 19:15:40.657635 containerd[1756]: time="2025-06-20T19:15:40.657594034Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:40.664612 containerd[1756]: time="2025-06-20T19:15:40.664562381Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:15:40.665033 containerd[1756]: time="2025-06-20T19:15:40.664896011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 5.714975119s" Jun 20 19:15:40.665033 containerd[1756]: time="2025-06-20T19:15:40.664931873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 20 19:15:40.680796 containerd[1756]: time="2025-06-20T19:15:40.680770561Z" level=info msg="CreateContainer within sandbox \"341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 19:15:40.699060 containerd[1756]: time="2025-06-20T19:15:40.697662657Z" level=info msg="Container e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:15:40.800136 containerd[1756]: time="2025-06-20T19:15:40.800111067Z" level=info msg="CreateContainer within sandbox \"341ec7561477586dcf81239a9f76198923297b3d1abcead720c126c9868be699\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf\"" Jun 20 19:15:40.800531 containerd[1756]: time="2025-06-20T19:15:40.800501658Z" level=info msg="StartContainer for \"e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf\"" Jun 20 19:15:40.801798 containerd[1756]: time="2025-06-20T19:15:40.801766052Z" level=info msg="connecting to shim e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf" address="unix:///run/containerd/s/b5a32b3b9a5855a9ecfeccb8683b47a63e25e7852e06edb46135ae1aec27c49c" protocol=ttrpc version=3 Jun 20 19:15:40.823480 systemd[1]: Started cri-containerd-e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf.scope - libcontainer container e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf. 
Jun 20 19:15:40.863261 containerd[1756]: time="2025-06-20T19:15:40.863241372Z" level=info msg="StartContainer for \"e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf\" returns successfully" Jun 20 19:15:42.847234 containerd[1756]: time="2025-06-20T19:15:42.847190978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf\" id:\"abc17c022b65b583a7ae082d4340fb8a384b6cc68b3f5efa4a7cde8c80c9cf55\" pid:5655 exited_at:{seconds:1750446942 nanos:846813675}" Jun 20 19:15:42.860040 kubelet[3148]: I0620 19:15:42.859823 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f9d94d467-vmsw8" podStartSLOduration=39.78940498 podStartE2EDuration="50.859805782s" podCreationTimestamp="2025-06-20 19:14:52 +0000 UTC" firstStartedPulling="2025-06-20 19:15:29.595199755 +0000 UTC m=+54.777107054" lastFinishedPulling="2025-06-20 19:15:40.665600555 +0000 UTC m=+65.847507856" observedRunningTime="2025-06-20 19:15:41.827061479 +0000 UTC m=+67.008968786" watchObservedRunningTime="2025-06-20 19:15:42.859805782 +0000 UTC m=+68.041713086" Jun 20 19:15:45.047633 containerd[1756]: time="2025-06-20T19:15:45.047579134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\" id:\"e1429a9eaa2b4f205ce2ae23accafe63646e6c49f00750af1eb2f7b4507ff5c6\" pid:5678 exited_at:{seconds:1750446945 nanos:47164264}" Jun 20 19:15:49.078909 containerd[1756]: time="2025-06-20T19:15:49.078860450Z" level=info msg="TaskExit event in podsandbox handler container_id:\"306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54\" id:\"898f9c4d61d98f1770bacfe9fad0ff79bbc5df31d359225c372c5374fd468976\" pid:5701 exited_at:{seconds:1750446949 nanos:78537826}" Jun 20 19:15:59.169027 containerd[1756]: time="2025-06-20T19:15:59.168956437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\" id:\"b9783ea498fc9df988083e15245820107bcbff478ed8da64fd4e85f1df37c342\" pid:5730 exited_at:{seconds:1750446959 nanos:168005882}" Jun 20 19:16:08.026530 containerd[1756]: time="2025-06-20T19:16:08.026487031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf\" id:\"5875aefee8b00f306b5f40ca6c912de56f5cf156e65b566e4a7b1a9e4c000a1a\" pid:5762 exited_at:{seconds:1750446968 nanos:26181706}" Jun 20 19:16:12.849487 containerd[1756]: time="2025-06-20T19:16:12.849441962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf\" id:\"6d328a24773ae83662712ca34b98c9112d410ea0a3e178851e219de339f45d2b\" pid:5789 exited_at:{seconds:1750446972 nanos:849252235}" Jun 20 19:16:19.080776 containerd[1756]: time="2025-06-20T19:16:19.080712443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54\" id:\"eb71bc0edbeba580618320c962e145cfd8c082b546d05b881b89020c904bae16\" pid:5810 exited_at:{seconds:1750446979 nanos:80376811}" Jun 20 19:16:23.239601 systemd[1]: Started sshd@7-10.200.4.43:22-10.200.16.10:44608.service - OpenSSH per-connection server daemon (10.200.16.10:44608). 
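
The pod_startup_latency_tracker entry above reports both podStartE2EDuration=50.859805782s and podStartSLOduration=39.78940498 for calico-kube-controllers-7f9d94d467-vmsw8. Those values are consistent with the SLO figure being the end-to-end duration minus the image-pull window; a quick arithmetic check using the timestamps printed above (a sketch, not kubelet code, truncated to microsecond precision):

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f %z"
    created   = datetime.strptime("2025-06-20 19:14:52.000000 +0000", fmt)   # podCreationTimestamp
    pull_from = datetime.strptime("2025-06-20 19:15:29.595199 +0000", fmt)   # firstStartedPulling
    pull_to   = datetime.strptime("2025-06-20 19:15:40.665600 +0000", fmt)   # lastFinishedPulling
    observed  = datetime.strptime("2025-06-20 19:15:42.859805 +0000", fmt)   # watchObservedRunningTime

    e2e = (observed - created).total_seconds()           # ~50.860 s, matches podStartE2EDuration
    slo = e2e - (pull_to - pull_from).total_seconds()    # ~39.789 s, matches podStartSLOduration
    print(round(e2e, 3), round(slo, 3))
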
Jun 20 19:16:23.845039 sshd[5825]: Accepted publickey for core from 10.200.16.10 port 44608 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:16:23.846115 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:23.850173 systemd-logind[1712]: New session 10 of user core. Jun 20 19:16:23.853509 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:16:24.345757 sshd[5827]: Connection closed by 10.200.16.10 port 44608 Jun 20 19:16:24.346553 sshd-session[5825]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:24.351726 systemd[1]: sshd@7-10.200.4.43:22-10.200.16.10:44608.service: Deactivated successfully. Jun 20 19:16:24.355522 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:16:24.357460 systemd-logind[1712]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:16:24.359506 systemd-logind[1712]: Removed session 10. Jun 20 19:16:29.138958 containerd[1756]: time="2025-06-20T19:16:29.138916933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\" id:\"89756b26687266704fda780b3603042845810f65882e0934afbe0eb704e8ef21\" pid:5851 exited_at:{seconds:1750446989 nanos:138633058}" Jun 20 19:16:29.460591 systemd[1]: Started sshd@8-10.200.4.43:22-10.200.16.10:45094.service - OpenSSH per-connection server daemon (10.200.16.10:45094). Jun 20 19:16:30.064779 sshd[5862]: Accepted publickey for core from 10.200.16.10 port 45094 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:16:30.065742 sshd-session[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:30.069964 systemd-logind[1712]: New session 11 of user core. Jun 20 19:16:30.078529 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:16:30.532626 sshd[5864]: Connection closed by 10.200.16.10 port 45094 Jun 20 19:16:30.533499 sshd-session[5862]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:30.535637 systemd[1]: sshd@8-10.200.4.43:22-10.200.16.10:45094.service: Deactivated successfully. Jun 20 19:16:30.537251 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:16:30.538869 systemd-logind[1712]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:16:30.539667 systemd-logind[1712]: Removed session 11. Jun 20 19:16:35.639712 systemd[1]: Started sshd@9-10.200.4.43:22-10.200.16.10:45104.service - OpenSSH per-connection server daemon (10.200.16.10:45104). Jun 20 19:16:36.240967 sshd[5879]: Accepted publickey for core from 10.200.16.10 port 45104 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:16:36.242081 sshd-session[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:36.246070 systemd-logind[1712]: New session 12 of user core. Jun 20 19:16:36.250517 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 19:16:36.711768 sshd[5881]: Connection closed by 10.200.16.10 port 45104 Jun 20 19:16:36.712226 sshd-session[5879]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:36.715220 systemd[1]: sshd@9-10.200.4.43:22-10.200.16.10:45104.service: Deactivated successfully. Jun 20 19:16:36.716954 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:16:36.717740 systemd-logind[1712]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:16:36.718847 systemd-logind[1712]: Removed session 12. 
Jun 20 19:16:36.834629 systemd[1]: Started sshd@10-10.200.4.43:22-10.200.16.10:45118.service - OpenSSH per-connection server daemon (10.200.16.10:45118). Jun 20 19:16:37.444661 sshd[5895]: Accepted publickey for core from 10.200.16.10 port 45118 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:16:37.445789 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:37.452785 systemd-logind[1712]: New session 13 of user core. Jun 20 19:16:37.458540 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 19:16:37.935073 sshd[5897]: Connection closed by 10.200.16.10 port 45118 Jun 20 19:16:37.935577 sshd-session[5895]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:37.938769 systemd[1]: sshd@10-10.200.4.43:22-10.200.16.10:45118.service: Deactivated successfully. Jun 20 19:16:37.941510 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:16:37.942896 systemd-logind[1712]: Session 13 logged out. Waiting for processes to exit. Jun 20 19:16:37.944391 systemd-logind[1712]: Removed session 13. Jun 20 19:16:38.057327 systemd[1]: Started sshd@11-10.200.4.43:22-10.200.16.10:45130.service - OpenSSH per-connection server daemon (10.200.16.10:45130). Jun 20 19:16:38.711300 sshd[5907]: Accepted publickey for core from 10.200.16.10 port 45130 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:16:38.712382 sshd-session[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:38.716442 systemd-logind[1712]: New session 14 of user core. Jun 20 19:16:38.719504 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:16:39.203538 sshd[5909]: Connection closed by 10.200.16.10 port 45130 Jun 20 19:16:39.203997 sshd-session[5907]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:39.206344 systemd[1]: sshd@11-10.200.4.43:22-10.200.16.10:45130.service: Deactivated successfully. Jun 20 19:16:39.207977 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:16:39.209654 systemd-logind[1712]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:16:39.210634 systemd-logind[1712]: Removed session 14. Jun 20 19:16:42.850262 containerd[1756]: time="2025-06-20T19:16:42.850219102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf\" id:\"7cc7a97732e6485777df95b3cc6eec7ca0a1ef64584c2207f2a4b5985d68115f\" pid:5944 exited_at:{seconds:1750447002 nanos:850012003}" Jun 20 19:16:44.309101 systemd[1]: Started sshd@12-10.200.4.43:22-10.200.16.10:48144.service - OpenSSH per-connection server daemon (10.200.16.10:48144). Jun 20 19:16:44.901713 sshd[5956]: Accepted publickey for core from 10.200.16.10 port 48144 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:16:44.902711 sshd-session[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:44.907053 systemd-logind[1712]: New session 15 of user core. Jun 20 19:16:44.910486 systemd[1]: Started session-15.scope - Session 15 of User core. 
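
Every SSH session above follows the same lifecycle: Accepted publickey, pam_unix session opened, systemd-logind New session, Connection closed, service and scope deactivated, Removed session. Pairing the accept and close timestamps gives per-session wall time; for session 10 above (timestamp format assumed to match the journal prefix, which omits the year):

    from datetime import datetime

    TS = "%b %d %H:%M:%S.%f"   # e.g. "Jun 20 19:16:23.845039"; strptime defaults the year to 1900

    accepted = datetime.strptime("Jun 20 19:16:23.845039", TS)   # Accepted publickey (session 10)
    closed   = datetime.strptime("Jun 20 19:16:24.345757", TS)   # Connection closed

    print((closed - accepted).total_seconds())   # ~0.5 s for that session
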
Jun 20 19:16:45.051100 containerd[1756]: time="2025-06-20T19:16:45.051047451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\" id:\"9e88e3e8c8ca16ee52808e95321b538d95053b6f207831782f7b1c2c36011a76\" pid:5971 exited_at:{seconds:1750447005 nanos:50864208}" Jun 20 19:16:45.379208 sshd[5958]: Connection closed by 10.200.16.10 port 48144 Jun 20 19:16:45.379590 sshd-session[5956]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:45.382194 systemd[1]: sshd@12-10.200.4.43:22-10.200.16.10:48144.service: Deactivated successfully. Jun 20 19:16:45.383827 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:16:45.384559 systemd-logind[1712]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:16:45.385610 systemd-logind[1712]: Removed session 15. Jun 20 19:16:49.081455 containerd[1756]: time="2025-06-20T19:16:49.081410684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54\" id:\"4e02bcfb17409ec932728ac0cd0e7f55a8b3ec3e769ddd5da10e2935c6bbba6e\" pid:6004 exited_at:{seconds:1750447009 nanos:81111466}" Jun 20 19:16:50.485468 systemd[1]: Started sshd@13-10.200.4.43:22-10.200.16.10:54846.service - OpenSSH per-connection server daemon (10.200.16.10:54846). Jun 20 19:16:51.080667 sshd[6023]: Accepted publickey for core from 10.200.16.10 port 54846 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:16:51.081858 sshd-session[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:51.086438 systemd-logind[1712]: New session 16 of user core. Jun 20 19:16:51.093497 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 19:16:51.553431 sshd[6027]: Connection closed by 10.200.16.10 port 54846 Jun 20 19:16:51.553788 sshd-session[6023]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:51.556634 systemd[1]: sshd@13-10.200.4.43:22-10.200.16.10:54846.service: Deactivated successfully. Jun 20 19:16:51.558101 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:16:51.558686 systemd-logind[1712]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:16:51.559804 systemd-logind[1712]: Removed session 16. Jun 20 19:16:56.665866 systemd[1]: Started sshd@14-10.200.4.43:22-10.200.16.10:54850.service - OpenSSH per-connection server daemon (10.200.16.10:54850). Jun 20 19:16:57.261115 sshd[6053]: Accepted publickey for core from 10.200.16.10 port 54850 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:16:57.262124 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:57.265980 systemd-logind[1712]: New session 17 of user core. Jun 20 19:16:57.271504 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:16:57.724303 sshd[6055]: Connection closed by 10.200.16.10 port 54850 Jun 20 19:16:57.724783 sshd-session[6053]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:57.727725 systemd[1]: sshd@14-10.200.4.43:22-10.200.16.10:54850.service: Deactivated successfully. Jun 20 19:16:57.729489 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:16:57.730132 systemd-logind[1712]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:16:57.731175 systemd-logind[1712]: Removed session 17. 
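
The TaskExit events above carry exited_at as epoch seconds plus nanoseconds. Converting one back to wall-clock time lands on the same instant the journal prefix shows (a quick check, not a containerd API call); for the 19:16:49 event above:

    from datetime import datetime, timezone

    secs, nanos = 1750447009, 81111466      # exited_at from the TaskExit entry above
    t = datetime.fromtimestamp(secs + nanos / 1e9, tz=timezone.utc)
    print(t.isoformat())                    # ≈ 2025-06-20T19:16:49.081111+00:00
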
Jun 20 19:16:57.828734 systemd[1]: Started sshd@15-10.200.4.43:22-10.200.16.10:54854.service - OpenSSH per-connection server daemon (10.200.16.10:54854). Jun 20 19:16:58.422725 sshd[6067]: Accepted publickey for core from 10.200.16.10 port 54854 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:16:58.424225 sshd-session[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:58.428719 systemd-logind[1712]: New session 18 of user core. Jun 20 19:16:58.434851 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 19:16:58.917758 sshd[6069]: Connection closed by 10.200.16.10 port 54854 Jun 20 19:16:58.918215 sshd-session[6067]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:58.921114 systemd[1]: sshd@15-10.200.4.43:22-10.200.16.10:54854.service: Deactivated successfully. Jun 20 19:16:58.922616 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:16:58.923405 systemd-logind[1712]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:16:58.924615 systemd-logind[1712]: Removed session 18. Jun 20 19:16:59.021961 systemd[1]: Started sshd@16-10.200.4.43:22-10.200.16.10:58454.service - OpenSSH per-connection server daemon (10.200.16.10:58454). Jun 20 19:16:59.125099 containerd[1756]: time="2025-06-20T19:16:59.125069891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\" id:\"b45763a03847e1e03abe708fb531268ef029591f99f2907b3fec1417043fb358\" pid:6093 exited_at:{seconds:1750447019 nanos:124830856}" Jun 20 19:16:59.615940 sshd[6079]: Accepted publickey for core from 10.200.16.10 port 58454 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:16:59.617384 sshd-session[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:59.622202 systemd-logind[1712]: New session 19 of user core. Jun 20 19:16:59.631571 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:17:00.918386 sshd[6104]: Connection closed by 10.200.16.10 port 58454 Jun 20 19:17:00.918908 sshd-session[6079]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:00.922005 systemd[1]: sshd@16-10.200.4.43:22-10.200.16.10:58454.service: Deactivated successfully. Jun 20 19:17:00.923625 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:17:00.924299 systemd-logind[1712]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:17:00.925580 systemd-logind[1712]: Removed session 19. Jun 20 19:17:01.022869 systemd[1]: Started sshd@17-10.200.4.43:22-10.200.16.10:58462.service - OpenSSH per-connection server daemon (10.200.16.10:58462). Jun 20 19:17:01.621841 sshd[6121]: Accepted publickey for core from 10.200.16.10 port 58462 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:01.624150 sshd-session[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:01.631136 systemd-logind[1712]: New session 20 of user core. Jun 20 19:17:01.637811 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 19:17:02.162678 sshd[6123]: Connection closed by 10.200.16.10 port 58462 Jun 20 19:17:02.163128 sshd-session[6121]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:02.166271 systemd[1]: sshd@17-10.200.4.43:22-10.200.16.10:58462.service: Deactivated successfully. Jun 20 19:17:02.168178 systemd[1]: session-20.scope: Deactivated successfully. 
Jun 20 19:17:02.168986 systemd-logind[1712]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:17:02.170495 systemd-logind[1712]: Removed session 20. Jun 20 19:17:02.271938 systemd[1]: Started sshd@18-10.200.4.43:22-10.200.16.10:58468.service - OpenSSH per-connection server daemon (10.200.16.10:58468). Jun 20 19:17:02.870371 sshd[6133]: Accepted publickey for core from 10.200.16.10 port 58468 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:02.871070 sshd-session[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:02.875498 systemd-logind[1712]: New session 21 of user core. Jun 20 19:17:02.882523 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 19:17:03.354901 sshd[6135]: Connection closed by 10.200.16.10 port 58468 Jun 20 19:17:03.357051 sshd-session[6133]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:03.360482 systemd[1]: sshd@18-10.200.4.43:22-10.200.16.10:58468.service: Deactivated successfully. Jun 20 19:17:03.363750 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:17:03.365518 systemd-logind[1712]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:17:03.366588 systemd-logind[1712]: Removed session 21. Jun 20 19:17:08.019160 containerd[1756]: time="2025-06-20T19:17:08.019102025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf\" id:\"8f38fdfe542d95bab37a283d4800c3a1f0c4b0f3b709c7f52695d7a84d5b8fdd\" pid:6159 exited_at:{seconds:1750447028 nanos:18834430}" Jun 20 19:17:08.459210 systemd[1]: Started sshd@19-10.200.4.43:22-10.200.16.10:58482.service - OpenSSH per-connection server daemon (10.200.16.10:58482). Jun 20 19:17:09.054260 sshd[6170]: Accepted publickey for core from 10.200.16.10 port 58482 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:09.055280 sshd-session[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:09.059485 systemd-logind[1712]: New session 22 of user core. Jun 20 19:17:09.064501 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:17:09.538232 sshd[6172]: Connection closed by 10.200.16.10 port 58482 Jun 20 19:17:09.538702 sshd-session[6170]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:09.541082 systemd[1]: sshd@19-10.200.4.43:22-10.200.16.10:58482.service: Deactivated successfully. Jun 20 19:17:09.542672 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:17:09.544330 systemd-logind[1712]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:17:09.545165 systemd-logind[1712]: Removed session 22. Jun 20 19:17:12.851644 containerd[1756]: time="2025-06-20T19:17:12.851604089Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0f8842b04ac5f0b1c6ea306f93f684f99a75ba1bbb39829c440dbe96a4c29bf\" id:\"3df6b69f6ac72f6a1ca3cc0ce53e7534289530ca19d4fb4245943b7d993dda68\" pid:6198 exited_at:{seconds:1750447032 nanos:850965191}" Jun 20 19:17:14.650337 systemd[1]: Started sshd@20-10.200.4.43:22-10.200.16.10:46494.service - OpenSSH per-connection server daemon (10.200.16.10:46494). 
Jun 20 19:17:15.258561 sshd[6210]: Accepted publickey for core from 10.200.16.10 port 46494 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:15.261728 sshd-session[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:15.268134 systemd-logind[1712]: New session 23 of user core. Jun 20 19:17:15.275648 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 19:17:15.730261 sshd[6212]: Connection closed by 10.200.16.10 port 46494 Jun 20 19:17:15.730741 sshd-session[6210]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:15.733562 systemd[1]: sshd@20-10.200.4.43:22-10.200.16.10:46494.service: Deactivated successfully. Jun 20 19:17:15.735165 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:17:15.735800 systemd-logind[1712]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:17:15.736835 systemd-logind[1712]: Removed session 23. Jun 20 19:17:19.080544 containerd[1756]: time="2025-06-20T19:17:19.080504094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"306f922b6e2715c215a8b1461d02e1d8cb2a0daa5065a3230ea04e43c3ddfb54\" id:\"4e69bbf52c71a39eadf941b8e91fa99effb6e16049d18f073d34b74ec3c21800\" pid:6235 exited_at:{seconds:1750447039 nanos:80246035}" Jun 20 19:17:20.842266 systemd[1]: Started sshd@21-10.200.4.43:22-10.200.16.10:40700.service - OpenSSH per-connection server daemon (10.200.16.10:40700). Jun 20 19:17:21.441171 sshd[6248]: Accepted publickey for core from 10.200.16.10 port 40700 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:21.442109 sshd-session[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:21.446028 systemd-logind[1712]: New session 24 of user core. Jun 20 19:17:21.447528 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 19:17:21.903606 sshd[6250]: Connection closed by 10.200.16.10 port 40700 Jun 20 19:17:21.904086 sshd-session[6248]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:21.906440 systemd[1]: sshd@21-10.200.4.43:22-10.200.16.10:40700.service: Deactivated successfully. Jun 20 19:17:21.908002 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 19:17:21.909173 systemd-logind[1712]: Session 24 logged out. Waiting for processes to exit. Jun 20 19:17:21.910296 systemd-logind[1712]: Removed session 24. Jun 20 19:17:27.009681 systemd[1]: Started sshd@22-10.200.4.43:22-10.200.16.10:40714.service - OpenSSH per-connection server daemon (10.200.16.10:40714). Jun 20 19:17:27.607457 sshd[6262]: Accepted publickey for core from 10.200.16.10 port 40714 ssh2: RSA SHA256:xD0kfKmJ7EC4AAoCWFs/jHoVnPZ/qqmZ1Ve/vcfGzM8 Jun 20 19:17:27.608640 sshd-session[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:17:27.612608 systemd-logind[1712]: New session 25 of user core. Jun 20 19:17:27.616478 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 19:17:28.075108 sshd[6264]: Connection closed by 10.200.16.10 port 40714 Jun 20 19:17:28.075514 sshd-session[6262]: pam_unix(sshd:session): session closed for user core Jun 20 19:17:28.077988 systemd[1]: sshd@22-10.200.4.43:22-10.200.16.10:40714.service: Deactivated successfully. Jun 20 19:17:28.079582 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 19:17:28.080183 systemd-logind[1712]: Session 25 logged out. Waiting for processes to exit. Jun 20 19:17:28.081308 systemd-logind[1712]: Removed session 25. 
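
The per-connection sshd units above encode both TCP endpoints in their names, e.g. sshd@22-10.200.4.43:22-10.200.16.10:40714.service. A throwaway parse of that naming pattern (the pattern is inferred from these lines, not from systemd documentation):

    import re

    UNIT_RE = re.compile(
        r"sshd@(?P<n>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service"
    )

    m = UNIT_RE.search("sshd@22-10.200.4.43:22-10.200.16.10:40714.service")
    if m:
        # instance 22: local 10.200.4.43:22, peer 10.200.16.10:40714
        print(m.group("n"), m.group("laddr"), m.group("lport"), m.group("raddr"), m.group("rport"))
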
Jun 20 19:17:29.144573 containerd[1756]: time="2025-06-20T19:17:29.144535292Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e69d3f1855ead6ab3f4cdc89809e1fa4751e99e0c075254e4e05dd9fe0543a0\" id:\"c53594f02462a1cf366cba1f69ac29ba2941a11051a3e38e0feb34c2e0e3d1e0\" pid:6286 exited_at:{seconds:1750447049 nanos:144002510}"
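
The remaining entries are periodic TaskExit events for the same few long-running containers. A small tally sketch (regex inferred from the lines above; the inner quotes appear backslash-escaped in the raw journal text, so the backslash is optional in the pattern) that counts exits per container ID when fed journal output, e.g. from journalctl -u containerd:

    import re
    import sys
    from collections import Counter

    TASK_EXIT = re.compile(
        r'TaskExit event in podsandbox handler container_id:\\?"(?P<cid>[0-9a-f]{64})'
    )

    counts = Counter()
    for line in sys.stdin:
        m = TASK_EXIT.search(line)
        if m:
            counts[m.group("cid")[:12]] += 1   # shorten ids for display

    for cid, n in counts.most_common():
        print(cid, n)
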