Sep 4 00:03:56.014183 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 3 22:05:39 -00 2025
Sep 4 00:03:56.014212 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:03:56.014223 kernel: BIOS-provided physical RAM map:
Sep 4 00:03:56.014230 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 00:03:56.014237 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 4 00:03:56.014244 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Sep 4 00:03:56.014254 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Sep 4 00:03:56.014261 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Sep 4 00:03:56.014268 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Sep 4 00:03:56.014274 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 4 00:03:56.014281 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 4 00:03:56.014288 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 4 00:03:56.014295 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 4 00:03:56.014302 kernel: printk: legacy bootconsole [earlyser0] enabled
Sep 4 00:03:56.014312 kernel: NX (Execute Disable) protection: active
Sep 4 00:03:56.014320 kernel: APIC: Static calls initialized
Sep 4 00:03:56.014327 kernel: efi: EFI v2.7 by Microsoft
Sep 4 00:03:56.014335 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018
Sep 4 00:03:56.014342 kernel: random: crng init done
Sep 4 00:03:56.014350 kernel: secureboot: Secure boot disabled
Sep 4 00:03:56.014357 kernel: SMBIOS 3.1.0 present.
Sep 4 00:03:56.014365 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Sep 4 00:03:56.014374 kernel: DMI: Memory slots populated: 2/2
Sep 4 00:03:56.014381 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 4 00:03:56.014388 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Sep 4 00:03:56.014395 kernel: Hyper-V: Nested features: 0x3e0101
Sep 4 00:03:56.014403 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 4 00:03:56.014410 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 4 00:03:56.014419 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 4 00:03:56.014427 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 4 00:03:56.014435 kernel: tsc: Detected 2299.999 MHz processor
Sep 4 00:03:56.014444 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 00:03:56.014454 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 00:03:56.014464 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Sep 4 00:03:56.014472 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 00:03:56.014481 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 00:03:56.014489 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Sep 4 00:03:56.014496 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Sep 4 00:03:56.014505 kernel: Using GB pages for direct mapping
Sep 4 00:03:56.014514 kernel: ACPI: Early table checksum verification disabled
Sep 4 00:03:56.014528 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 4 00:03:56.014542 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:03:56.014552 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:03:56.014560 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 4 00:03:56.014568 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 4 00:03:56.014577 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:03:56.014585 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:03:56.014599 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:03:56.014609 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 4 00:03:56.014616 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 4 00:03:56.014623 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:03:56.014631 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 4 00:03:56.014637 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Sep 4 00:03:56.014642 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 4 00:03:56.014647 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 4 00:03:56.014653 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 4 00:03:56.014658 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 4 00:03:56.014663 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Sep 4 00:03:56.014668 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Sep 4 00:03:56.014674 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 4 00:03:56.014679 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 4 00:03:56.014684 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Sep 4 00:03:56.014689 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Sep 4 00:03:56.014694 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Sep 4 00:03:56.014701 kernel: Zone ranges:
Sep 4 00:03:56.014706 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 00:03:56.014711 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 4 00:03:56.014716 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 4 00:03:56.014721 kernel: Device empty
Sep 4 00:03:56.014726 kernel: Movable zone start for each node
Sep 4 00:03:56.014731 kernel: Early memory node ranges
Sep 4 00:03:56.014736 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 4 00:03:56.014741 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Sep 4 00:03:56.014749 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Sep 4 00:03:56.014758 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 4 00:03:56.014762 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 4 00:03:56.014768 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 4 00:03:56.014772 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 00:03:56.014778 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 4 00:03:56.014783 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 4 00:03:56.014788 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Sep 4 00:03:56.014793 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 4 00:03:56.014798 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 00:03:56.014804 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 00:03:56.014809 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 00:03:56.014816 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 4 00:03:56.014825 kernel: TSC deadline timer available
Sep 4 00:03:56.014835 kernel: CPU topo: Max. logical packages: 1
Sep 4 00:03:56.014842 kernel: CPU topo: Max. logical dies: 1
Sep 4 00:03:56.014848 kernel: CPU topo: Max. dies per package: 1
Sep 4 00:03:56.014853 kernel: CPU topo: Max. threads per core: 2
Sep 4 00:03:56.014858 kernel: CPU topo: Num. cores per package: 1
Sep 4 00:03:56.014864 kernel: CPU topo: Num. threads per package: 2
Sep 4 00:03:56.014869 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 4 00:03:56.014874 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 4 00:03:56.014879 kernel: Booting paravirtualized kernel on Hyper-V
Sep 4 00:03:56.014884 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 00:03:56.014891 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 00:03:56.014900 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 4 00:03:56.014908 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 4 00:03:56.014917 kernel: pcpu-alloc: [0] 0 1
Sep 4 00:03:56.014925 kernel: Hyper-V: PV spinlocks enabled
Sep 4 00:03:56.014931 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 00:03:56.014937 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:03:56.014943 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 00:03:56.014948 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 4 00:03:56.014953 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 00:03:56.014958 kernel: Fallback order for Node 0: 0
Sep 4 00:03:56.014963 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Sep 4 00:03:56.014970 kernel: Policy zone: Normal
Sep 4 00:03:56.014978 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 00:03:56.014986 kernel: software IO TLB: area num 2.
Sep 4 00:03:56.014994 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 00:03:56.015000 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 4 00:03:56.015016 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 00:03:56.015025 kernel: Dynamic Preempt: voluntary
Sep 4 00:03:56.015034 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 00:03:56.015043 kernel: rcu: RCU event tracing is enabled.
Sep 4 00:03:56.015060 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 00:03:56.015067 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 00:03:56.015073 kernel: Rude variant of Tasks RCU enabled.
Sep 4 00:03:56.015079 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 00:03:56.015085 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 00:03:56.015094 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 00:03:56.015103 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:03:56.015112 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:03:56.015118 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:03:56.015124 kernel: Using NULL legacy PIC
Sep 4 00:03:56.015131 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 4 00:03:56.015136 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 00:03:56.015142 kernel: Console: colour dummy device 80x25
Sep 4 00:03:56.015147 kernel: printk: legacy console [tty1] enabled
Sep 4 00:03:56.015153 kernel: printk: legacy console [ttyS0] enabled
Sep 4 00:03:56.015158 kernel: printk: legacy bootconsole [earlyser0] disabled
Sep 4 00:03:56.015169 kernel: ACPI: Core revision 20240827
Sep 4 00:03:56.015178 kernel: Failed to register legacy timer interrupt
Sep 4 00:03:56.015187 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 00:03:56.015194 kernel: x2apic enabled
Sep 4 00:03:56.015199 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 00:03:56.015205 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0
Sep 4 00:03:56.015210 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 4 00:03:56.015218 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Sep 4 00:03:56.015228 kernel: Hyper-V: Using IPI hypercalls
Sep 4 00:03:56.015237 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 4 00:03:56.015243 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 4 00:03:56.015249 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 4 00:03:56.015254 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 4 00:03:56.015260 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 4 00:03:56.015267 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 4 00:03:56.015279 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Sep 4 00:03:56.015287 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Sep 4 00:03:56.015293 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 00:03:56.015300 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 4 00:03:56.015305 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 4 00:03:56.015311 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 00:03:56.015320 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 00:03:56.015330 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 00:03:56.015339 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 4 00:03:56.015346 kernel: RETBleed: Vulnerable
Sep 4 00:03:56.015352 kernel: Speculative Store Bypass: Vulnerable
Sep 4 00:03:56.015357 kernel: active return thunk: its_return_thunk
Sep 4 00:03:56.015362 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 4 00:03:56.015367 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 00:03:56.015374 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 00:03:56.015382 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 00:03:56.015391 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 4 00:03:56.015399 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 4 00:03:56.015406 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 4 00:03:56.015412 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Sep 4 00:03:56.015417 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Sep 4 00:03:56.015422 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Sep 4 00:03:56.015428 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 00:03:56.015436 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 4 00:03:56.015446 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 4 00:03:56.015455 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 4 00:03:56.015461 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Sep 4 00:03:56.015466 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Sep 4 00:03:56.015472 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Sep 4 00:03:56.015477 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Sep 4 00:03:56.015485 kernel: Freeing SMP alternatives memory: 32K
Sep 4 00:03:56.015494 kernel: pid_max: default: 32768 minimum: 301
Sep 4 00:03:56.015503 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 00:03:56.015511 kernel: landlock: Up and running.
Sep 4 00:03:56.015516 kernel: SELinux: Initializing.
Sep 4 00:03:56.015521 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 00:03:56.015528 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 00:03:56.015534 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Sep 4 00:03:56.015543 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Sep 4 00:03:56.015552 kernel: signal: max sigframe size: 11952
Sep 4 00:03:56.015563 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 00:03:56.015571 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 00:03:56.015576 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 00:03:56.015582 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 00:03:56.015588 kernel: smp: Bringing up secondary CPUs ...
Sep 4 00:03:56.015598 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 00:03:56.015612 kernel: .... node #0, CPUs: #1
Sep 4 00:03:56.015621 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 00:03:56.015631 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 4 00:03:56.015639 kernel: Memory: 8079080K/8383228K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 297940K reserved, 0K cma-reserved)
Sep 4 00:03:56.015645 kernel: devtmpfs: initialized
Sep 4 00:03:56.015650 kernel: x86/mm: Memory block size: 128MB
Sep 4 00:03:56.015656 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 4 00:03:56.015661 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 00:03:56.015670 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 00:03:56.015683 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 00:03:56.015692 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 00:03:56.015701 kernel: audit: initializing netlink subsys (disabled)
Sep 4 00:03:56.015708 kernel: audit: type=2000 audit(1756944232.078:1): state=initialized audit_enabled=0 res=1
Sep 4 00:03:56.015713 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 00:03:56.015719 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 00:03:56.015724 kernel: cpuidle: using governor menu
Sep 4 00:03:56.015730 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 00:03:56.015735 kernel: dca service started, version 1.12.1
Sep 4 00:03:56.015745 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Sep 4 00:03:56.015755 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Sep 4 00:03:56.015763 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 00:03:56.015769 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 00:03:56.015775 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 00:03:56.015780 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 00:03:56.015786 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 00:03:56.015791 kernel: ACPI: Added _OSI(Module Device)
Sep 4 00:03:56.015800 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 00:03:56.015809 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 00:03:56.015820 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 00:03:56.015828 kernel: ACPI: Interpreter enabled
Sep 4 00:03:56.015833 kernel: ACPI: PM: (supports S0 S5)
Sep 4 00:03:56.015839 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 00:03:56.015845 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 00:03:56.015853 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 4 00:03:56.015863 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 4 00:03:56.015872 kernel: iommu: Default domain type: Translated
Sep 4 00:03:56.015884 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 00:03:56.015890 kernel: efivars: Registered efivars operations
Sep 4 00:03:56.015896 kernel: PCI: Using ACPI for IRQ routing
Sep 4 00:03:56.015901 kernel: PCI: System does not support PCI
Sep 4 00:03:56.015907 kernel: vgaarb: loaded
Sep 4 00:03:56.015915 kernel: clocksource: Switched to clocksource tsc-early
Sep 4 00:03:56.015923 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 00:03:56.015932 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 00:03:56.015940 kernel: pnp: PnP ACPI init
Sep 4 00:03:56.015949 kernel: pnp: PnP ACPI: found 3 devices
Sep 4 00:03:56.015957 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 00:03:56.015965 kernel: NET: Registered PF_INET protocol family
Sep 4 00:03:56.015974 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 00:03:56.015983 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 4 00:03:56.015992 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 00:03:56.016000 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 00:03:56.018051 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 4 00:03:56.018065 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 4 00:03:56.018073 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 00:03:56.018082 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 00:03:56.018090 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 00:03:56.018099 kernel: NET: Registered PF_XDP protocol family
Sep 4 00:03:56.018107 kernel: PCI: CLS 0 bytes, default 64
Sep 4 00:03:56.018115 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 4 00:03:56.018124 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB)
Sep 4 00:03:56.018133 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Sep 4 00:03:56.018144 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Sep 4 00:03:56.018153 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Sep 4 00:03:56.018161 kernel: clocksource: Switched to clocksource tsc
Sep 4 00:03:56.018170 kernel: Initialise system trusted keyrings
Sep 4 00:03:56.018179 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 4 00:03:56.018188 kernel: Key type asymmetric registered
Sep 4 00:03:56.018197 kernel: Asymmetric key parser 'x509' registered
Sep 4 00:03:56.018205 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 00:03:56.018214 kernel: io scheduler mq-deadline registered
Sep 4 00:03:56.018224 kernel: io scheduler kyber registered
Sep 4 00:03:56.018233 kernel: io scheduler bfq registered
Sep 4 00:03:56.018241 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 00:03:56.018250 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 00:03:56.018259 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 00:03:56.018268 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 4 00:03:56.018277 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 00:03:56.018286 kernel: i8042: PNP: No PS/2 controller found.
Sep 4 00:03:56.018429 kernel: rtc_cmos 00:02: registered as rtc0
Sep 4 00:03:56.018509 kernel: rtc_cmos 00:02: setting system clock to 2025-09-04T00:03:55 UTC (1756944235)
Sep 4 00:03:56.018581 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 4 00:03:56.018591 kernel: intel_pstate: Intel P-state driver initializing
Sep 4 00:03:56.018600 kernel: efifb: probing for efifb
Sep 4 00:03:56.018609 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 4 00:03:56.018618 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 4 00:03:56.018627 kernel: efifb: scrolling: redraw
Sep 4 00:03:56.018636 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 00:03:56.018646 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 00:03:56.018655 kernel: fb0: EFI VGA frame buffer device
Sep 4 00:03:56.018664 kernel: pstore: Using crash dump compression: deflate
Sep 4 00:03:56.018672 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 00:03:56.018681 kernel: NET: Registered PF_INET6 protocol family
Sep 4 00:03:56.018689 kernel: Segment Routing with IPv6
Sep 4 00:03:56.018698 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 00:03:56.018707 kernel: NET: Registered PF_PACKET protocol family
Sep 4 00:03:56.018716 kernel: Key type dns_resolver registered
Sep 4 00:03:56.018726 kernel: IPI shorthand broadcast: enabled
Sep 4 00:03:56.018735 kernel: sched_clock: Marking stable (3219004259, 112211208)->(3694519214, -363303747)
Sep 4 00:03:56.018744 kernel: registered taskstats version 1
Sep 4 00:03:56.018752 kernel: Loading compiled-in X.509 certificates
Sep 4 00:03:56.018761 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 247a8159a15e16f8eb89737aa66cd9cf9bbb3c10'
Sep 4 00:03:56.018770 kernel: Demotion targets for Node 0: null
Sep 4 00:03:56.018778 kernel: Key type .fscrypt registered
Sep 4 00:03:56.018787 kernel: Key type fscrypt-provisioning registered
Sep 4 00:03:56.018796 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 00:03:56.018807 kernel: ima: Allocated hash algorithm: sha1
Sep 4 00:03:56.018816 kernel: ima: No architecture policies found
Sep 4 00:03:56.018824 kernel: clk: Disabling unused clocks
Sep 4 00:03:56.018832 kernel: Warning: unable to open an initial console.
Sep 4 00:03:56.018841 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 4 00:03:56.018850 kernel: Write protecting the kernel read-only data: 24576k
Sep 4 00:03:56.018859 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 4 00:03:56.018867 kernel: Run /init as init process
Sep 4 00:03:56.018876 kernel: with arguments:
Sep 4 00:03:56.018886 kernel: /init
Sep 4 00:03:56.018895 kernel: with environment:
Sep 4 00:03:56.018903 kernel: HOME=/
Sep 4 00:03:56.018911 kernel: TERM=linux
Sep 4 00:03:56.018919 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 00:03:56.018930 systemd[1]: Successfully made /usr/ read-only.
Sep 4 00:03:56.018942 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:03:56.018954 systemd[1]: Detected virtualization microsoft.
Sep 4 00:03:56.018963 systemd[1]: Detected architecture x86-64.
Sep 4 00:03:56.018972 systemd[1]: Running in initrd.
Sep 4 00:03:56.018981 systemd[1]: No hostname configured, using default hostname.
Sep 4 00:03:56.018990 systemd[1]: Hostname set to .
Sep 4 00:03:56.018999 systemd[1]: Initializing machine ID from random generator.
Sep 4 00:03:56.019031 systemd[1]: Queued start job for default target initrd.target.
Sep 4 00:03:56.019040 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:03:56.019050 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:03:56.019062 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 00:03:56.019072 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:03:56.019081 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 00:03:56.019091 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 00:03:56.019102 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 00:03:56.019111 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 00:03:56.019123 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:03:56.019132 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:03:56.019142 systemd[1]: Reached target paths.target - Path Units.
Sep 4 00:03:56.019151 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:03:56.019160 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:03:56.019169 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 00:03:56.019179 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:03:56.019188 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:03:56.019198 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 00:03:56.019209 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 00:03:56.019218 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:03:56.019228 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:03:56.019237 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:03:56.019246 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 00:03:56.019255 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 00:03:56.019264 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:03:56.019273 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 00:03:56.019285 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 4 00:03:56.019295 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 00:03:56.019305 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:03:56.019322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:03:56.019333 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:03:56.019359 systemd-journald[204]: Collecting audit messages is disabled. Sep 4 00:03:56.019385 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 00:03:56.019396 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 00:03:56.019408 systemd-journald[204]: Journal started Sep 4 00:03:56.019432 systemd-journald[204]: Runtime Journal (/run/log/journal/ded64ceba3554d63ad5974e3e4344232) is 8M, max 158.9M, 150.9M free. Sep 4 00:03:56.025581 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 00:03:56.028284 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 00:03:56.033233 systemd-modules-load[206]: Inserted module 'overlay' Sep 4 00:03:56.035119 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 00:03:56.036070 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 00:03:56.051323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:03:56.055164 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 00:03:56.069283 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 4 00:03:56.071119 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 00:03:56.074907 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:03:56.077107 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 00:03:56.090396 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 00:03:56.104128 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 00:03:56.106862 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 00:03:56.114080 kernel: Bridge firewalling registered Sep 4 00:03:56.108511 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 00:03:56.111919 systemd-modules-load[206]: Inserted module 'br_netfilter' Sep 4 00:03:56.115990 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 00:03:56.122116 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:03:56.135111 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e Sep 4 00:03:56.138851 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:03:56.148497 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 00:03:56.190845 systemd-resolved[264]: Positive Trust Anchors: Sep 4 00:03:56.190858 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 00:03:56.190897 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 00:03:56.213253 systemd-resolved[264]: Defaulting to hostname 'linux'. Sep 4 00:03:56.216174 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 00:03:56.222863 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:03:56.231428 kernel: SCSI subsystem initialized Sep 4 00:03:56.238020 kernel: Loading iSCSI transport class v2.0-870. Sep 4 00:03:56.247021 kernel: iscsi: registered transport (tcp) Sep 4 00:03:56.266025 kernel: iscsi: registered transport (qla4xxx) Sep 4 00:03:56.266064 kernel: QLogic iSCSI HBA Driver Sep 4 00:03:56.279129 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 00:03:56.292111 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:03:56.295526 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 00:03:56.330044 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 00:03:56.333334 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 00:03:56.381026 kernel: raid6: avx512x4 gen() 44103 MB/s Sep 4 00:03:56.398016 kernel: raid6: avx512x2 gen() 42569 MB/s Sep 4 00:03:56.416015 kernel: raid6: avx512x1 gen() 24987 MB/s Sep 4 00:03:56.434020 kernel: raid6: avx2x4 gen() 34584 MB/s Sep 4 00:03:56.451014 kernel: raid6: avx2x2 gen() 37217 MB/s Sep 4 00:03:56.468458 kernel: raid6: avx2x1 gen() 30792 MB/s Sep 4 00:03:56.468543 kernel: raid6: using algorithm avx512x4 gen() 44103 MB/s Sep 4 00:03:56.487019 kernel: raid6: .... xor() 7562 MB/s, rmw enabled Sep 4 00:03:56.487038 kernel: raid6: using avx512x2 recovery algorithm Sep 4 00:03:56.506028 kernel: xor: automatically using best checksumming function avx Sep 4 00:03:56.632027 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 00:03:56.636952 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 00:03:56.641193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:03:56.668763 systemd-udevd[454]: Using default interface naming scheme 'v255'. Sep 4 00:03:56.673508 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:03:56.680713 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 00:03:56.700483 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Sep 4 00:03:56.718983 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 00:03:56.722118 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 00:03:56.758054 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 00:03:56.764668 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 4 00:03:56.808020 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 00:03:56.828644 kernel: hv_vmbus: Vmbus version:5.3 Sep 4 00:03:56.833037 kernel: AES CTR mode by8 optimization enabled Sep 4 00:03:56.840819 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:03:56.842917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:03:56.849318 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:03:56.853239 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 4 00:03:56.865045 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 4 00:03:56.862644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:03:56.868840 kernel: hv_vmbus: registering driver hv_pci Sep 4 00:03:56.872225 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 00:03:56.872338 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 00:03:56.884783 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Sep 4 00:03:56.881092 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:03:56.892596 kernel: hv_vmbus: registering driver hv_netvsc Sep 4 00:03:56.881174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:03:56.884459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 4 00:03:56.898024 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 00:03:56.900027 kernel: PTP clock support registered Sep 4 00:03:56.917240 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Sep 4 00:03:56.917665 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2d355f (unnamed net_device) (uninitialized): VF slot 1 added Sep 4 00:03:56.923024 kernel: hv_vmbus: registering driver hv_storvsc Sep 4 00:03:56.927298 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Sep 4 00:03:56.927585 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 00:03:56.943605 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Sep 4 00:03:56.943666 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Sep 4 00:03:56.943689 kernel: hv_utils: Registering HyperV Utility Driver Sep 4 00:03:56.943702 kernel: hv_vmbus: registering driver hv_utils Sep 4 00:03:56.943713 kernel: scsi host0: storvsc_host_t Sep 4 00:03:56.943857 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 4 00:03:56.943877 kernel: hv_vmbus: registering driver hid_hyperv Sep 4 00:03:56.948022 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 4 00:03:56.952812 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 4 00:03:56.957418 kernel: hv_utils: Shutdown IC version 3.2 Sep 4 00:03:56.957466 kernel: hv_utils: Heartbeat IC version 3.0 Sep 4 00:03:57.240535 kernel: hv_utils: TimeSync IC version 4.0 Sep 4 00:03:57.244327 systemd-resolved[264]: Clock change detected. Flushing caches. Sep 4 00:03:57.254113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 00:03:57.271482 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 00:03:57.271649 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 4 00:03:57.271775 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Sep 4 00:03:57.271881 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 00:03:57.274513 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 4 00:03:57.287523 kernel: nvme nvme0: pci function c05b:00:00.0 Sep 4 00:03:57.290737 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Sep 4 00:03:57.297558 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#81 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 4 00:03:57.314521 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#114 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 4 00:03:57.450536 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 4 00:03:57.461521 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 00:03:57.715528 kernel: nvme nvme0: using unchecked data buffer Sep 4 00:03:58.150305 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Sep 4 00:03:58.170613 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Sep 4 00:03:58.217384 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Sep 4 00:03:58.224437 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Sep 4 00:03:58.224581 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. 
Sep 4 00:03:58.239370 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Sep 4 00:03:58.240020 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Sep 4 00:03:58.240140 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 00:03:58.234617 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 00:03:58.256532 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Sep 4 00:03:58.256593 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Sep 4 00:03:58.253973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 4 00:03:58.265564 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Sep 4 00:03:58.265586 kernel: pci 7870:00:00.0: enabling Extended Tags Sep 4 00:03:58.280914 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 00:03:58.283416 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 00:03:58.283568 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Sep 4 00:03:58.288030 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 00:03:58.289227 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 00:03:58.291722 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Sep 4 00:03:58.298590 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:03:58.304606 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 00:03:58.312189 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Sep 4 00:03:58.314999 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Sep 4 00:03:58.331662 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Sep 4 00:03:58.331846 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2d355f eth0: VF registering: eth1 Sep 4 00:03:58.333595 kernel: mana 7870:00:00.0 eth1: joined to eth0 Sep 4 00:03:58.337932 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 00:03:58.343036 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Sep 4 00:03:59.324539 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 00:03:59.324608 disk-uuid[679]: The operation has completed successfully. Sep 4 00:03:59.372037 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 00:03:59.372135 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 00:03:59.413973 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 00:03:59.427660 sh[714]: Success Sep 4 00:03:59.465591 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 00:03:59.465644 kernel: device-mapper: uevent: version 1.0.3 Sep 4 00:03:59.467106 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 4 00:03:59.476542 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 4 00:03:59.706196 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 00:03:59.712603 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 00:03:59.730955 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 4 00:03:59.741519 kernel: BTRFS: device fsid 8a9c2e34-3d3c-49a9-acce-59bf90003071 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (727) Sep 4 00:03:59.741555 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9c2e34-3d3c-49a9-acce-59bf90003071 Sep 4 00:03:59.744089 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:03:59.993657 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 00:03:59.993880 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 00:03:59.993895 kernel: BTRFS info (device dm-0): enabling free space tree Sep 4 00:04:00.031202 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 00:04:00.032230 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 4 00:04:00.043424 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 00:04:00.046828 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 00:04:00.057224 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 4 00:04:00.078646 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (750) Sep 4 00:04:00.078684 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:04:00.080448 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:04:00.104296 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 00:04:00.104333 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 4 00:04:00.104347 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 4 00:04:00.110518 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:04:00.111655 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 00:04:00.117613 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 00:04:00.141757 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 00:04:00.144615 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 00:04:00.178684 systemd-networkd[896]: lo: Link UP Sep 4 00:04:00.178692 systemd-networkd[896]: lo: Gained carrier Sep 4 00:04:00.180198 systemd-networkd[896]: Enumeration completed Sep 4 00:04:00.189311 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 4 00:04:00.189658 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 4 00:04:00.189805 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2d355f eth0: Data path switched to VF: enP30832s1 Sep 4 00:04:00.180581 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 00:04:00.181469 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 4 00:04:00.181473 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 00:04:00.184704 systemd[1]: Reached target network.target - Network. Sep 4 00:04:00.193341 systemd-networkd[896]: enP30832s1: Link UP Sep 4 00:04:00.193411 systemd-networkd[896]: eth0: Link UP Sep 4 00:04:00.193539 systemd-networkd[896]: eth0: Gained carrier Sep 4 00:04:00.193551 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:04:00.203722 systemd-networkd[896]: enP30832s1: Gained carrier Sep 4 00:04:00.227530 systemd-networkd[896]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 00:04:01.307693 ignition[855]: Ignition 2.21.0 Sep 4 00:04:01.307706 ignition[855]: Stage: fetch-offline Sep 4 00:04:01.309583 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 00:04:01.307802 ignition[855]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:01.314868 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 00:04:01.307809 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:01.307903 ignition[855]: parsed url from cmdline: "" Sep 4 00:04:01.307906 ignition[855]: no config URL provided Sep 4 00:04:01.307911 ignition[855]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 00:04:01.307917 ignition[855]: no config at "/usr/lib/ignition/user.ign" Sep 4 00:04:01.307921 ignition[855]: failed to fetch config: resource requires networking Sep 4 00:04:01.308167 ignition[855]: Ignition finished successfully Sep 4 00:04:01.337572 ignition[912]: Ignition 2.21.0 Sep 4 00:04:01.337581 ignition[912]: Stage: fetch Sep 4 00:04:01.337758 ignition[912]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:01.337765 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:01.337830 ignition[912]: parsed url from cmdline: "" Sep 4 00:04:01.337833 ignition[912]: no config URL provided Sep 4 00:04:01.337837 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 00:04:01.337843 ignition[912]: no config at "/usr/lib/ignition/user.ign" Sep 4 00:04:01.337875 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 4 00:04:01.400603 ignition[912]: GET result: OK Sep 4 00:04:01.400669 ignition[912]: config has been read from IMDS userdata Sep 4 00:04:01.400696 ignition[912]: parsing config with SHA512: a4498e91acfaac71a8db99e59bbd03f2f165dc8fdc02a1495517877017c559fa3bde51afbe6b36d3da28dd2278d90601e55ca8b0be581d8b4ff480b8006d050a Sep 4 00:04:01.404199 unknown[912]: fetched base config from "system" Sep 4 00:04:01.404309 unknown[912]: fetched base config from "system" Sep 4 00:04:01.404662 ignition[912]: fetch: fetch complete Sep 4 00:04:01.404315 unknown[912]: fetched user config from "azure" Sep 4 00:04:01.404666 ignition[912]: fetch: fetch passed Sep 4 00:04:01.407009 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 00:04:01.404700 ignition[912]: Ignition finished successfully Sep 4 00:04:01.416647 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 00:04:01.437307 ignition[918]: Ignition 2.21.0 Sep 4 00:04:01.437318 ignition[918]: Stage: kargs Sep 4 00:04:01.437996 ignition[918]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:01.440805 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 00:04:01.438007 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:01.444764 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 00:04:01.439401 ignition[918]: kargs: kargs passed Sep 4 00:04:01.439442 ignition[918]: Ignition finished successfully Sep 4 00:04:01.465814 ignition[925]: Ignition 2.21.0 Sep 4 00:04:01.465824 ignition[925]: Stage: disks Sep 4 00:04:01.466012 ignition[925]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:01.468608 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 00:04:01.466019 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:01.472439 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 00:04:01.466773 ignition[925]: disks: disks passed Sep 4 00:04:01.474449 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 00:04:01.466804 ignition[925]: Ignition finished successfully Sep 4 00:04:01.478087 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 00:04:01.484111 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 00:04:01.493013 systemd[1]: Reached target basic.target - Basic System. Sep 4 00:04:01.495928 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 00:04:01.569018 systemd-fsck[933]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 4 00:04:01.572481 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 00:04:01.578570 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 00:04:01.680625 systemd-networkd[896]: eth0: Gained IPv6LL Sep 4 00:04:01.822028 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c3518c93-f823-4477-a620-ff9666a59be5 r/w with ordered data mode. Quota mode: none. Sep 4 00:04:01.821363 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 00:04:01.824030 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 00:04:01.845535 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 00:04:01.856578 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 00:04:01.861589 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 00:04:01.865962 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 00:04:01.865994 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 00:04:01.871517 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (942) Sep 4 00:04:01.876053 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 00:04:01.882524 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:04:01.882560 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:04:01.884124 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 4 00:04:01.892226 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 00:04:01.892267 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 4 00:04:01.893577 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 4 00:04:01.895242 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 00:04:02.375244 coreos-metadata[944]: Sep 04 00:04:02.375 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 00:04:02.414059 coreos-metadata[944]: Sep 04 00:04:02.414 INFO Fetch successful Sep 4 00:04:02.415648 coreos-metadata[944]: Sep 04 00:04:02.414 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 00:04:02.425678 coreos-metadata[944]: Sep 04 00:04:02.425 INFO Fetch successful Sep 4 00:04:02.435541 coreos-metadata[944]: Sep 04 00:04:02.435 INFO wrote hostname ci-4372.1.0-n-8c113b52d8 to /sysroot/etc/hostname Sep 4 00:04:02.438290 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 00:04:02.442993 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 00:04:02.480066 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Sep 4 00:04:02.512286 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 00:04:02.532555 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 00:04:03.382065 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 00:04:03.385598 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 00:04:03.392622 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 00:04:03.398867 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 4 00:04:03.402734 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:04:03.426971 ignition[1061]: INFO : Ignition 2.21.0 Sep 4 00:04:03.428391 ignition[1061]: INFO : Stage: mount Sep 4 00:04:03.428391 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:03.428391 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:03.436867 ignition[1061]: INFO : mount: mount passed Sep 4 00:04:03.436867 ignition[1061]: INFO : Ignition finished successfully Sep 4 00:04:03.431833 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 00:04:03.433932 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 00:04:03.439638 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 00:04:03.453573 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 00:04:03.476517 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1072) Sep 4 00:04:03.490144 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:04:03.490171 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:04:03.495175 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 00:04:03.495207 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 4 00:04:03.496608 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 4 00:04:03.498455 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 00:04:03.522321 ignition[1088]: INFO : Ignition 2.21.0 Sep 4 00:04:03.522321 ignition[1088]: INFO : Stage: files Sep 4 00:04:03.525538 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:03.525538 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:03.525538 ignition[1088]: DEBUG : files: compiled without relabeling support, skipping Sep 4 00:04:03.525538 ignition[1088]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 00:04:03.525538 ignition[1088]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 00:04:03.558795 ignition[1088]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 00:04:03.563583 ignition[1088]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 00:04:03.563583 ignition[1088]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 00:04:03.559116 unknown[1088]: wrote ssh authorized keys file for user: core Sep 4 00:04:03.611868 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 00:04:03.615034 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 4 00:04:04.196796 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 00:04:04.439874 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 00:04:04.442908 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 00:04:04.442908 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 00:04:04.442908 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 00:04:04.442908 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 00:04:04.442908 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 00:04:04.442908 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 00:04:04.442908 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 00:04:04.442908 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 00:04:04.468238 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 00:04:04.468238 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 00:04:04.468238 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 00:04:04.468238 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 00:04:04.468238 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 00:04:04.468238 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 4 00:04:04.949945 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 00:04:05.577403 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 00:04:05.577403 ignition[1088]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 00:04:05.594151 ignition[1088]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 00:04:05.599240 ignition[1088]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 00:04:05.599240 ignition[1088]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 00:04:05.603875 ignition[1088]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 00:04:05.603875 ignition[1088]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 00:04:05.603875 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 00:04:05.603875 ignition[1088]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 00:04:05.603875 ignition[1088]: INFO : files: files passed Sep 4 00:04:05.603875 ignition[1088]: INFO : Ignition finished successfully Sep 4 00:04:05.603475 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 00:04:05.618581 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 00:04:05.627116 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 00:04:05.636310 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 00:04:05.636395 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 00:04:05.643146 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:04:05.645838 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:04:05.645838 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:04:05.645370 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 00:04:05.655897 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 00:04:05.660613 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 00:04:05.682847 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 00:04:05.682932 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 00:04:05.686382 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 00:04:05.690630 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 00:04:05.694602 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 00:04:05.696143 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 00:04:05.717676 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:04:05.721837 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 00:04:05.742321 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:04:05.743082 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:04:05.743567 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 00:04:05.743734 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 00:04:05.743862 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:04:05.744682 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 00:04:05.745021 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 00:04:05.745343 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 00:04:05.745963 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 00:04:05.746288 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 00:04:05.746897 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 00:04:05.747223 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 00:04:05.747545 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 00:04:05.748153 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 00:04:05.748762 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 00:04:05.749145 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 00:04:05.749487 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 00:04:05.749614 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 00:04:05.750147 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:04:05.750513 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:04:05.750786 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 00:04:05.752856 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:04:05.771927 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 00:04:05.772060 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 00:04:05.785611 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 00:04:05.785763 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 00:04:05.787670 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 00:04:05.787785 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 00:04:05.788338 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 4 00:04:05.788436 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 00:04:05.797043 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 00:04:05.826026 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 00:04:05.826174 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:04:05.841678 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 00:04:05.845804 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 00:04:05.845963 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:04:05.848720 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 00:04:05.861283 ignition[1143]: INFO : Ignition 2.21.0
Sep 4 00:04:05.861283 ignition[1143]: INFO : Stage: umount
Sep 4 00:04:05.848828 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 00:04:05.872873 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:04:05.872873 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 00:04:05.872873 ignition[1143]: INFO : umount: umount passed
Sep 4 00:04:05.872873 ignition[1143]: INFO : Ignition finished successfully
Sep 4 00:04:05.863342 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 00:04:05.863429 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 00:04:05.868743 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 00:04:05.868833 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 00:04:05.876394 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 00:04:05.876453 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 00:04:05.883775 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 00:04:05.883829 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 00:04:05.886009 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 00:04:05.886050 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 00:04:05.901797 systemd[1]: Stopped target network.target - Network.
Sep 4 00:04:05.902283 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 00:04:05.902331 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 00:04:05.906384 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 00:04:05.907585 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 00:04:05.911906 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:04:05.915671 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 00:04:05.919180 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 00:04:05.927067 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 00:04:05.927145 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:04:05.929370 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 00:04:05.929528 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:04:05.932723 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 00:04:05.935150 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 00:04:05.941602 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 00:04:05.941650 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 00:04:05.945781 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 00:04:05.948369 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 00:04:05.953583 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 00:04:05.959317 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 00:04:05.959414 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 00:04:05.968829 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 00:04:05.969006 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 00:04:05.969088 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 00:04:05.976781 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 00:04:05.977190 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 00:04:05.977834 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 00:04:05.977862 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:04:05.989932 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 00:04:05.993703 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 00:04:05.993761 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 00:04:05.997281 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 00:04:05.997328 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:04:06.003310 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 00:04:06.003355 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:04:06.009389 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 00:04:06.010627 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:04:06.016136 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:04:06.030317 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 00:04:06.030376 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:04:06.030694 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 00:04:06.030812 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:04:06.039644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 00:04:06.039701 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:04:06.042723 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 00:04:06.042757 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:04:06.043171 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 00:04:06.043208 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 00:04:06.043471 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 00:04:06.081584 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2d355f eth0: Data path switched from VF: enP30832s1
Sep 4 00:04:06.081757 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 4 00:04:06.043512 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 00:04:06.043789 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 00:04:06.043818 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 00:04:06.062563 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 00:04:06.071274 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 00:04:06.071330 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:04:06.077139 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 00:04:06.077190 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:04:06.084041 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:04:06.084094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:04:06.090298 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 4 00:04:06.090336 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 00:04:06.090361 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:04:06.090620 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 00:04:06.090687 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 00:04:06.095761 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 00:04:06.095836 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 00:04:06.283942 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 00:04:06.284052 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 00:04:06.286968 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 00:04:06.287073 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 00:04:06.287121 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 00:04:06.288692 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 00:04:06.316531 systemd[1]: Switching root.
Sep 4 00:04:06.385220 systemd-journald[204]: Journal stopped
Sep 4 00:04:14.777783 systemd-journald[204]: Received SIGTERM from PID 1 (systemd).
Sep 4 00:04:14.777823 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 00:04:14.777836 kernel: SELinux: policy capability open_perms=1
Sep 4 00:04:14.777845 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 00:04:14.777854 kernel: SELinux: policy capability always_check_network=0
Sep 4 00:04:14.777862 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 00:04:14.777874 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 00:04:14.777884 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 00:04:14.777893 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 00:04:14.777902 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 00:04:14.777911 kernel: audit: type=1403 audit(1756944249.038:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 00:04:14.777922 systemd[1]: Successfully loaded SELinux policy in 253.576ms.
Sep 4 00:04:14.777934 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.338ms.
Sep 4 00:04:14.777948 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:04:14.777958 systemd[1]: Detected virtualization microsoft.
Sep 4 00:04:14.777968 systemd[1]: Detected architecture x86-64.
Sep 4 00:04:14.777978 systemd[1]: Detected first boot.
Sep 4 00:04:14.777988 systemd[1]: Hostname set to .
Sep 4 00:04:14.778000 systemd[1]: Initializing machine ID from random generator.
Sep 4 00:04:14.778009 zram_generator::config[1187]: No configuration found.
Sep 4 00:04:14.778020 kernel: Guest personality initialized and is inactive
Sep 4 00:04:14.778029 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Sep 4 00:04:14.778038 kernel: Initialized host personality
Sep 4 00:04:14.778047 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 00:04:14.778057 systemd[1]: Populated /etc with preset unit settings.
Sep 4 00:04:14.778069 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 00:04:14.778081 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 00:04:14.778092 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 00:04:14.778101 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 00:04:14.778111 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 00:04:14.778123 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 00:04:14.778134 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 00:04:14.781408 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 00:04:14.781434 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 00:04:14.781445 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 00:04:14.781455 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 00:04:14.781465 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 00:04:14.781476 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:04:14.781487 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:04:14.781497 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 00:04:14.781526 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 00:04:14.781539 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 00:04:14.781549 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:04:14.781560 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 00:04:14.781570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:04:14.781581 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:04:14.781591 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 00:04:14.781601 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 00:04:14.781613 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 00:04:14.781623 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 00:04:14.781634 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:04:14.781644 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 00:04:14.781654 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:04:14.781665 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:04:14.781675 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 00:04:14.781685 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 00:04:14.781698 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 00:04:14.781708 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:04:14.781719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:04:14.781730 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:04:14.781740 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 00:04:14.781752 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 00:04:14.781763 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 00:04:14.781772 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 00:04:14.781783 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:04:14.781793 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 00:04:14.781808 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 00:04:14.781818 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 00:04:14.781830 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 00:04:14.781843 systemd[1]: Reached target machines.target - Containers.
Sep 4 00:04:14.781853 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 00:04:14.781864 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:04:14.781874 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:04:14.781884 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 00:04:14.781895 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:04:14.781905 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 00:04:14.781915 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:04:14.781928 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 00:04:14.781938 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:04:14.781948 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 00:04:14.781958 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 00:04:14.781968 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 00:04:14.781979 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 00:04:14.781989 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 00:04:14.782000 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:04:14.782011 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:04:14.782023 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:04:14.782033 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 00:04:14.782044 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 00:04:14.782054 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 00:04:14.782064 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 00:04:14.782074 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 00:04:14.782085 systemd[1]: Stopped verity-setup.service.
Sep 4 00:04:14.782096 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:04:14.782107 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 00:04:14.782117 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 00:04:14.782127 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 00:04:14.782138 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 00:04:14.782148 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 00:04:14.782158 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 00:04:14.782168 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:04:14.782179 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 00:04:14.782190 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 00:04:14.782201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:04:14.782484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:04:14.782497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:04:14.782519 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:04:14.782529 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 00:04:14.782539 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 00:04:14.782555 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:04:14.782595 systemd-journald[1280]: Collecting audit messages is disabled.
Sep 4 00:04:14.782624 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 00:04:14.782637 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 00:04:14.782648 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 00:04:14.782659 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 00:04:14.782672 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 00:04:14.782685 systemd-journald[1280]: Journal started
Sep 4 00:04:14.782713 systemd-journald[1280]: Runtime Journal (/run/log/journal/96e88f96133c46509084c3f299dd0474) is 8M, max 158.9M, 150.9M free.
Sep 4 00:04:14.094402 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 00:04:14.099135 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 00:04:14.099473 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 00:04:14.788406 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 00:04:14.797522 kernel: loop: module loaded
Sep 4 00:04:14.797565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:04:14.799518 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 00:04:14.805517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 00:04:14.812531 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 00:04:14.819032 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 00:04:14.824007 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 00:04:14.826947 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:04:14.827169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:04:14.834565 kernel: fuse: init (API version 7.41)
Sep 4 00:04:14.834739 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:04:14.836795 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 00:04:14.841942 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 00:04:14.842144 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 00:04:14.857612 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 00:04:14.862648 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 00:04:14.864477 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 00:04:14.870886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:04:14.876643 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 00:04:14.889684 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:04:14.999844 systemd-journald[1280]: Time spent on flushing to /var/log/journal/96e88f96133c46509084c3f299dd0474 is 22.418ms for 986 entries.
Sep 4 00:04:14.999844 systemd-journald[1280]: System Journal (/var/log/journal/96e88f96133c46509084c3f299dd0474) is 8M, max 2.6G, 2.6G free.
Sep 4 00:04:15.162860 systemd-journald[1280]: Received client request to flush runtime journal.
Sep 4 00:04:15.162897 kernel: ACPI: bus type drm_connector registered
Sep 4 00:04:15.162912 kernel: loop0: detected capacity change from 0 to 224512
Sep 4 00:04:15.124463 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 00:04:15.124681 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 00:04:15.133406 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 00:04:15.140675 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 00:04:15.143119 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 00:04:15.151928 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:04:15.157540 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 00:04:15.165675 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 00:04:15.168439 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 00:04:15.176517 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 00:04:15.901538 kernel: loop1: detected capacity change from 0 to 146240
Sep 4 00:04:16.197332 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 00:04:16.199983 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 00:04:16.298568 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Sep 4 00:04:16.298585 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Sep 4 00:04:16.301988 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:04:18.205537 kernel: loop2: detected capacity change from 0 to 113872
Sep 4 00:04:19.184704 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 00:04:19.185413 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 00:04:19.771074 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 00:04:19.774284 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:04:19.808746 systemd-udevd[1350]: Using default interface naming scheme 'v255'.
Sep 4 00:04:20.355440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:04:20.359627 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 00:04:20.401401 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 00:04:20.446812 kernel: loop3: detected capacity change from 0 to 28504
Sep 4 00:04:20.456540 kernel: hv_vmbus: registering driver hyperv_fb
Sep 4 00:04:20.461037 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 4 00:04:20.461091 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 4 00:04:20.462557 kernel: Console: switching to colour dummy device 80x25
Sep 4 00:04:20.467233 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 00:04:20.573969 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 00:04:20.574052 kernel: hv_vmbus: registering driver hv_balloon
Sep 4 00:04:20.583521 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 4 00:04:20.623594 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#207 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 4 00:04:21.012294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:04:21.032184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:04:21.033773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:04:21.041716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:04:21.058716 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 00:04:21.107511 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 00:04:21.505599 systemd-networkd[1356]: lo: Link UP
Sep 4 00:04:21.505608 systemd-networkd[1356]: lo: Gained carrier
Sep 4 00:04:21.507268 systemd-networkd[1356]: Enumeration completed
Sep 4 00:04:21.507419 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 00:04:21.516237 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Sep 4 00:04:21.512635 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 00:04:21.518621 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 00:04:21.524101 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:04:21.524109 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 00:04:21.527524 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 4 00:04:21.532543 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 4 00:04:21.535274 systemd-networkd[1356]: enP30832s1: Link UP Sep 4 00:04:21.535356 systemd-networkd[1356]: eth0: Link UP Sep 4 00:04:21.535359 systemd-networkd[1356]: eth0: Gained carrier Sep 4 00:04:21.535376 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:04:21.535561 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2d355f eth0: Data path switched to VF: enP30832s1 Sep 4 00:04:21.544704 systemd-networkd[1356]: enP30832s1: Gained carrier Sep 4 00:04:21.554530 systemd-networkd[1356]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 00:04:21.792788 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 00:04:21.998260 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 4 00:04:22.001158 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 00:04:22.202725 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 4 00:04:22.915521 kernel: loop4: detected capacity change from 0 to 224512 Sep 4 00:04:23.184634 systemd-networkd[1356]: eth0: Gained IPv6LL Sep 4 00:04:23.187929 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 00:04:23.301199 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:04:23.726579 kernel: loop5: detected capacity change from 0 to 146240 Sep 4 00:04:23.737984 kernel: loop6: detected capacity change from 0 to 113872 Sep 4 00:04:23.747519 kernel: loop7: detected capacity change from 0 to 28504 Sep 4 00:04:23.790755 (sd-merge)[1447]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 4 00:04:23.791145 (sd-merge)[1447]: Merged extensions into '/usr'. Sep 4 00:04:23.836172 systemd[1]: Reload requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 00:04:23.836187 systemd[1]: Reloading... Sep 4 00:04:23.882536 zram_generator::config[1478]: No configuration found. Sep 4 00:04:24.188149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:04:24.282762 systemd[1]: Reloading finished in 446 ms. Sep 4 00:04:24.314464 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 00:04:24.324305 systemd[1]: Starting ensure-sysext.service... Sep 4 00:04:24.327568 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 00:04:24.343913 systemd[1]: Reload requested from client PID 1537 ('systemctl') (unit ensure-sysext.service)... Sep 4 00:04:24.343932 systemd[1]: Reloading... Sep 4 00:04:24.350646 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Sep 4 00:04:24.350908 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 4 00:04:24.351175 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 00:04:24.351405 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 00:04:24.351938 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 00:04:24.352133 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Sep 4 00:04:24.352181 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Sep 4 00:04:24.355152 systemd-tmpfiles[1538]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 00:04:24.355162 systemd-tmpfiles[1538]: Skipping /boot Sep 4 00:04:24.362211 systemd-tmpfiles[1538]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 00:04:24.362220 systemd-tmpfiles[1538]: Skipping /boot Sep 4 00:04:24.399606 zram_generator::config[1568]: No configuration found. Sep 4 00:04:24.492191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:04:24.584671 systemd[1]: Reloading finished in 240 ms. Sep 4 00:04:24.615122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:04:24.622842 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 00:04:24.632918 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 00:04:24.637828 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 00:04:24.642749 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 4 00:04:24.646732 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 00:04:24.654252 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:04:24.654463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:04:24.661599 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:04:24.666746 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:04:24.677606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:04:24.679743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:04:24.679871 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:04:24.679965 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:04:24.684190 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 00:04:24.684733 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 00:04:24.687371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:04:24.687716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:04:24.690422 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 00:04:24.690684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 00:04:24.700230 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 4 00:04:24.709070 systemd[1]: Finished ensure-sysext.service. Sep 4 00:04:24.713176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:04:24.713376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:04:24.716619 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:04:24.720115 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 00:04:24.725677 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:04:24.732450 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:04:24.734184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:04:24.734286 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:04:24.734390 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 00:04:24.737626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:04:24.738017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 00:04:24.738321 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 00:04:24.740308 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 00:04:24.743806 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 00:04:24.747032 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 00:04:24.747425 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 4 00:04:24.749302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:04:24.749449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:04:24.754286 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 00:04:24.754365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 00:04:24.788369 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 00:04:24.805255 systemd-resolved[1631]: Positive Trust Anchors: Sep 4 00:04:24.805267 systemd-resolved[1631]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 00:04:24.805299 systemd-resolved[1631]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 00:04:24.822035 systemd-resolved[1631]: Using system hostname 'ci-4372.1.0-n-8c113b52d8'. Sep 4 00:04:24.823400 augenrules[1670]: No rules Sep 4 00:04:24.823973 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 00:04:24.824187 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 00:04:24.827748 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 00:04:24.829968 systemd[1]: Reached target network.target - Network. Sep 4 00:04:24.832580 systemd[1]: Reached target network-online.target - Network is Online. 
Sep 4 00:04:24.836592 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:04:25.060723 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 00:04:25.062817 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 00:04:27.001244 ldconfig[1301]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 00:04:27.009659 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 00:04:27.012436 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 00:04:27.031412 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 00:04:27.033222 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 00:04:27.036730 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 00:04:27.039598 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 00:04:27.041454 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 4 00:04:27.044670 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 00:04:27.047595 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 00:04:27.050555 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 00:04:27.053565 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 00:04:27.053598 systemd[1]: Reached target paths.target - Path Units. Sep 4 00:04:27.056544 systemd[1]: Reached target timers.target - Timer Units. 
Sep 4 00:04:27.060466 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 00:04:27.063533 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 00:04:27.067557 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 00:04:27.069375 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 00:04:27.073566 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 00:04:27.090985 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 00:04:27.093879 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 00:04:27.095902 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 00:04:27.099269 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 00:04:27.100600 systemd[1]: Reached target basic.target - Basic System. Sep 4 00:04:27.103586 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 00:04:27.103611 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 00:04:27.105458 systemd[1]: Starting chronyd.service - NTP client/server... Sep 4 00:04:27.107337 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 00:04:27.112867 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 00:04:27.120690 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 00:04:27.124827 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 00:04:27.128329 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 00:04:27.131757 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 4 00:04:27.133816 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 00:04:27.135863 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 4 00:04:27.142403 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Sep 4 00:04:27.143346 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 4 00:04:27.149034 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 4 00:04:27.151853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:04:27.154942 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 00:04:27.158987 jq[1690]: false Sep 4 00:04:27.162632 KVP[1693]: KVP starting; pid is:1693 Sep 4 00:04:27.162935 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 00:04:27.173367 kernel: hv_utils: KVP IC version 4.0 Sep 4 00:04:27.170786 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 00:04:27.177543 KVP[1693]: KVP LIC Version: 3.1 Sep 4 00:04:27.177808 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 00:04:27.183821 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 00:04:27.190923 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 00:04:27.195578 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Refreshing passwd entry cache Sep 4 00:04:27.195456 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 4 00:04:27.194111 oslogin_cache_refresh[1692]: Refreshing passwd entry cache Sep 4 00:04:27.196678 extend-filesystems[1691]: Found /dev/nvme0n1p6 Sep 4 00:04:27.196886 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 00:04:27.198815 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 00:04:27.203931 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 00:04:27.215950 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 00:04:27.218623 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 00:04:27.218816 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 00:04:27.224640 extend-filesystems[1691]: Found /dev/nvme0n1p9 Sep 4 00:04:27.232672 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Failure getting users, quitting Sep 4 00:04:27.232672 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 00:04:27.232672 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Refreshing group entry cache Sep 4 00:04:27.229666 oslogin_cache_refresh[1692]: Failure getting users, quitting Sep 4 00:04:27.229684 oslogin_cache_refresh[1692]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 00:04:27.229721 oslogin_cache_refresh[1692]: Refreshing group entry cache Sep 4 00:04:27.239419 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 00:04:27.239677 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 00:04:27.246773 extend-filesystems[1691]: Checking size of /dev/nvme0n1p9 Sep 4 00:04:27.256074 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 4 00:04:27.256710 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 00:04:27.260345 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Failure getting groups, quitting Sep 4 00:04:27.260345 google_oslogin_nss_cache[1692]: oslogin_cache_refresh[1692]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 00:04:27.260338 oslogin_cache_refresh[1692]: Failure getting groups, quitting Sep 4 00:04:27.260348 oslogin_cache_refresh[1692]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 00:04:27.260686 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 00:04:27.265206 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 4 00:04:27.266157 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 4 00:04:27.272568 (chronyd)[1682]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 4 00:04:27.278111 jq[1709]: true Sep 4 00:04:27.282815 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 00:04:27.283828 chronyd[1740]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 4 00:04:27.288102 chronyd[1740]: Timezone right/UTC failed leap second check, ignoring Sep 4 00:04:27.289347 systemd[1]: Started chronyd.service - NTP client/server. Sep 4 00:04:27.288250 chronyd[1740]: Loaded seccomp filter (level 2) Sep 4 00:04:27.295805 extend-filesystems[1691]: Old size kept for /dev/nvme0n1p9 Sep 4 00:04:27.299257 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 00:04:27.302676 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 4 00:04:27.314290 jq[1741]: true Sep 4 00:04:27.333684 tar[1719]: linux-amd64/LICENSE Sep 4 00:04:27.335548 tar[1719]: linux-amd64/helm Sep 4 00:04:27.338386 update_engine[1708]: I20250904 00:04:27.338312 1708 main.cc:92] Flatcar Update Engine starting Sep 4 00:04:27.353417 dbus-daemon[1685]: [system] SELinux support is enabled Sep 4 00:04:27.353630 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 00:04:27.360897 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 00:04:27.360924 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 00:04:27.364806 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 00:04:27.364834 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 00:04:27.377368 systemd[1]: Started update-engine.service - Update Engine. Sep 4 00:04:27.383675 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 00:04:27.389570 update_engine[1708]: I20250904 00:04:27.388150 1708 update_check_scheduler.cc:74] Next update check in 9m57s Sep 4 00:04:27.394850 systemd-logind[1706]: New seat seat0. Sep 4 00:04:27.402121 systemd-logind[1706]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 00:04:27.402251 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 00:04:27.454386 bash[1769]: Updated "/home/core/.ssh/authorized_keys" Sep 4 00:04:27.454719 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 00:04:27.458232 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 4 00:04:27.530447 coreos-metadata[1684]: Sep 04 00:04:27.529 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 00:04:27.535050 coreos-metadata[1684]: Sep 04 00:04:27.534 INFO Fetch successful Sep 4 00:04:27.535050 coreos-metadata[1684]: Sep 04 00:04:27.534 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 4 00:04:27.538768 coreos-metadata[1684]: Sep 04 00:04:27.538 INFO Fetch successful Sep 4 00:04:27.540926 coreos-metadata[1684]: Sep 04 00:04:27.540 INFO Fetching http://168.63.129.16/machine/d11ba6ac-b37d-4e31-8bcf-e2ef41b6b267/17d2691e%2Da14b%2D4ddf%2D869c%2D5185fdd7a88e.%5Fci%2D4372.1.0%2Dn%2D8c113b52d8?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 4 00:04:27.542083 coreos-metadata[1684]: Sep 04 00:04:27.542 INFO Fetch successful Sep 4 00:04:27.542932 coreos-metadata[1684]: Sep 04 00:04:27.542 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 4 00:04:27.551706 coreos-metadata[1684]: Sep 04 00:04:27.551 INFO Fetch successful Sep 4 00:04:27.663930 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 00:04:27.667297 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 00:04:27.780869 locksmithd[1759]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 00:04:27.892317 sshd_keygen[1742]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 00:04:27.925825 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 00:04:27.930684 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 00:04:27.936628 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 4 00:04:27.963896 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 00:04:27.964092 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Sep 4 00:04:27.971428 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 00:04:28.001884 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 4 00:04:28.026637 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 00:04:28.031305 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 00:04:28.039864 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 00:04:28.044687 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 00:04:28.278765 containerd[1726]: time="2025-09-04T00:04:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 4 00:04:28.280177 containerd[1726]: time="2025-09-04T00:04:28.280126040Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 4 00:04:28.292117 containerd[1726]: time="2025-09-04T00:04:28.292043510Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.54µs"
Sep 4 00:04:28.292213 containerd[1726]: time="2025-09-04T00:04:28.292199354Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 00:04:28.292258 containerd[1726]: time="2025-09-04T00:04:28.292249691Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 00:04:28.292411 containerd[1726]: time="2025-09-04T00:04:28.292401716Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 00:04:28.292455 containerd[1726]: time="2025-09-04T00:04:28.292447544Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 00:04:28.292524 containerd[1726]: time="2025-09-04T00:04:28.292515954Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 00:04:28.292610 containerd[1726]: time="2025-09-04T00:04:28.292598688Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 00:04:28.292644 containerd[1726]: time="2025-09-04T00:04:28.292636597Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 00:04:28.292889 containerd[1726]: time="2025-09-04T00:04:28.292873400Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 00:04:28.292931 containerd[1726]: time="2025-09-04T00:04:28.292922748Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 00:04:28.292980 containerd[1726]: time="2025-09-04T00:04:28.292969130Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 00:04:28.293024 containerd[1726]: time="2025-09-04T00:04:28.293016395Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 00:04:28.293114 containerd[1726]: time="2025-09-04T00:04:28.293105213Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 00:04:28.293308 containerd[1726]: time="2025-09-04T00:04:28.293299411Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 00:04:28.293367 containerd[1726]: time="2025-09-04T00:04:28.293356485Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 00:04:28.293403 containerd[1726]: time="2025-09-04T00:04:28.293396439Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 00:04:28.293455 containerd[1726]: time="2025-09-04T00:04:28.293448348Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 4 00:04:28.293754 containerd[1726]: time="2025-09-04T00:04:28.293739543Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 00:04:28.293861 containerd[1726]: time="2025-09-04T00:04:28.293852460Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 00:04:28.306311 containerd[1726]: time="2025-09-04T00:04:28.306277933Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 4 00:04:28.306393 containerd[1726]: time="2025-09-04T00:04:28.306328954Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 4 00:04:28.306393 containerd[1726]: time="2025-09-04T00:04:28.306345074Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 4 00:04:28.306393 containerd[1726]: time="2025-09-04T00:04:28.306357918Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 4 00:04:28.306393 containerd[1726]: time="2025-09-04T00:04:28.306371785Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 4 00:04:28.306393 containerd[1726]: time="2025-09-04T00:04:28.306382483Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 00:04:28.306495 containerd[1726]: time="2025-09-04T00:04:28.306397214Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 4 00:04:28.306495 containerd[1726]: time="2025-09-04T00:04:28.306425233Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 4 00:04:28.306495 containerd[1726]: time="2025-09-04T00:04:28.306438610Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 4 00:04:28.306495 containerd[1726]: time="2025-09-04T00:04:28.306448987Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 4 00:04:28.306495 containerd[1726]: time="2025-09-04T00:04:28.306459752Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 4 00:04:28.308046 containerd[1726]: time="2025-09-04T00:04:28.307862408Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 4 00:04:28.308046 containerd[1726]: time="2025-09-04T00:04:28.308007208Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 4 00:04:28.308046 containerd[1726]: time="2025-09-04T00:04:28.308028197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 4 00:04:28.308046 containerd[1726]: time="2025-09-04T00:04:28.308044992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 4 00:04:28.308168 containerd[1726]: time="2025-09-04T00:04:28.308055759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 4 00:04:28.308168 containerd[1726]: time="2025-09-04T00:04:28.308096620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 00:04:28.308168 containerd[1726]: time="2025-09-04T00:04:28.308108977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 4 00:04:28.308168 containerd[1726]: time="2025-09-04T00:04:28.308137963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 4 00:04:28.308168 containerd[1726]: time="2025-09-04T00:04:28.308163039Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 4 00:04:28.308264 containerd[1726]: time="2025-09-04T00:04:28.308176541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 4 00:04:28.308264 containerd[1726]: time="2025-09-04T00:04:28.308188462Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 4 00:04:28.308264 containerd[1726]: time="2025-09-04T00:04:28.308201190Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 00:04:28.308324 containerd[1726]: time="2025-09-04T00:04:28.308268762Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 00:04:28.308324 containerd[1726]: time="2025-09-04T00:04:28.308282155Z" level=info msg="Start snapshots syncer"
Sep 4 00:04:28.308324 containerd[1726]: time="2025-09-04T00:04:28.308302320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 00:04:28.310125 containerd[1726]: time="2025-09-04T00:04:28.309877946Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 00:04:28.310125 containerd[1726]: time="2025-09-04T00:04:28.309945793Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 00:04:28.310294 containerd[1726]: time="2025-09-04T00:04:28.310033167Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 4 00:04:28.310294 containerd[1726]: time="2025-09-04T00:04:28.310158857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 4 00:04:28.310294 containerd[1726]: time="2025-09-04T00:04:28.310178471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 4 00:04:28.310294 containerd[1726]: time="2025-09-04T00:04:28.310191010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 4 00:04:28.310294 containerd[1726]: time="2025-09-04T00:04:28.310201144Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 4 00:04:28.310294 containerd[1726]: time="2025-09-04T00:04:28.310224739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 4 00:04:28.310294 containerd[1726]: time="2025-09-04T00:04:28.310236154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 4 00:04:28.310294 containerd[1726]: time="2025-09-04T00:04:28.310246522Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 4 00:04:28.310294 containerd[1726]: time="2025-09-04T00:04:28.310270080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 4 00:04:28.310294 containerd[1726]: time="2025-09-04T00:04:28.310281421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310302157Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310339412Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310354109Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310401166Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310411176Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310420192Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310435388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310456828Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310473389Z" level=info msg="runtime interface created" Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310478565Z" level=info msg="created NRI interface" Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310487217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310519861Z" level=info msg="Connect containerd service" Sep 4 00:04:28.310780 containerd[1726]: time="2025-09-04T00:04:28.310545685Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 00:04:28.312891 containerd[1726]: 
time="2025-09-04T00:04:28.312758940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 00:04:28.340583 tar[1719]: linux-amd64/README.md Sep 4 00:04:28.354363 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 00:04:28.687660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:04:28.692588 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:04:29.208197 kubelet[1845]: E0904 00:04:29.208143 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:04:29.210240 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:04:29.210382 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:04:29.210774 systemd[1]: kubelet.service: Consumed 959ms CPU time, 263.1M memory peak. 
Sep 4 00:04:29.361381 containerd[1726]: time="2025-09-04T00:04:29.361296823Z" level=info msg="Start subscribing containerd event" Sep 4 00:04:29.361381 containerd[1726]: time="2025-09-04T00:04:29.361363183Z" level=info msg="Start recovering state" Sep 4 00:04:29.361764 containerd[1726]: time="2025-09-04T00:04:29.361466891Z" level=info msg="Start event monitor" Sep 4 00:04:29.361764 containerd[1726]: time="2025-09-04T00:04:29.361480325Z" level=info msg="Start cni network conf syncer for default" Sep 4 00:04:29.361764 containerd[1726]: time="2025-09-04T00:04:29.361491107Z" level=info msg="Start streaming server" Sep 4 00:04:29.361764 containerd[1726]: time="2025-09-04T00:04:29.361518387Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 00:04:29.361764 containerd[1726]: time="2025-09-04T00:04:29.361526768Z" level=info msg="runtime interface starting up..." Sep 4 00:04:29.361764 containerd[1726]: time="2025-09-04T00:04:29.361533158Z" level=info msg="starting plugins..." Sep 4 00:04:29.361764 containerd[1726]: time="2025-09-04T00:04:29.361545639Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 4 00:04:29.363524 containerd[1726]: time="2025-09-04T00:04:29.361984024Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 00:04:29.363524 containerd[1726]: time="2025-09-04T00:04:29.362032506Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 00:04:29.363524 containerd[1726]: time="2025-09-04T00:04:29.362091581Z" level=info msg="containerd successfully booted in 1.084256s" Sep 4 00:04:29.362216 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 00:04:29.365941 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 00:04:29.368374 systemd[1]: Startup finished in 3.380s (kernel) + 12.753s (initrd) + 20.580s (userspace) = 36.714s. 
Sep 4 00:04:29.810674 login[1827]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 4 00:04:29.812136 login[1828]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 00:04:29.817268 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 00:04:29.819124 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 00:04:29.829015 systemd-logind[1706]: New session 1 of user core. Sep 4 00:04:30.002854 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 00:04:30.005356 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 00:04:30.015333 (systemd)[1863]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 00:04:30.017257 systemd-logind[1706]: New session c1 of user core. Sep 4 00:04:30.272616 systemd[1863]: Queued start job for default target default.target. Sep 4 00:04:30.281542 systemd[1863]: Created slice app.slice - User Application Slice. Sep 4 00:04:30.281576 systemd[1863]: Reached target paths.target - Paths. Sep 4 00:04:30.281613 systemd[1863]: Reached target timers.target - Timers. Sep 4 00:04:30.282901 systemd[1863]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 00:04:30.291465 systemd[1863]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 00:04:30.291644 systemd[1863]: Reached target sockets.target - Sockets. Sep 4 00:04:30.291738 systemd[1863]: Reached target basic.target - Basic System. Sep 4 00:04:30.291809 systemd[1863]: Reached target default.target - Main User Target. Sep 4 00:04:30.291845 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 00:04:30.292042 systemd[1863]: Startup finished in 269ms. Sep 4 00:04:30.296278 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 4 00:04:30.392384 waagent[1825]: 2025-09-04T00:04:30.392316Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 4 00:04:30.392985 waagent[1825]: 2025-09-04T00:04:30.392938Z INFO Daemon Daemon OS: flatcar 4372.1.0 Sep 4 00:04:30.394521 waagent[1825]: 2025-09-04T00:04:30.394064Z INFO Daemon Daemon Python: 3.11.12 Sep 4 00:04:30.395667 waagent[1825]: 2025-09-04T00:04:30.395614Z INFO Daemon Daemon Run daemon Sep 4 00:04:30.396807 waagent[1825]: 2025-09-04T00:04:30.396769Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.1.0' Sep 4 00:04:30.397272 waagent[1825]: 2025-09-04T00:04:30.397025Z INFO Daemon Daemon Using waagent for provisioning Sep 4 00:04:30.399703 waagent[1825]: 2025-09-04T00:04:30.399668Z INFO Daemon Daemon Activate resource disk Sep 4 00:04:30.400672 waagent[1825]: 2025-09-04T00:04:30.400642Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 4 00:04:30.403379 waagent[1825]: 2025-09-04T00:04:30.403341Z INFO Daemon Daemon Found device: None Sep 4 00:04:30.403656 waagent[1825]: 2025-09-04T00:04:30.403626Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 4 00:04:30.405665 waagent[1825]: 2025-09-04T00:04:30.405636Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 4 00:04:30.408600 waagent[1825]: 2025-09-04T00:04:30.408555Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 00:04:30.409787 waagent[1825]: 2025-09-04T00:04:30.409757Z INFO Daemon Daemon Running default provisioning handler Sep 4 00:04:30.416445 waagent[1825]: 2025-09-04T00:04:30.416015Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 4 00:04:30.416882 waagent[1825]: 2025-09-04T00:04:30.416849Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 4 00:04:30.417053 waagent[1825]: 2025-09-04T00:04:30.417031Z INFO Daemon Daemon cloud-init is enabled: False Sep 4 00:04:30.417370 waagent[1825]: 2025-09-04T00:04:30.417352Z INFO Daemon Daemon Copying ovf-env.xml Sep 4 00:04:30.603815 waagent[1825]: 2025-09-04T00:04:30.603698Z INFO Daemon Daemon Successfully mounted dvd Sep 4 00:04:30.616787 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 4 00:04:30.619114 waagent[1825]: 2025-09-04T00:04:30.619065Z INFO Daemon Daemon Detect protocol endpoint Sep 4 00:04:30.619494 waagent[1825]: 2025-09-04T00:04:30.619410Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 00:04:30.621879 waagent[1825]: 2025-09-04T00:04:30.621802Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 4 00:04:30.623477 waagent[1825]: 2025-09-04T00:04:30.623446Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 4 00:04:30.624834 waagent[1825]: 2025-09-04T00:04:30.624803Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 4 00:04:30.626095 waagent[1825]: 2025-09-04T00:04:30.626025Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 4 00:04:30.636884 waagent[1825]: 2025-09-04T00:04:30.636850Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 4 00:04:30.640520 waagent[1825]: 2025-09-04T00:04:30.637593Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 4 00:04:30.640520 waagent[1825]: 2025-09-04T00:04:30.637761Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 4 00:04:30.811007 login[1827]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 00:04:30.815657 systemd-logind[1706]: New session 2 of user core. Sep 4 00:04:30.821635 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 4 00:04:30.838574 waagent[1825]: 2025-09-04T00:04:30.838470Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 4 00:04:30.839432 waagent[1825]: 2025-09-04T00:04:30.839384Z INFO Daemon Daemon Forcing an update of the goal state. Sep 4 00:04:30.844828 waagent[1825]: 2025-09-04T00:04:30.844779Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 00:04:30.855834 waagent[1825]: 2025-09-04T00:04:30.855765Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 4 00:04:30.862519 waagent[1825]: 2025-09-04T00:04:30.856703Z INFO Daemon Sep 4 00:04:30.862519 waagent[1825]: 2025-09-04T00:04:30.856789Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 175213d4-38de-4cb2-9e09-fd56a591aa24 eTag: 6628241365255284782 source: Fabric] Sep 4 00:04:30.862519 waagent[1825]: 2025-09-04T00:04:30.857052Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 4 00:04:30.862519 waagent[1825]: 2025-09-04T00:04:30.857331Z INFO Daemon Sep 4 00:04:30.862519 waagent[1825]: 2025-09-04T00:04:30.857573Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 4 00:04:30.866514 waagent[1825]: 2025-09-04T00:04:30.866481Z INFO Daemon Daemon Downloading artifacts profile blob Sep 4 00:04:30.943183 waagent[1825]: 2025-09-04T00:04:30.943136Z INFO Daemon Downloaded certificate {'thumbprint': '1DF12E59416B22503227ADEA90A110B832065034', 'hasPrivateKey': True} Sep 4 00:04:30.946089 waagent[1825]: 2025-09-04T00:04:30.946057Z INFO Daemon Fetch goal state completed Sep 4 00:04:30.952485 waagent[1825]: 2025-09-04T00:04:30.952440Z INFO Daemon Daemon Starting provisioning Sep 4 00:04:30.952989 waagent[1825]: 2025-09-04T00:04:30.952794Z INFO Daemon Daemon Handle ovf-env.xml. 
Sep 4 00:04:30.952989 waagent[1825]: 2025-09-04T00:04:30.952990Z INFO Daemon Daemon Set hostname [ci-4372.1.0-n-8c113b52d8] Sep 4 00:04:30.956841 waagent[1825]: 2025-09-04T00:04:30.956801Z INFO Daemon Daemon Publish hostname [ci-4372.1.0-n-8c113b52d8] Sep 4 00:04:30.958216 waagent[1825]: 2025-09-04T00:04:30.957243Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 4 00:04:30.958216 waagent[1825]: 2025-09-04T00:04:30.957443Z INFO Daemon Daemon Primary interface is [eth0] Sep 4 00:04:30.965120 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:04:30.965345 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 00:04:30.965370 systemd-networkd[1356]: eth0: DHCP lease lost Sep 4 00:04:30.966017 waagent[1825]: 2025-09-04T00:04:30.965968Z INFO Daemon Daemon Create user account if not exists Sep 4 00:04:30.970025 waagent[1825]: 2025-09-04T00:04:30.966768Z INFO Daemon Daemon User core already exists, skip useradd Sep 4 00:04:30.970025 waagent[1825]: 2025-09-04T00:04:30.966962Z INFO Daemon Daemon Configure sudoer Sep 4 00:04:30.992584 systemd-networkd[1356]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 00:04:31.239771 waagent[1825]: 2025-09-04T00:04:31.239691Z INFO Daemon Daemon Configure sshd Sep 4 00:04:31.243346 waagent[1825]: 2025-09-04T00:04:31.243302Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 4 00:04:31.247695 waagent[1825]: 2025-09-04T00:04:31.243810Z INFO Daemon Daemon Deploy ssh public key. 
Sep 4 00:04:32.308619 waagent[1825]: 2025-09-04T00:04:32.308563Z INFO Daemon Daemon Provisioning complete Sep 4 00:04:32.333108 waagent[1825]: 2025-09-04T00:04:32.333075Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 4 00:04:32.335405 waagent[1825]: 2025-09-04T00:04:32.333533Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 4 00:04:32.335405 waagent[1825]: 2025-09-04T00:04:32.333715Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 4 00:04:32.438425 waagent[1912]: 2025-09-04T00:04:32.438348Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 4 00:04:32.438758 waagent[1912]: 2025-09-04T00:04:32.438455Z INFO ExtHandler ExtHandler OS: flatcar 4372.1.0 Sep 4 00:04:32.438758 waagent[1912]: 2025-09-04T00:04:32.438495Z INFO ExtHandler ExtHandler Python: 3.11.12 Sep 4 00:04:32.438758 waagent[1912]: 2025-09-04T00:04:32.438558Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Sep 4 00:04:32.570345 waagent[1912]: 2025-09-04T00:04:32.570225Z INFO ExtHandler ExtHandler Distro: flatcar-4372.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 4 00:04:32.570442 waagent[1912]: 2025-09-04T00:04:32.570414Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 00:04:32.570488 waagent[1912]: 2025-09-04T00:04:32.570465Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 00:04:32.576906 waagent[1912]: 2025-09-04T00:04:32.576844Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 00:04:32.586834 waagent[1912]: 2025-09-04T00:04:32.586806Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 4 00:04:32.587183 waagent[1912]: 2025-09-04T00:04:32.587152Z INFO ExtHandler Sep 4 00:04:32.587229 waagent[1912]: 2025-09-04T00:04:32.587208Z INFO 
ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6963e557-c91d-4df8-bb5f-a90615856def eTag: 6628241365255284782 source: Fabric] Sep 4 00:04:32.587435 waagent[1912]: 2025-09-04T00:04:32.587411Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 4 00:04:32.587795 waagent[1912]: 2025-09-04T00:04:32.587769Z INFO ExtHandler Sep 4 00:04:32.587844 waagent[1912]: 2025-09-04T00:04:32.587811Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 4 00:04:32.591480 waagent[1912]: 2025-09-04T00:04:32.591455Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 4 00:04:32.667519 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 4 00:04:32.676162 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#124 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Sep 4 00:04:32.694085 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#216 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Sep 4 00:04:32.700515 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#219 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Sep 4 00:04:32.708533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#221 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Sep 4 00:04:33.990116 waagent[1912]: 2025-09-04T00:04:33.988575Z INFO ExtHandler Downloaded certificate {'thumbprint': '1DF12E59416B22503227ADEA90A110B832065034', 'hasPrivateKey': True} Sep 4 00:04:33.990116 waagent[1912]: 2025-09-04T00:04:33.989232Z INFO ExtHandler Fetch goal state completed Sep 4 00:04:34.019334 waagent[1912]: 2025-09-04T00:04:34.019274Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Sep 4 00:04:34.023855 waagent[1912]: 2025-09-04T00:04:34.023808Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1912 Sep 4 00:04:34.023978 waagent[1912]: 2025-09-04T00:04:34.023940Z INFO ExtHandler 
ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 4 00:04:34.024222 waagent[1912]: 2025-09-04T00:04:34.024198Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 4 00:04:34.025277 waagent[1912]: 2025-09-04T00:04:34.025245Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 4 00:04:34.025614 waagent[1912]: 2025-09-04T00:04:34.025588Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 4 00:04:34.025736 waagent[1912]: 2025-09-04T00:04:34.025716Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 4 00:04:34.026117 waagent[1912]: 2025-09-04T00:04:34.026094Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 4 00:04:34.999489 waagent[1912]: 2025-09-04T00:04:34.999450Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 4 00:04:34.999836 waagent[1912]: 2025-09-04T00:04:34.999649Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 4 00:04:35.005127 waagent[1912]: 2025-09-04T00:04:35.005040Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 4 00:04:35.010662 systemd[1]: Reload requested from client PID 1933 ('systemctl') (unit waagent.service)... Sep 4 00:04:35.010682 systemd[1]: Reloading... Sep 4 00:04:35.082529 zram_generator::config[1971]: No configuration found. Sep 4 00:04:35.166619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 4 00:04:35.261482 systemd[1]: Reloading finished in 250 ms. Sep 4 00:04:35.285056 waagent[1912]: 2025-09-04T00:04:35.284229Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 4 00:04:35.285056 waagent[1912]: 2025-09-04T00:04:35.284377Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 4 00:04:35.950221 waagent[1912]: 2025-09-04T00:04:35.950148Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 4 00:04:35.950490 waagent[1912]: 2025-09-04T00:04:35.950463Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 4 00:04:35.951191 waagent[1912]: 2025-09-04T00:04:35.951157Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 4 00:04:35.951675 waagent[1912]: 2025-09-04T00:04:35.951565Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 4 00:04:35.951675 waagent[1912]: 2025-09-04T00:04:35.951619Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 00:04:35.951750 waagent[1912]: 2025-09-04T00:04:35.951692Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 00:04:35.951774 waagent[1912]: 2025-09-04T00:04:35.951748Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 00:04:35.951922 waagent[1912]: 2025-09-04T00:04:35.951901Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 4 00:04:35.952066 waagent[1912]: 2025-09-04T00:04:35.952042Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 4 00:04:35.952066 waagent[1912]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 4 00:04:35.952066 waagent[1912]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 4 00:04:35.952066 waagent[1912]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 4 00:04:35.952066 waagent[1912]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 4 00:04:35.952066 waagent[1912]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 00:04:35.952066 waagent[1912]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 00:04:35.952549 waagent[1912]: 2025-09-04T00:04:35.952491Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 4 00:04:35.952655 waagent[1912]: 2025-09-04T00:04:35.952621Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 00:04:35.952690 waagent[1912]: 2025-09-04T00:04:35.952662Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 4 00:04:35.952964 waagent[1912]: 2025-09-04T00:04:35.952928Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 4 00:04:35.953131 waagent[1912]: 2025-09-04T00:04:35.953111Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 4 00:04:35.953350 waagent[1912]: 2025-09-04T00:04:35.953328Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 4 00:04:35.953522 waagent[1912]: 2025-09-04T00:04:35.953470Z INFO EnvHandler ExtHandler Configure routes Sep 4 00:04:35.954425 waagent[1912]: 2025-09-04T00:04:35.954403Z INFO EnvHandler ExtHandler Gateway:None Sep 4 00:04:35.954882 waagent[1912]: 2025-09-04T00:04:35.954852Z INFO EnvHandler ExtHandler Routes:None Sep 4 00:04:35.965549 waagent[1912]: 2025-09-04T00:04:35.965173Z INFO ExtHandler ExtHandler Sep 4 00:04:35.965549 waagent[1912]: 2025-09-04T00:04:35.965224Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3ea89a64-4a4e-481f-96d4-6564e0c35dd0 correlation 2e5d4d08-60d3-40c7-aa8d-613f1ae99889 created: 2025-09-04T00:03:27.148239Z] Sep 4 00:04:35.965549 waagent[1912]: 2025-09-04T00:04:35.965437Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 4 00:04:35.965886 waagent[1912]: 2025-09-04T00:04:35.965857Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Sep 4 00:04:36.040433 waagent[1912]: 2025-09-04T00:04:36.039978Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 4 00:04:36.040433 waagent[1912]: Try `iptables -h' or 'iptables --help' for more information.) 
Sep 4 00:04:36.040433 waagent[1912]: 2025-09-04T00:04:36.040351Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9724E18F-9B82-4DB6-9703-F32656D3483C;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 4 00:04:36.103357 waagent[1912]: 2025-09-04T00:04:36.103308Z INFO MonitorHandler ExtHandler Network interfaces: Sep 4 00:04:36.103357 waagent[1912]: Executing ['ip', '-a', '-o', 'link']: Sep 4 00:04:36.103357 waagent[1912]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 4 00:04:36.103357 waagent[1912]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:35:5f brd ff:ff:ff:ff:ff:ff\ alias Network Device Sep 4 00:04:36.103357 waagent[1912]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:35:5f brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Sep 4 00:04:36.103357 waagent[1912]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 4 00:04:36.103357 waagent[1912]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 4 00:04:36.103357 waagent[1912]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 4 00:04:36.103357 waagent[1912]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 4 00:04:36.103357 waagent[1912]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 4 00:04:36.103357 waagent[1912]: 2: eth0 inet6 fe80::7eed:8dff:fe2d:355f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 00:04:36.165442 waagent[1912]: 2025-09-04T00:04:36.165390Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 4 00:04:36.165442 waagent[1912]: Chain INPUT (policy ACCEPT 0 packets, 0 
bytes) Sep 4 00:04:36.165442 waagent[1912]: pkts bytes target prot opt in out source destination Sep 4 00:04:36.165442 waagent[1912]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 00:04:36.165442 waagent[1912]: pkts bytes target prot opt in out source destination Sep 4 00:04:36.165442 waagent[1912]: Chain OUTPUT (policy ACCEPT 3 packets, 364 bytes) Sep 4 00:04:36.165442 waagent[1912]: pkts bytes target prot opt in out source destination Sep 4 00:04:36.165442 waagent[1912]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 00:04:36.165442 waagent[1912]: 3 164 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 00:04:36.165442 waagent[1912]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 00:04:36.168331 waagent[1912]: 2025-09-04T00:04:36.168256Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 4 00:04:36.168331 waagent[1912]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 00:04:36.168331 waagent[1912]: pkts bytes target prot opt in out source destination Sep 4 00:04:36.168331 waagent[1912]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 00:04:36.168331 waagent[1912]: pkts bytes target prot opt in out source destination Sep 4 00:04:36.168331 waagent[1912]: Chain OUTPUT (policy ACCEPT 3 packets, 364 bytes) Sep 4 00:04:36.168331 waagent[1912]: pkts bytes target prot opt in out source destination Sep 4 00:04:36.168331 waagent[1912]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 00:04:36.168331 waagent[1912]: 5 400 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 00:04:36.168331 waagent[1912]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 00:04:39.394622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 00:04:39.396130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:04:43.846076 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 4 00:04:43.847285 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:50266.service - OpenSSH per-connection server daemon (10.200.16.10:50266). Sep 4 00:04:45.050774 sshd[2065]: Accepted publickey for core from 10.200.16.10 port 50266 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:04:45.051977 sshd-session[2065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:45.055784 systemd-logind[1706]: New session 3 of user core. Sep 4 00:04:45.062658 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 00:04:45.610363 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:50276.service - OpenSSH per-connection server daemon (10.200.16.10:50276). Sep 4 00:04:46.250755 sshd[2070]: Accepted publickey for core from 10.200.16.10 port 50276 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:04:46.251896 sshd-session[2070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:46.256245 systemd-logind[1706]: New session 4 of user core. Sep 4 00:04:46.262659 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 00:04:46.698658 sshd[2072]: Connection closed by 10.200.16.10 port 50276 Sep 4 00:04:46.699188 sshd-session[2070]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:46.702396 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:50276.service: Deactivated successfully. Sep 4 00:04:46.703934 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 00:04:46.704753 systemd-logind[1706]: Session 4 logged out. Waiting for processes to exit. Sep 4 00:04:46.706163 systemd-logind[1706]: Removed session 4. Sep 4 00:04:46.810131 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:50280.service - OpenSSH per-connection server daemon (10.200.16.10:50280). 
Sep 4 00:04:47.450183 sshd[2078]: Accepted publickey for core from 10.200.16.10 port 50280 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:04:47.838057 sshd-session[2078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:47.843600 systemd-logind[1706]: New session 5 of user core. Sep 4 00:04:47.850686 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 00:04:47.879525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:04:47.886739 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:04:47.920232 kubelet[2086]: E0904 00:04:47.920204 2086 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:04:47.923001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:04:47.923129 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:04:47.923373 systemd[1]: kubelet.service: Consumed 135ms CPU time, 108.5M memory peak. Sep 4 00:04:48.201395 sshd[2080]: Connection closed by 10.200.16.10 port 50280 Sep 4 00:04:48.201976 sshd-session[2078]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:48.204569 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:50280.service: Deactivated successfully. Sep 4 00:04:48.206105 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 00:04:48.208045 systemd-logind[1706]: Session 5 logged out. Waiting for processes to exit. Sep 4 00:04:48.209042 systemd-logind[1706]: Removed session 5. 
Sep 4 00:04:48.318121 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:50286.service - OpenSSH per-connection server daemon (10.200.16.10:50286). Sep 4 00:04:48.961734 sshd[2099]: Accepted publickey for core from 10.200.16.10 port 50286 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:04:48.962870 sshd-session[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:48.967271 systemd-logind[1706]: New session 6 of user core. Sep 4 00:04:48.973650 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 00:04:49.411392 sshd[2101]: Connection closed by 10.200.16.10 port 50286 Sep 4 00:04:49.411946 sshd-session[2099]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:49.414512 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:50286.service: Deactivated successfully. Sep 4 00:04:49.416132 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 00:04:49.418030 systemd-logind[1706]: Session 6 logged out. Waiting for processes to exit. Sep 4 00:04:49.418965 systemd-logind[1706]: Removed session 6. Sep 4 00:04:49.527374 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:50288.service - OpenSSH per-connection server daemon (10.200.16.10:50288). Sep 4 00:04:50.166664 sshd[2107]: Accepted publickey for core from 10.200.16.10 port 50288 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:04:50.167815 sshd-session[2107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:50.172382 systemd-logind[1706]: New session 7 of user core. Sep 4 00:04:50.178686 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 4 00:04:50.590759 sudo[2110]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 00:04:50.590987 sudo[2110]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:04:50.604597 sudo[2110]: pam_unix(sudo:session): session closed for user root Sep 4 00:04:50.707925 sshd[2109]: Connection closed by 10.200.16.10 port 50288 Sep 4 00:04:50.708576 sshd-session[2107]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:50.711325 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:50288.service: Deactivated successfully. Sep 4 00:04:50.712876 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 00:04:50.714728 systemd-logind[1706]: Session 7 logged out. Waiting for processes to exit. Sep 4 00:04:50.715726 systemd-logind[1706]: Removed session 7. Sep 4 00:04:50.824350 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:54246.service - OpenSSH per-connection server daemon (10.200.16.10:54246). Sep 4 00:04:51.072238 chronyd[1740]: Selected source PHC0 Sep 4 00:04:51.463747 sshd[2116]: Accepted publickey for core from 10.200.16.10 port 54246 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:04:51.464975 sshd-session[2116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:51.468763 systemd-logind[1706]: New session 8 of user core. Sep 4 00:04:51.476672 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 00:04:51.810795 sudo[2120]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 00:04:51.811027 sudo[2120]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:04:51.817022 sudo[2120]: pam_unix(sudo:session): session closed for user root Sep 4 00:04:51.821223 sudo[2119]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 00:04:51.821445 sudo[2119]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:04:51.829373 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 00:04:51.862031 augenrules[2142]: No rules Sep 4 00:04:51.863110 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 00:04:51.863311 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 00:04:51.864019 sudo[2119]: pam_unix(sudo:session): session closed for user root Sep 4 00:04:51.967628 sshd[2118]: Connection closed by 10.200.16.10 port 54246 Sep 4 00:04:51.968059 sshd-session[2116]: pam_unix(sshd:session): session closed for user core Sep 4 00:04:51.970559 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:54246.service: Deactivated successfully. Sep 4 00:04:51.971985 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 00:04:51.973158 systemd-logind[1706]: Session 8 logged out. Waiting for processes to exit. Sep 4 00:04:51.974491 systemd-logind[1706]: Removed session 8. Sep 4 00:04:52.084161 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:54254.service - OpenSSH per-connection server daemon (10.200.16.10:54254). Sep 4 00:04:52.727580 sshd[2151]: Accepted publickey for core from 10.200.16.10 port 54254 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:04:52.728671 sshd-session[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:04:52.733293 systemd-logind[1706]: New session 9 of user core. 
Sep 4 00:04:52.738666 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 00:04:53.075003 sudo[2154]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 00:04:53.075227 sudo[2154]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:04:55.092986 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 00:04:55.101814 (dockerd)[2171]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 00:04:55.886130 dockerd[2171]: time="2025-09-04T00:04:55.886067189Z" level=info msg="Starting up" Sep 4 00:04:55.887118 dockerd[2171]: time="2025-09-04T00:04:55.887076547Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 4 00:04:55.920392 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport788723043-merged.mount: Deactivated successfully. Sep 4 00:04:56.534252 dockerd[2171]: time="2025-09-04T00:04:56.534205769Z" level=info msg="Loading containers: start." Sep 4 00:04:56.545531 kernel: Initializing XFRM netlink socket Sep 4 00:04:56.786808 systemd-networkd[1356]: docker0: Link UP Sep 4 00:04:56.798546 dockerd[2171]: time="2025-09-04T00:04:56.798516234Z" level=info msg="Loading containers: done." 
Sep 4 00:04:57.274874 dockerd[2171]: time="2025-09-04T00:04:57.274792274Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 00:04:57.275296 dockerd[2171]: time="2025-09-04T00:04:57.274932145Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 4 00:04:57.275296 dockerd[2171]: time="2025-09-04T00:04:57.275051431Z" level=info msg="Initializing buildkit" Sep 4 00:04:57.438523 dockerd[2171]: time="2025-09-04T00:04:57.438461906Z" level=info msg="Completed buildkit initialization" Sep 4 00:04:57.442045 dockerd[2171]: time="2025-09-04T00:04:57.442014041Z" level=info msg="Daemon has completed initialization" Sep 4 00:04:57.442700 dockerd[2171]: time="2025-09-04T00:04:57.442057168Z" level=info msg="API listen on /run/docker.sock" Sep 4 00:04:57.442289 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 00:04:58.144664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 00:04:58.146190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:04:58.505088 containerd[1726]: time="2025-09-04T00:04:58.504955076Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 00:05:01.124610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 00:05:01.136732 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:05:01.173515 kubelet[2380]: E0904 00:05:01.173052 2380 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:05:01.175327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:05:01.176674 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:05:01.176958 systemd[1]: kubelet.service: Consumed 128ms CPU time, 111M memory peak. Sep 4 00:05:02.411125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969325946.mount: Deactivated successfully. Sep 4 00:05:08.596408 containerd[1726]: time="2025-09-04T00:05:08.596356637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:08.598522 containerd[1726]: time="2025-09-04T00:05:08.598384583Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800695" Sep 4 00:05:08.601017 containerd[1726]: time="2025-09-04T00:05:08.600977646Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:08.604121 containerd[1726]: time="2025-09-04T00:05:08.604082060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:08.605075 containerd[1726]: time="2025-09-04T00:05:08.604683990Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 10.099692379s" Sep 4 00:05:08.605075 containerd[1726]: time="2025-09-04T00:05:08.604719013Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 4 00:05:08.605319 containerd[1726]: time="2025-09-04T00:05:08.605305372Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 00:05:08.733524 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 4 00:05:11.139118 containerd[1726]: time="2025-09-04T00:05:11.139067512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:11.141292 containerd[1726]: time="2025-09-04T00:05:11.141265852Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784136" Sep 4 00:05:11.143670 containerd[1726]: time="2025-09-04T00:05:11.143649660Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:11.147382 containerd[1726]: time="2025-09-04T00:05:11.147356636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:11.148177 containerd[1726]: time="2025-09-04T00:05:11.148022329Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id 
\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 2.542652355s" Sep 4 00:05:11.148177 containerd[1726]: time="2025-09-04T00:05:11.148055728Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 4 00:05:11.148602 containerd[1726]: time="2025-09-04T00:05:11.148572459Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 00:05:11.394723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 00:05:11.396425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:12.661570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:12.669577 update_engine[1708]: I20250904 00:05:12.669538 1708 update_attempter.cc:509] Updating boot flags... Sep 4 00:05:12.671742 (kubelet)[2452]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:05:12.775573 kubelet[2452]: E0904 00:05:12.772492 2452 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:05:12.777756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:05:12.777878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:05:12.778120 systemd[1]: kubelet.service: Consumed 134ms CPU time, 108.2M memory peak. 
Sep 4 00:05:13.562806 containerd[1726]: time="2025-09-04T00:05:13.562760018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:13.564893 containerd[1726]: time="2025-09-04T00:05:13.564856405Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175044" Sep 4 00:05:13.567260 containerd[1726]: time="2025-09-04T00:05:13.567222394Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:13.570758 containerd[1726]: time="2025-09-04T00:05:13.570714984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:13.571620 containerd[1726]: time="2025-09-04T00:05:13.571320291Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 2.422717855s" Sep 4 00:05:13.571620 containerd[1726]: time="2025-09-04T00:05:13.571351874Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 4 00:05:13.571851 containerd[1726]: time="2025-09-04T00:05:13.571832613Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 00:05:14.691149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218508056.mount: Deactivated successfully. 
Sep 4 00:05:15.051362 containerd[1726]: time="2025-09-04T00:05:15.051245396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:15.053446 containerd[1726]: time="2025-09-04T00:05:15.053414818Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897178" Sep 4 00:05:15.057604 containerd[1726]: time="2025-09-04T00:05:15.057560051Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:15.061193 containerd[1726]: time="2025-09-04T00:05:15.060310745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:15.061193 containerd[1726]: time="2025-09-04T00:05:15.061018126Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.489158539s" Sep 4 00:05:15.061193 containerd[1726]: time="2025-09-04T00:05:15.061039327Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 4 00:05:15.061555 containerd[1726]: time="2025-09-04T00:05:15.061527467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 00:05:15.506040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974339247.mount: Deactivated successfully. 
Sep 4 00:05:16.330869 containerd[1726]: time="2025-09-04T00:05:16.330823811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:16.333013 containerd[1726]: time="2025-09-04T00:05:16.332976932Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 4 00:05:16.335511 containerd[1726]: time="2025-09-04T00:05:16.335461486Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:16.339207 containerd[1726]: time="2025-09-04T00:05:16.339166719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:16.339979 containerd[1726]: time="2025-09-04T00:05:16.339829025Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.278273425s" Sep 4 00:05:16.339979 containerd[1726]: time="2025-09-04T00:05:16.339861119Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 4 00:05:16.340399 containerd[1726]: time="2025-09-04T00:05:16.340368214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 00:05:16.805743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279133157.mount: Deactivated successfully. 
Sep 4 00:05:16.821326 containerd[1726]: time="2025-09-04T00:05:16.820755648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:05:16.822806 containerd[1726]: time="2025-09-04T00:05:16.822777710Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 4 00:05:16.825491 containerd[1726]: time="2025-09-04T00:05:16.825441871Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:05:16.828642 containerd[1726]: time="2025-09-04T00:05:16.828600082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:05:16.829069 containerd[1726]: time="2025-09-04T00:05:16.829044088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 488.580693ms" Sep 4 00:05:16.829143 containerd[1726]: time="2025-09-04T00:05:16.829131951Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 00:05:16.829680 containerd[1726]: time="2025-09-04T00:05:16.829661887Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 00:05:17.375345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924877550.mount: Deactivated 
successfully. Sep 4 00:05:19.007193 containerd[1726]: time="2025-09-04T00:05:19.007144503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:19.009463 containerd[1726]: time="2025-09-04T00:05:19.009428376Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Sep 4 00:05:19.013520 containerd[1726]: time="2025-09-04T00:05:19.011696931Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:19.018019 containerd[1726]: time="2025-09-04T00:05:19.017983019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:19.018665 containerd[1726]: time="2025-09-04T00:05:19.018642831Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.188957088s" Sep 4 00:05:19.018742 containerd[1726]: time="2025-09-04T00:05:19.018731440Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 4 00:05:21.366598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:21.366755 systemd[1]: kubelet.service: Consumed 134ms CPU time, 108.2M memory peak. Sep 4 00:05:21.368839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:21.389610 systemd[1]: Reload requested from client PID 2642 ('systemctl') (unit session-9.scope)... 
Sep 4 00:05:21.389621 systemd[1]: Reloading... Sep 4 00:05:21.468524 zram_generator::config[2684]: No configuration found. Sep 4 00:05:21.577421 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:05:21.684082 systemd[1]: Reloading finished in 294 ms. Sep 4 00:05:21.776562 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 00:05:21.776643 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 00:05:21.776867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:21.776945 systemd[1]: kubelet.service: Consumed 79ms CPU time, 83.2M memory peak. Sep 4 00:05:21.778948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:22.267526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:22.273750 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:05:22.310994 kubelet[2755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:05:22.310994 kubelet[2755]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 00:05:22.310994 kubelet[2755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 00:05:22.313083 kubelet[2755]: I0904 00:05:22.313042 2755 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 00:05:22.609045 kubelet[2755]: I0904 00:05:22.607355 2755 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 00:05:22.609045 kubelet[2755]: I0904 00:05:22.607384 2755 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 00:05:22.609045 kubelet[2755]: I0904 00:05:22.607834 2755 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 00:05:22.637421 kubelet[2755]: E0904 00:05:22.637384 2755 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:05:22.637558 kubelet[2755]: I0904 00:05:22.637402 2755 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:05:22.645782 kubelet[2755]: I0904 00:05:22.645754 2755 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 00:05:22.648280 kubelet[2755]: I0904 00:05:22.648253 2755 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 00:05:22.648462 kubelet[2755]: I0904 00:05:22.648434 2755 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 00:05:22.648641 kubelet[2755]: I0904 00:05:22.648462 2755 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-8c113b52d8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 00:05:22.649272 kubelet[2755]: I0904 00:05:22.649259 2755 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 4 00:05:22.649302 kubelet[2755]: I0904 00:05:22.649275 2755 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 00:05:22.649383 kubelet[2755]: I0904 00:05:22.649373 2755 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:05:22.652606 kubelet[2755]: I0904 00:05:22.652589 2755 kubelet.go:446] "Attempting to sync node with API server" Sep 4 00:05:22.652668 kubelet[2755]: I0904 00:05:22.652611 2755 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 00:05:22.652668 kubelet[2755]: I0904 00:05:22.652633 2755 kubelet.go:352] "Adding apiserver pod source" Sep 4 00:05:22.652668 kubelet[2755]: I0904 00:05:22.652643 2755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 00:05:22.659552 kubelet[2755]: W0904 00:05:22.659493 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Sep 4 00:05:22.659643 kubelet[2755]: E0904 00:05:22.659571 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:05:22.659674 kubelet[2755]: W0904 00:05:22.659640 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-8c113b52d8&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Sep 4 00:05:22.659696 kubelet[2755]: E0904 00:05:22.659669 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-8c113b52d8&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:05:22.660814 kubelet[2755]: I0904 00:05:22.659917 2755 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 00:05:22.660814 kubelet[2755]: I0904 00:05:22.660300 2755 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 00:05:22.660814 kubelet[2755]: W0904 00:05:22.660346 2755 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 00:05:22.663833 kubelet[2755]: I0904 00:05:22.663814 2755 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 00:05:22.663900 kubelet[2755]: I0904 00:05:22.663851 2755 server.go:1287] "Started kubelet" Sep 4 00:05:22.667664 kubelet[2755]: I0904 00:05:22.667608 2755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 00:05:22.667796 kubelet[2755]: I0904 00:05:22.667751 2755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 00:05:22.668112 kubelet[2755]: I0904 00:05:22.668100 2755 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 00:05:22.668950 kubelet[2755]: I0904 00:05:22.668924 2755 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 00:05:22.670744 kubelet[2755]: I0904 00:05:22.670729 2755 server.go:479] "Adding debug handlers to kubelet server" Sep 4 00:05:22.677527 kubelet[2755]: I0904 00:05:22.677496 2755 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 00:05:22.677796 kubelet[2755]: E0904 00:05:22.677771 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4372.1.0-n-8c113b52d8\" not found" Sep 4 00:05:22.680721 kubelet[2755]: I0904 00:05:22.680707 2755 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 00:05:22.680830 kubelet[2755]: I0904 00:05:22.677598 2755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 00:05:22.681055 kubelet[2755]: I0904 00:05:22.681037 2755 reconciler.go:26] "Reconciler: start to sync state" Sep 4 00:05:22.682132 kubelet[2755]: E0904 00:05:22.680524 2755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-n-8c113b52d8.1861eb8d82845d53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-n-8c113b52d8,UID:ci-4372.1.0-n-8c113b52d8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-n-8c113b52d8,},FirstTimestamp:2025-09-04 00:05:22.663832915 +0000 UTC m=+0.386927906,LastTimestamp:2025-09-04 00:05:22.663832915 +0000 UTC m=+0.386927906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-n-8c113b52d8,}" Sep 4 00:05:22.682439 kubelet[2755]: E0904 00:05:22.682418 2755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-8c113b52d8?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Sep 4 00:05:22.682730 kubelet[2755]: I0904 00:05:22.682718 2755 factory.go:221] Registration of the systemd container factory successfully Sep 4 00:05:22.682848 kubelet[2755]: I0904 
00:05:22.682835 2755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 00:05:22.684227 kubelet[2755]: W0904 00:05:22.684195 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Sep 4 00:05:22.684333 kubelet[2755]: E0904 00:05:22.684318 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:05:22.684484 kubelet[2755]: I0904 00:05:22.684475 2755 factory.go:221] Registration of the containerd container factory successfully Sep 4 00:05:22.692379 kubelet[2755]: I0904 00:05:22.692357 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 00:05:22.693317 kubelet[2755]: I0904 00:05:22.693302 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 00:05:22.693533 kubelet[2755]: I0904 00:05:22.693382 2755 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 00:05:22.693533 kubelet[2755]: I0904 00:05:22.693399 2755 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 00:05:22.693533 kubelet[2755]: I0904 00:05:22.693405 2755 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 00:05:22.693533 kubelet[2755]: E0904 00:05:22.693434 2755 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:05:22.698390 kubelet[2755]: W0904 00:05:22.698343 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Sep 4 00:05:22.698470 kubelet[2755]: E0904 00:05:22.698395 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:05:22.701569 kubelet[2755]: E0904 00:05:22.701227 2755 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 00:05:22.705749 kubelet[2755]: I0904 00:05:22.705730 2755 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 00:05:22.705749 kubelet[2755]: I0904 00:05:22.705742 2755 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 00:05:22.705850 kubelet[2755]: I0904 00:05:22.705757 2755 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:05:22.710397 kubelet[2755]: I0904 00:05:22.710377 2755 policy_none.go:49] "None policy: Start" Sep 4 00:05:22.710397 kubelet[2755]: I0904 00:05:22.710393 2755 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 00:05:22.710478 kubelet[2755]: I0904 00:05:22.710404 2755 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:05:22.718174 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 00:05:22.728566 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 00:05:22.731702 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 00:05:22.746393 kubelet[2755]: I0904 00:05:22.746024 2755 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:05:22.746393 kubelet[2755]: I0904 00:05:22.746170 2755 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:05:22.746393 kubelet[2755]: I0904 00:05:22.746180 2755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:05:22.746393 kubelet[2755]: I0904 00:05:22.746332 2755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:05:22.747536 kubelet[2755]: E0904 00:05:22.747434 2755 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 00:05:22.747536 kubelet[2755]: E0904 00:05:22.747472 2755 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-n-8c113b52d8\" not found" Sep 4 00:05:22.801327 systemd[1]: Created slice kubepods-burstable-pod43750f44879b29aa7fdfcaf1d32e7b20.slice - libcontainer container kubepods-burstable-pod43750f44879b29aa7fdfcaf1d32e7b20.slice. Sep 4 00:05:22.825649 kubelet[2755]: E0904 00:05:22.825611 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-8c113b52d8\" not found" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.828111 systemd[1]: Created slice kubepods-burstable-podf7b0e0c1caf934b86c6c809f582c3601.slice - libcontainer container kubepods-burstable-podf7b0e0c1caf934b86c6c809f582c3601.slice. Sep 4 00:05:22.832384 kubelet[2755]: E0904 00:05:22.832366 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-8c113b52d8\" not found" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.834845 systemd[1]: Created slice kubepods-burstable-podfd4ec6bbc5e70af4019d395334988dc8.slice - libcontainer container kubepods-burstable-podfd4ec6bbc5e70af4019d395334988dc8.slice. 
Sep 4 00:05:22.836757 kubelet[2755]: E0904 00:05:22.836737 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-8c113b52d8\" not found" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.847790 kubelet[2755]: I0904 00:05:22.847774 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.848055 kubelet[2755]: E0904 00:05:22.848039 2755 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.882554 kubelet[2755]: I0904 00:05:22.882327 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd4ec6bbc5e70af4019d395334988dc8-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-8c113b52d8\" (UID: \"fd4ec6bbc5e70af4019d395334988dc8\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.882554 kubelet[2755]: I0904 00:05:22.882366 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd4ec6bbc5e70af4019d395334988dc8-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-8c113b52d8\" (UID: \"fd4ec6bbc5e70af4019d395334988dc8\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.882554 kubelet[2755]: I0904 00:05:22.882388 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd4ec6bbc5e70af4019d395334988dc8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-8c113b52d8\" (UID: \"fd4ec6bbc5e70af4019d395334988dc8\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.882554 kubelet[2755]: I0904 00:05:22.882407 2755 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7b0e0c1caf934b86c6c809f582c3601-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-8c113b52d8\" (UID: \"f7b0e0c1caf934b86c6c809f582c3601\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.882554 kubelet[2755]: I0904 00:05:22.882422 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43750f44879b29aa7fdfcaf1d32e7b20-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-8c113b52d8\" (UID: \"43750f44879b29aa7fdfcaf1d32e7b20\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.882835 kubelet[2755]: I0904 00:05:22.882438 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/43750f44879b29aa7fdfcaf1d32e7b20-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-8c113b52d8\" (UID: \"43750f44879b29aa7fdfcaf1d32e7b20\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.882835 kubelet[2755]: I0904 00:05:22.882454 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43750f44879b29aa7fdfcaf1d32e7b20-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-8c113b52d8\" (UID: \"43750f44879b29aa7fdfcaf1d32e7b20\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.882835 kubelet[2755]: I0904 00:05:22.882475 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43750f44879b29aa7fdfcaf1d32e7b20-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-8c113b52d8\" (UID: 
\"43750f44879b29aa7fdfcaf1d32e7b20\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.882835 kubelet[2755]: I0904 00:05:22.882493 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43750f44879b29aa7fdfcaf1d32e7b20-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-8c113b52d8\" (UID: \"43750f44879b29aa7fdfcaf1d32e7b20\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:22.883120 kubelet[2755]: E0904 00:05:22.883096 2755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-8c113b52d8?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Sep 4 00:05:23.050157 kubelet[2755]: I0904 00:05:23.050130 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:23.050487 kubelet[2755]: E0904 00:05:23.050460 2755 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:23.127755 containerd[1726]: time="2025-09-04T00:05:23.127719542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-8c113b52d8,Uid:43750f44879b29aa7fdfcaf1d32e7b20,Namespace:kube-system,Attempt:0,}" Sep 4 00:05:23.133357 containerd[1726]: time="2025-09-04T00:05:23.133141332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-8c113b52d8,Uid:f7b0e0c1caf934b86c6c809f582c3601,Namespace:kube-system,Attempt:0,}" Sep 4 00:05:23.137465 containerd[1726]: time="2025-09-04T00:05:23.137439917Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-8c113b52d8,Uid:fd4ec6bbc5e70af4019d395334988dc8,Namespace:kube-system,Attempt:0,}" Sep 4 00:05:23.181164 containerd[1726]: time="2025-09-04T00:05:23.181135764Z" level=info msg="connecting to shim 6809b6c1ffb99f0f8ed23253cbae8a94242885d4828196cf91bbe0ed82b58bab" address="unix:///run/containerd/s/b59bece94b949ffea93532206ed6d19b5851434a283694fa3a0ff830fe10533e" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:05:23.193275 containerd[1726]: time="2025-09-04T00:05:23.193224400Z" level=info msg="connecting to shim 5b94f5142ec977f4996b2f29f2c481ad5a26684d66c5f4fe3e36bcbdcba04ee2" address="unix:///run/containerd/s/3af7c55cb6daa194a2f2b46755be73bcfffe54628219089aa1ed12ccfe9aa80c" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:05:23.218658 systemd[1]: Started cri-containerd-6809b6c1ffb99f0f8ed23253cbae8a94242885d4828196cf91bbe0ed82b58bab.scope - libcontainer container 6809b6c1ffb99f0f8ed23253cbae8a94242885d4828196cf91bbe0ed82b58bab. Sep 4 00:05:23.231527 containerd[1726]: time="2025-09-04T00:05:23.230618993Z" level=info msg="connecting to shim 6d57163c72ece85516485fc8d8454f6a42cc7989a749bf2f2ff89bf58719ecad" address="unix:///run/containerd/s/53b0412f3931d0acbfb9557c030c56f7949e248e7a373c716dc9a95ef3131931" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:05:23.233681 systemd[1]: Started cri-containerd-5b94f5142ec977f4996b2f29f2c481ad5a26684d66c5f4fe3e36bcbdcba04ee2.scope - libcontainer container 5b94f5142ec977f4996b2f29f2c481ad5a26684d66c5f4fe3e36bcbdcba04ee2. Sep 4 00:05:23.264627 systemd[1]: Started cri-containerd-6d57163c72ece85516485fc8d8454f6a42cc7989a749bf2f2ff89bf58719ecad.scope - libcontainer container 6d57163c72ece85516485fc8d8454f6a42cc7989a749bf2f2ff89bf58719ecad. 
Sep 4 00:05:23.288029 kubelet[2755]: E0904 00:05:23.287998 2755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-8c113b52d8?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Sep 4 00:05:23.311712 containerd[1726]: time="2025-09-04T00:05:23.311679537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-8c113b52d8,Uid:f7b0e0c1caf934b86c6c809f582c3601,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b94f5142ec977f4996b2f29f2c481ad5a26684d66c5f4fe3e36bcbdcba04ee2\"" Sep 4 00:05:23.316394 containerd[1726]: time="2025-09-04T00:05:23.316083950Z" level=info msg="CreateContainer within sandbox \"5b94f5142ec977f4996b2f29f2c481ad5a26684d66c5f4fe3e36bcbdcba04ee2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 00:05:23.316394 containerd[1726]: time="2025-09-04T00:05:23.316324049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-8c113b52d8,Uid:43750f44879b29aa7fdfcaf1d32e7b20,Namespace:kube-system,Attempt:0,} returns sandbox id \"6809b6c1ffb99f0f8ed23253cbae8a94242885d4828196cf91bbe0ed82b58bab\"" Sep 4 00:05:23.319983 containerd[1726]: time="2025-09-04T00:05:23.319960207Z" level=info msg="CreateContainer within sandbox \"6809b6c1ffb99f0f8ed23253cbae8a94242885d4828196cf91bbe0ed82b58bab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 00:05:23.336889 containerd[1726]: time="2025-09-04T00:05:23.336864217Z" level=info msg="Container 12b6486f45f7af56bfca93a3c4cb14f0a92b508ae63ee70663adfb55f9a81d71: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:23.341213 containerd[1726]: time="2025-09-04T00:05:23.341190059Z" level=info msg="Container 853e1092c9a147a6ad36d22546c1000a3efbe53eaf6981dc1cbbd5edd0e88276: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:23.343436 
containerd[1726]: time="2025-09-04T00:05:23.343412615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-8c113b52d8,Uid:fd4ec6bbc5e70af4019d395334988dc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d57163c72ece85516485fc8d8454f6a42cc7989a749bf2f2ff89bf58719ecad\"" Sep 4 00:05:23.345181 containerd[1726]: time="2025-09-04T00:05:23.345156382Z" level=info msg="CreateContainer within sandbox \"6d57163c72ece85516485fc8d8454f6a42cc7989a749bf2f2ff89bf58719ecad\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 00:05:23.352131 containerd[1726]: time="2025-09-04T00:05:23.352104921Z" level=info msg="CreateContainer within sandbox \"5b94f5142ec977f4996b2f29f2c481ad5a26684d66c5f4fe3e36bcbdcba04ee2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"12b6486f45f7af56bfca93a3c4cb14f0a92b508ae63ee70663adfb55f9a81d71\"" Sep 4 00:05:23.352559 containerd[1726]: time="2025-09-04T00:05:23.352533709Z" level=info msg="StartContainer for \"12b6486f45f7af56bfca93a3c4cb14f0a92b508ae63ee70663adfb55f9a81d71\"" Sep 4 00:05:23.353242 containerd[1726]: time="2025-09-04T00:05:23.353213003Z" level=info msg="connecting to shim 12b6486f45f7af56bfca93a3c4cb14f0a92b508ae63ee70663adfb55f9a81d71" address="unix:///run/containerd/s/3af7c55cb6daa194a2f2b46755be73bcfffe54628219089aa1ed12ccfe9aa80c" protocol=ttrpc version=3 Sep 4 00:05:23.361207 containerd[1726]: time="2025-09-04T00:05:23.361108864Z" level=info msg="CreateContainer within sandbox \"6809b6c1ffb99f0f8ed23253cbae8a94242885d4828196cf91bbe0ed82b58bab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"853e1092c9a147a6ad36d22546c1000a3efbe53eaf6981dc1cbbd5edd0e88276\"" Sep 4 00:05:23.361663 containerd[1726]: time="2025-09-04T00:05:23.361605303Z" level=info msg="StartContainer for \"853e1092c9a147a6ad36d22546c1000a3efbe53eaf6981dc1cbbd5edd0e88276\"" Sep 4 00:05:23.364156 containerd[1726]: 
time="2025-09-04T00:05:23.364119890Z" level=info msg="connecting to shim 853e1092c9a147a6ad36d22546c1000a3efbe53eaf6981dc1cbbd5edd0e88276" address="unix:///run/containerd/s/b59bece94b949ffea93532206ed6d19b5851434a283694fa3a0ff830fe10533e" protocol=ttrpc version=3 Sep 4 00:05:23.371653 systemd[1]: Started cri-containerd-12b6486f45f7af56bfca93a3c4cb14f0a92b508ae63ee70663adfb55f9a81d71.scope - libcontainer container 12b6486f45f7af56bfca93a3c4cb14f0a92b508ae63ee70663adfb55f9a81d71. Sep 4 00:05:23.378907 containerd[1726]: time="2025-09-04T00:05:23.378881785Z" level=info msg="Container 0591bfbf0b904ec3fe8dcc64f1626afca6563d92ac89e45ca6173e1a9bec1a30: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:23.391909 containerd[1726]: time="2025-09-04T00:05:23.391838787Z" level=info msg="CreateContainer within sandbox \"6d57163c72ece85516485fc8d8454f6a42cc7989a749bf2f2ff89bf58719ecad\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0591bfbf0b904ec3fe8dcc64f1626afca6563d92ac89e45ca6173e1a9bec1a30\"" Sep 4 00:05:23.393337 containerd[1726]: time="2025-09-04T00:05:23.392653814Z" level=info msg="StartContainer for \"0591bfbf0b904ec3fe8dcc64f1626afca6563d92ac89e45ca6173e1a9bec1a30\"" Sep 4 00:05:23.393407 containerd[1726]: time="2025-09-04T00:05:23.393322741Z" level=info msg="connecting to shim 0591bfbf0b904ec3fe8dcc64f1626afca6563d92ac89e45ca6173e1a9bec1a30" address="unix:///run/containerd/s/53b0412f3931d0acbfb9557c030c56f7949e248e7a373c716dc9a95ef3131931" protocol=ttrpc version=3 Sep 4 00:05:23.396132 systemd[1]: Started cri-containerd-853e1092c9a147a6ad36d22546c1000a3efbe53eaf6981dc1cbbd5edd0e88276.scope - libcontainer container 853e1092c9a147a6ad36d22546c1000a3efbe53eaf6981dc1cbbd5edd0e88276. Sep 4 00:05:23.416645 systemd[1]: Started cri-containerd-0591bfbf0b904ec3fe8dcc64f1626afca6563d92ac89e45ca6173e1a9bec1a30.scope - libcontainer container 0591bfbf0b904ec3fe8dcc64f1626afca6563d92ac89e45ca6173e1a9bec1a30. 
Sep 4 00:05:23.454886 kubelet[2755]: I0904 00:05:23.454450 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:23.454886 kubelet[2755]: E0904 00:05:23.454851 2755 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:23.473541 containerd[1726]: time="2025-09-04T00:05:23.472947051Z" level=info msg="StartContainer for \"12b6486f45f7af56bfca93a3c4cb14f0a92b508ae63ee70663adfb55f9a81d71\" returns successfully" Sep 4 00:05:23.476683 containerd[1726]: time="2025-09-04T00:05:23.476656935Z" level=info msg="StartContainer for \"853e1092c9a147a6ad36d22546c1000a3efbe53eaf6981dc1cbbd5edd0e88276\" returns successfully" Sep 4 00:05:23.501773 containerd[1726]: time="2025-09-04T00:05:23.501748517Z" level=info msg="StartContainer for \"0591bfbf0b904ec3fe8dcc64f1626afca6563d92ac89e45ca6173e1a9bec1a30\" returns successfully" Sep 4 00:05:23.539586 kubelet[2755]: W0904 00:05:23.539539 2755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Sep 4 00:05:23.539677 kubelet[2755]: E0904 00:05:23.539600 2755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:05:23.708859 kubelet[2755]: E0904 00:05:23.708835 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-8c113b52d8\" not found" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:23.712898 
kubelet[2755]: E0904 00:05:23.712878 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-8c113b52d8\" not found" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:23.718511 kubelet[2755]: E0904 00:05:23.716543 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-8c113b52d8\" not found" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:24.257177 kubelet[2755]: I0904 00:05:24.257042 2755 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:24.716473 kubelet[2755]: E0904 00:05:24.716431 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-8c113b52d8\" not found" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:24.718350 kubelet[2755]: E0904 00:05:24.718120 2755 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-8c113b52d8\" not found" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:25.426553 kubelet[2755]: E0904 00:05:25.426467 2755 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.1.0-n-8c113b52d8\" not found" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:25.483520 kubelet[2755]: I0904 00:05:25.483273 2755 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:25.524699 kubelet[2755]: E0904 00:05:25.524614 2755 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4372.1.0-n-8c113b52d8.1861eb8d82845d53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-n-8c113b52d8,UID:ci-4372.1.0-n-8c113b52d8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-n-8c113b52d8,},FirstTimestamp:2025-09-04 00:05:22.663832915 +0000 UTC m=+0.386927906,LastTimestamp:2025-09-04 00:05:22.663832915 +0000 UTC m=+0.386927906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-n-8c113b52d8,}" Sep 4 00:05:25.579206 kubelet[2755]: I0904 00:05:25.578750 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:25.585251 kubelet[2755]: E0904 00:05:25.585106 2755 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-8c113b52d8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:25.585251 kubelet[2755]: I0904 00:05:25.585138 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:25.586653 kubelet[2755]: E0904 00:05:25.586626 2755 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.1.0-n-8c113b52d8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:25.586653 kubelet[2755]: I0904 00:05:25.586646 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:25.587802 kubelet[2755]: E0904 00:05:25.587774 2755 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-8c113b52d8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:25.656436 kubelet[2755]: I0904 00:05:25.656394 2755 apiserver.go:52] "Watching apiserver" Sep 4 00:05:25.681702 kubelet[2755]: I0904 00:05:25.681237 2755 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 00:05:27.729194 systemd[1]: Reload requested from client PID 3025 ('systemctl') (unit session-9.scope)... Sep 4 00:05:27.729208 systemd[1]: Reloading... Sep 4 00:05:27.796624 zram_generator::config[3070]: No configuration found. Sep 4 00:05:27.889118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:05:28.000497 systemd[1]: Reloading finished in 270 ms. Sep 4 00:05:28.031203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:28.040411 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 00:05:28.040681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:28.040733 systemd[1]: kubelet.service: Consumed 709ms CPU time, 130.4M memory peak. Sep 4 00:05:28.042273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:28.413251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:28.417923 (kubelet)[3138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:05:28.469127 kubelet[3138]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:05:28.469127 kubelet[3138]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 00:05:28.469127 kubelet[3138]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:05:28.469429 kubelet[3138]: I0904 00:05:28.469203 3138 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 00:05:28.475775 kubelet[3138]: I0904 00:05:28.475752 3138 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 00:05:28.475775 kubelet[3138]: I0904 00:05:28.475775 3138 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 00:05:28.476874 kubelet[3138]: I0904 00:05:28.476852 3138 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 00:05:28.481118 kubelet[3138]: I0904 00:05:28.480580 3138 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 00:05:28.483392 kubelet[3138]: I0904 00:05:28.483364 3138 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:05:28.486575 kubelet[3138]: I0904 00:05:28.486555 3138 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 00:05:28.488933 kubelet[3138]: I0904 00:05:28.488913 3138 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 00:05:28.489082 kubelet[3138]: I0904 00:05:28.489059 3138 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 00:05:28.489221 kubelet[3138]: I0904 00:05:28.489082 3138 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-8c113b52d8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 00:05:28.489317 kubelet[3138]: I0904 00:05:28.489228 3138 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 4 00:05:28.489317 kubelet[3138]: I0904 00:05:28.489237 3138 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 00:05:28.489317 kubelet[3138]: I0904 00:05:28.489280 3138 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:05:28.489398 kubelet[3138]: I0904 00:05:28.489386 3138 kubelet.go:446] "Attempting to sync node with API server" Sep 4 00:05:28.489427 kubelet[3138]: I0904 00:05:28.489406 3138 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 00:05:28.489451 kubelet[3138]: I0904 00:05:28.489427 3138 kubelet.go:352] "Adding apiserver pod source" Sep 4 00:05:28.489451 kubelet[3138]: I0904 00:05:28.489437 3138 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 00:05:28.496070 kubelet[3138]: I0904 00:05:28.496034 3138 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 00:05:28.496388 kubelet[3138]: I0904 00:05:28.496374 3138 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 00:05:28.496826 kubelet[3138]: I0904 00:05:28.496751 3138 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 00:05:28.496826 kubelet[3138]: I0904 00:05:28.496805 3138 server.go:1287] "Started kubelet" Sep 4 00:05:28.499695 kubelet[3138]: I0904 00:05:28.499646 3138 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 00:05:28.503339 kubelet[3138]: I0904 00:05:28.503285 3138 server.go:479] "Adding debug handlers to kubelet server" Sep 4 00:05:28.526173 kubelet[3138]: I0904 00:05:28.505184 3138 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 00:05:28.526173 kubelet[3138]: E0904 00:05:28.522615 3138 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 00:05:28.526246 kubelet[3138]: I0904 00:05:28.526216 3138 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 00:05:28.526512 kubelet[3138]: I0904 00:05:28.526427 3138 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 00:05:28.526911 kubelet[3138]: I0904 00:05:28.526902 3138 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 00:05:28.530204 kubelet[3138]: I0904 00:05:28.527484 3138 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 00:05:28.530204 kubelet[3138]: I0904 00:05:28.528644 3138 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 00:05:28.530204 kubelet[3138]: I0904 00:05:28.528834 3138 reconciler.go:26] "Reconciler: start to sync state" Sep 4 00:05:28.530204 kubelet[3138]: I0904 00:05:28.529721 3138 factory.go:221] Registration of the systemd container factory successfully Sep 4 00:05:28.530204 kubelet[3138]: I0904 00:05:28.529809 3138 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 00:05:28.531778 kubelet[3138]: I0904 00:05:28.531763 3138 factory.go:221] Registration of the containerd container factory successfully Sep 4 00:05:28.545333 kubelet[3138]: I0904 00:05:28.545316 3138 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 00:05:28.548029 kubelet[3138]: I0904 00:05:28.548014 3138 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 00:05:28.548120 kubelet[3138]: I0904 00:05:28.548114 3138 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 00:05:28.548473 kubelet[3138]: I0904 00:05:28.548462 3138 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 00:05:28.548571 kubelet[3138]: I0904 00:05:28.548565 3138 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 00:05:28.548787 kubelet[3138]: E0904 00:05:28.548759 3138 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:05:28.566575 kubelet[3138]: I0904 00:05:28.566565 3138 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 00:05:28.566677 kubelet[3138]: I0904 00:05:28.566671 3138 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 00:05:28.566739 kubelet[3138]: I0904 00:05:28.566735 3138 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:05:28.566861 kubelet[3138]: I0904 00:05:28.566855 3138 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 00:05:28.566904 kubelet[3138]: I0904 00:05:28.566894 3138 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 00:05:28.566936 kubelet[3138]: I0904 00:05:28.566933 3138 policy_none.go:49] "None policy: Start" Sep 4 00:05:28.566966 kubelet[3138]: I0904 00:05:28.566961 3138 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 00:05:28.566992 kubelet[3138]: I0904 00:05:28.566989 3138 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:05:28.567085 kubelet[3138]: I0904 00:05:28.567081 3138 state_mem.go:75] "Updated machine memory state" Sep 4 00:05:28.570170 kubelet[3138]: I0904 00:05:28.570158 3138 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:05:28.571056 kubelet[3138]: I0904 00:05:28.570542 
3138 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:05:28.571056 kubelet[3138]: I0904 00:05:28.570554 3138 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:05:28.571056 kubelet[3138]: I0904 00:05:28.570747 3138 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:05:28.576271 kubelet[3138]: E0904 00:05:28.576253 3138 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 00:05:28.649323 kubelet[3138]: I0904 00:05:28.649301 3138 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.649510 kubelet[3138]: I0904 00:05:28.649301 3138 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.649655 kubelet[3138]: I0904 00:05:28.649354 3138 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.656477 kubelet[3138]: W0904 00:05:28.656298 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 00:05:28.659601 kubelet[3138]: W0904 00:05:28.659574 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 00:05:28.659909 kubelet[3138]: W0904 00:05:28.659864 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 00:05:28.682437 kubelet[3138]: I0904 00:05:28.682373 3138 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.691098 kubelet[3138]: I0904 
00:05:28.691080 3138 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.691174 kubelet[3138]: I0904 00:05:28.691133 3138 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.730179 kubelet[3138]: I0904 00:05:28.730155 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43750f44879b29aa7fdfcaf1d32e7b20-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-8c113b52d8\" (UID: \"43750f44879b29aa7fdfcaf1d32e7b20\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.730400 kubelet[3138]: I0904 00:05:28.730191 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43750f44879b29aa7fdfcaf1d32e7b20-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-8c113b52d8\" (UID: \"43750f44879b29aa7fdfcaf1d32e7b20\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.730400 kubelet[3138]: I0904 00:05:28.730222 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7b0e0c1caf934b86c6c809f582c3601-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-8c113b52d8\" (UID: \"f7b0e0c1caf934b86c6c809f582c3601\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.730400 kubelet[3138]: I0904 00:05:28.730241 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd4ec6bbc5e70af4019d395334988dc8-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-8c113b52d8\" (UID: \"fd4ec6bbc5e70af4019d395334988dc8\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.730400 
kubelet[3138]: I0904 00:05:28.730258 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd4ec6bbc5e70af4019d395334988dc8-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-8c113b52d8\" (UID: \"fd4ec6bbc5e70af4019d395334988dc8\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.730400 kubelet[3138]: I0904 00:05:28.730280 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43750f44879b29aa7fdfcaf1d32e7b20-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-8c113b52d8\" (UID: \"43750f44879b29aa7fdfcaf1d32e7b20\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.730518 kubelet[3138]: I0904 00:05:28.730328 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43750f44879b29aa7fdfcaf1d32e7b20-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-8c113b52d8\" (UID: \"43750f44879b29aa7fdfcaf1d32e7b20\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.730518 kubelet[3138]: I0904 00:05:28.730349 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/43750f44879b29aa7fdfcaf1d32e7b20-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-8c113b52d8\" (UID: \"43750f44879b29aa7fdfcaf1d32e7b20\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:28.730518 kubelet[3138]: I0904 00:05:28.730367 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd4ec6bbc5e70af4019d395334988dc8-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4372.1.0-n-8c113b52d8\" (UID: \"fd4ec6bbc5e70af4019d395334988dc8\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:29.490213 kubelet[3138]: I0904 00:05:29.490172 3138 apiserver.go:52] "Watching apiserver" Sep 4 00:05:29.528875 kubelet[3138]: I0904 00:05:29.528852 3138 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 00:05:29.563528 kubelet[3138]: I0904 00:05:29.562287 3138 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:29.573131 kubelet[3138]: W0904 00:05:29.572804 3138 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 00:05:29.573131 kubelet[3138]: E0904 00:05:29.572856 3138 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-8c113b52d8\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" Sep 4 00:05:29.595578 kubelet[3138]: I0904 00:05:29.595521 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-8c113b52d8" podStartSLOduration=1.595493571 podStartE2EDuration="1.595493571s" podCreationTimestamp="2025-09-04 00:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:05:29.585156726 +0000 UTC m=+1.162906393" watchObservedRunningTime="2025-09-04 00:05:29.595493571 +0000 UTC m=+1.173243239" Sep 4 00:05:29.603403 kubelet[3138]: I0904 00:05:29.603325 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-n-8c113b52d8" podStartSLOduration=1.603300487 podStartE2EDuration="1.603300487s" podCreationTimestamp="2025-09-04 00:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:05:29.595928683 +0000 UTC m=+1.173678350" watchObservedRunningTime="2025-09-04 00:05:29.603300487 +0000 UTC m=+1.181050155" Sep 4 00:05:29.603736 kubelet[3138]: I0904 00:05:29.603532 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.1.0-n-8c113b52d8" podStartSLOduration=1.603524081 podStartE2EDuration="1.603524081s" podCreationTimestamp="2025-09-04 00:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:05:29.603229886 +0000 UTC m=+1.180979553" watchObservedRunningTime="2025-09-04 00:05:29.603524081 +0000 UTC m=+1.181273756" Sep 4 00:05:33.516465 kubelet[3138]: I0904 00:05:33.516431 3138 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 00:05:33.517158 kubelet[3138]: I0904 00:05:33.516999 3138 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 00:05:33.517204 containerd[1726]: time="2025-09-04T00:05:33.516788169Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 00:05:34.190627 systemd[1]: Created slice kubepods-besteffort-poddba68aab_b582_4c77_a157_f27b67dfc16d.slice - libcontainer container kubepods-besteffort-poddba68aab_b582_4c77_a157_f27b67dfc16d.slice. 
Sep 4 00:05:34.265291 kubelet[3138]: I0904 00:05:34.265246 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dba68aab-b582-4c77-a157-f27b67dfc16d-kube-proxy\") pod \"kube-proxy-z96n4\" (UID: \"dba68aab-b582-4c77-a157-f27b67dfc16d\") " pod="kube-system/kube-proxy-z96n4" Sep 4 00:05:34.265739 kubelet[3138]: I0904 00:05:34.265573 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dba68aab-b582-4c77-a157-f27b67dfc16d-xtables-lock\") pod \"kube-proxy-z96n4\" (UID: \"dba68aab-b582-4c77-a157-f27b67dfc16d\") " pod="kube-system/kube-proxy-z96n4" Sep 4 00:05:34.265739 kubelet[3138]: I0904 00:05:34.265602 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fflsf\" (UniqueName: \"kubernetes.io/projected/dba68aab-b582-4c77-a157-f27b67dfc16d-kube-api-access-fflsf\") pod \"kube-proxy-z96n4\" (UID: \"dba68aab-b582-4c77-a157-f27b67dfc16d\") " pod="kube-system/kube-proxy-z96n4" Sep 4 00:05:34.265894 kubelet[3138]: I0904 00:05:34.265864 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dba68aab-b582-4c77-a157-f27b67dfc16d-lib-modules\") pod \"kube-proxy-z96n4\" (UID: \"dba68aab-b582-4c77-a157-f27b67dfc16d\") " pod="kube-system/kube-proxy-z96n4" Sep 4 00:05:34.500009 containerd[1726]: time="2025-09-04T00:05:34.499896865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z96n4,Uid:dba68aab-b582-4c77-a157-f27b67dfc16d,Namespace:kube-system,Attempt:0,}" Sep 4 00:05:34.540359 containerd[1726]: time="2025-09-04T00:05:34.540319297Z" level=info msg="connecting to shim fe755bf60d97a3d6c21137f79633ab1b2cc59c35276d5fda7dca7d1aec5b0ba0" 
address="unix:///run/containerd/s/41d2149ebc77a5043fe9de6252295d6298038b9d81662e5d24f76fd6d83d4e03" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:05:34.570672 systemd[1]: Started cri-containerd-fe755bf60d97a3d6c21137f79633ab1b2cc59c35276d5fda7dca7d1aec5b0ba0.scope - libcontainer container fe755bf60d97a3d6c21137f79633ab1b2cc59c35276d5fda7dca7d1aec5b0ba0. Sep 4 00:05:34.585783 systemd[1]: Created slice kubepods-besteffort-pod8dde85e0_f25a_4578_b6a7_6119f1c7a048.slice - libcontainer container kubepods-besteffort-pod8dde85e0_f25a_4578_b6a7_6119f1c7a048.slice. Sep 4 00:05:34.609739 containerd[1726]: time="2025-09-04T00:05:34.609704488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z96n4,Uid:dba68aab-b582-4c77-a157-f27b67dfc16d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe755bf60d97a3d6c21137f79633ab1b2cc59c35276d5fda7dca7d1aec5b0ba0\"" Sep 4 00:05:34.612725 containerd[1726]: time="2025-09-04T00:05:34.612699510Z" level=info msg="CreateContainer within sandbox \"fe755bf60d97a3d6c21137f79633ab1b2cc59c35276d5fda7dca7d1aec5b0ba0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 00:05:34.630693 containerd[1726]: time="2025-09-04T00:05:34.629965259Z" level=info msg="Container 0165e79d5830ba3bace97d83998af127027a547819e1909cf42d5236856b0354: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:34.643547 containerd[1726]: time="2025-09-04T00:05:34.643520956Z" level=info msg="CreateContainer within sandbox \"fe755bf60d97a3d6c21137f79633ab1b2cc59c35276d5fda7dca7d1aec5b0ba0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0165e79d5830ba3bace97d83998af127027a547819e1909cf42d5236856b0354\"" Sep 4 00:05:34.644059 containerd[1726]: time="2025-09-04T00:05:34.644032120Z" level=info msg="StartContainer for \"0165e79d5830ba3bace97d83998af127027a547819e1909cf42d5236856b0354\"" Sep 4 00:05:34.645400 containerd[1726]: time="2025-09-04T00:05:34.645374394Z" level=info msg="connecting to shim 
0165e79d5830ba3bace97d83998af127027a547819e1909cf42d5236856b0354" address="unix:///run/containerd/s/41d2149ebc77a5043fe9de6252295d6298038b9d81662e5d24f76fd6d83d4e03" protocol=ttrpc version=3 Sep 4 00:05:34.660628 systemd[1]: Started cri-containerd-0165e79d5830ba3bace97d83998af127027a547819e1909cf42d5236856b0354.scope - libcontainer container 0165e79d5830ba3bace97d83998af127027a547819e1909cf42d5236856b0354. Sep 4 00:05:34.668358 kubelet[3138]: I0904 00:05:34.668326 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8dde85e0-f25a-4578-b6a7-6119f1c7a048-var-lib-calico\") pod \"tigera-operator-755d956888-pbzrh\" (UID: \"8dde85e0-f25a-4578-b6a7-6119f1c7a048\") " pod="tigera-operator/tigera-operator-755d956888-pbzrh" Sep 4 00:05:34.668600 kubelet[3138]: I0904 00:05:34.668372 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6jlk\" (UniqueName: \"kubernetes.io/projected/8dde85e0-f25a-4578-b6a7-6119f1c7a048-kube-api-access-l6jlk\") pod \"tigera-operator-755d956888-pbzrh\" (UID: \"8dde85e0-f25a-4578-b6a7-6119f1c7a048\") " pod="tigera-operator/tigera-operator-755d956888-pbzrh" Sep 4 00:05:34.688855 containerd[1726]: time="2025-09-04T00:05:34.688822166Z" level=info msg="StartContainer for \"0165e79d5830ba3bace97d83998af127027a547819e1909cf42d5236856b0354\" returns successfully" Sep 4 00:05:34.891118 containerd[1726]: time="2025-09-04T00:05:34.891020565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-pbzrh,Uid:8dde85e0-f25a-4578-b6a7-6119f1c7a048,Namespace:tigera-operator,Attempt:0,}" Sep 4 00:05:34.931886 containerd[1726]: time="2025-09-04T00:05:34.931849730Z" level=info msg="connecting to shim 6df9a4199a79442c1353e58a774eb3a3b9b73706452837dc287c1b97c9db5be0" address="unix:///run/containerd/s/445d60feb550239578fb1b3d66051abdea9a1c7425f019d650f3b1bfd249538e" 
namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:05:34.957641 systemd[1]: Started cri-containerd-6df9a4199a79442c1353e58a774eb3a3b9b73706452837dc287c1b97c9db5be0.scope - libcontainer container 6df9a4199a79442c1353e58a774eb3a3b9b73706452837dc287c1b97c9db5be0. Sep 4 00:05:35.007409 containerd[1726]: time="2025-09-04T00:05:35.007374017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-pbzrh,Uid:8dde85e0-f25a-4578-b6a7-6119f1c7a048,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6df9a4199a79442c1353e58a774eb3a3b9b73706452837dc287c1b97c9db5be0\"" Sep 4 00:05:35.010434 containerd[1726]: time="2025-09-04T00:05:35.010240521Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 4 00:05:35.590752 kubelet[3138]: I0904 00:05:35.590648 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z96n4" podStartSLOduration=1.590631747 podStartE2EDuration="1.590631747s" podCreationTimestamp="2025-09-04 00:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:05:35.59056159 +0000 UTC m=+7.168311252" watchObservedRunningTime="2025-09-04 00:05:35.590631747 +0000 UTC m=+7.168381428" Sep 4 00:05:36.761832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount444423403.mount: Deactivated successfully. 
Sep 4 00:05:37.190378 containerd[1726]: time="2025-09-04T00:05:37.190329966Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:37.192836 containerd[1726]: time="2025-09-04T00:05:37.192799471Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 4 00:05:37.195379 containerd[1726]: time="2025-09-04T00:05:37.195335574Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:37.200187 containerd[1726]: time="2025-09-04T00:05:37.199290655Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:37.200187 containerd[1726]: time="2025-09-04T00:05:37.199941076Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.189672003s" Sep 4 00:05:37.200187 containerd[1726]: time="2025-09-04T00:05:37.199966917Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 4 00:05:37.202609 containerd[1726]: time="2025-09-04T00:05:37.202584639Z" level=info msg="CreateContainer within sandbox \"6df9a4199a79442c1353e58a774eb3a3b9b73706452837dc287c1b97c9db5be0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 00:05:37.215520 containerd[1726]: time="2025-09-04T00:05:37.213862262Z" level=info msg="Container 
dd6ecc66f418bedb20cacb6ad9ff379087d1880072a89dbc80f803e70bc74c16: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:37.229881 containerd[1726]: time="2025-09-04T00:05:37.229836232Z" level=info msg="CreateContainer within sandbox \"6df9a4199a79442c1353e58a774eb3a3b9b73706452837dc287c1b97c9db5be0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dd6ecc66f418bedb20cacb6ad9ff379087d1880072a89dbc80f803e70bc74c16\"" Sep 4 00:05:37.230385 containerd[1726]: time="2025-09-04T00:05:37.230357544Z" level=info msg="StartContainer for \"dd6ecc66f418bedb20cacb6ad9ff379087d1880072a89dbc80f803e70bc74c16\"" Sep 4 00:05:37.232566 containerd[1726]: time="2025-09-04T00:05:37.232537007Z" level=info msg="connecting to shim dd6ecc66f418bedb20cacb6ad9ff379087d1880072a89dbc80f803e70bc74c16" address="unix:///run/containerd/s/445d60feb550239578fb1b3d66051abdea9a1c7425f019d650f3b1bfd249538e" protocol=ttrpc version=3 Sep 4 00:05:37.254631 systemd[1]: Started cri-containerd-dd6ecc66f418bedb20cacb6ad9ff379087d1880072a89dbc80f803e70bc74c16.scope - libcontainer container dd6ecc66f418bedb20cacb6ad9ff379087d1880072a89dbc80f803e70bc74c16. 
Sep 4 00:05:37.278806 containerd[1726]: time="2025-09-04T00:05:37.278747400Z" level=info msg="StartContainer for \"dd6ecc66f418bedb20cacb6ad9ff379087d1880072a89dbc80f803e70bc74c16\" returns successfully" Sep 4 00:05:37.596241 kubelet[3138]: I0904 00:05:37.595639 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-pbzrh" podStartSLOduration=1.4043543330000001 podStartE2EDuration="3.595622826s" podCreationTimestamp="2025-09-04 00:05:34 +0000 UTC" firstStartedPulling="2025-09-04 00:05:35.009568592 +0000 UTC m=+6.587318265" lastFinishedPulling="2025-09-04 00:05:37.200837093 +0000 UTC m=+8.778586758" observedRunningTime="2025-09-04 00:05:37.595421111 +0000 UTC m=+9.173170778" watchObservedRunningTime="2025-09-04 00:05:37.595622826 +0000 UTC m=+9.173372494" Sep 4 00:05:42.936979 sudo[2154]: pam_unix(sudo:session): session closed for user root Sep 4 00:05:43.042432 sshd[2153]: Connection closed by 10.200.16.10 port 54254 Sep 4 00:05:43.042980 sshd-session[2151]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:43.046131 systemd[1]: sshd@6-10.200.8.39:22-10.200.16.10:54254.service: Deactivated successfully. Sep 4 00:05:43.050232 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 00:05:43.050526 systemd[1]: session-9.scope: Consumed 3.440s CPU time, 227.4M memory peak. Sep 4 00:05:43.053939 systemd-logind[1706]: Session 9 logged out. Waiting for processes to exit. Sep 4 00:05:43.056783 systemd-logind[1706]: Removed session 9. Sep 4 00:05:46.457970 systemd[1]: Created slice kubepods-besteffort-pod5e115d5e_3ec5_433e_87c3_e3ee3965adff.slice - libcontainer container kubepods-besteffort-pod5e115d5e_3ec5_433e_87c3_e3ee3965adff.slice. 
Sep 4 00:05:46.539376 kubelet[3138]: I0904 00:05:46.539344 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tchnp\" (UniqueName: \"kubernetes.io/projected/5e115d5e-3ec5-433e-87c3-e3ee3965adff-kube-api-access-tchnp\") pod \"calico-typha-5cdfc6d97d-bk8kb\" (UID: \"5e115d5e-3ec5-433e-87c3-e3ee3965adff\") " pod="calico-system/calico-typha-5cdfc6d97d-bk8kb" Sep 4 00:05:46.539950 kubelet[3138]: I0904 00:05:46.539822 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e115d5e-3ec5-433e-87c3-e3ee3965adff-tigera-ca-bundle\") pod \"calico-typha-5cdfc6d97d-bk8kb\" (UID: \"5e115d5e-3ec5-433e-87c3-e3ee3965adff\") " pod="calico-system/calico-typha-5cdfc6d97d-bk8kb" Sep 4 00:05:46.539950 kubelet[3138]: I0904 00:05:46.539853 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5e115d5e-3ec5-433e-87c3-e3ee3965adff-typha-certs\") pod \"calico-typha-5cdfc6d97d-bk8kb\" (UID: \"5e115d5e-3ec5-433e-87c3-e3ee3965adff\") " pod="calico-system/calico-typha-5cdfc6d97d-bk8kb" Sep 4 00:05:46.762795 containerd[1726]: time="2025-09-04T00:05:46.762689196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cdfc6d97d-bk8kb,Uid:5e115d5e-3ec5-433e-87c3-e3ee3965adff,Namespace:calico-system,Attempt:0,}" Sep 4 00:05:46.769423 systemd[1]: Created slice kubepods-besteffort-pod5c182e98_d94e_4952_aabb_bc6ed064a748.slice - libcontainer container kubepods-besteffort-pod5c182e98_d94e_4952_aabb_bc6ed064a748.slice. 
Sep 4 00:05:46.813522 containerd[1726]: time="2025-09-04T00:05:46.813454766Z" level=info msg="connecting to shim 9174bfe07d578b5125df156f4d19afaef5b38b4d5f7c82d5c1d296baa38a847f" address="unix:///run/containerd/s/a8e2768ec411f88cadc319756dfeb70bef3bbb9b90a12287f66ecc9b0ab0c976" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:05:46.842309 kubelet[3138]: I0904 00:05:46.842281 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5c182e98-d94e-4952-aabb-bc6ed064a748-cni-log-dir\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.842442 kubelet[3138]: I0904 00:05:46.842319 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5c182e98-d94e-4952-aabb-bc6ed064a748-flexvol-driver-host\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.842442 kubelet[3138]: I0904 00:05:46.842339 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2vj4\" (UniqueName: \"kubernetes.io/projected/5c182e98-d94e-4952-aabb-bc6ed064a748-kube-api-access-w2vj4\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.842442 kubelet[3138]: I0904 00:05:46.842358 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5c182e98-d94e-4952-aabb-bc6ed064a748-cni-net-dir\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.842442 kubelet[3138]: I0904 00:05:46.842376 3138 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5c182e98-d94e-4952-aabb-bc6ed064a748-node-certs\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.842442 kubelet[3138]: I0904 00:05:46.842393 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5c182e98-d94e-4952-aabb-bc6ed064a748-var-run-calico\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.843340 kubelet[3138]: I0904 00:05:46.842409 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c182e98-d94e-4952-aabb-bc6ed064a748-xtables-lock\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.843340 kubelet[3138]: I0904 00:05:46.842427 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5c182e98-d94e-4952-aabb-bc6ed064a748-cni-bin-dir\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.843340 kubelet[3138]: I0904 00:05:46.842442 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c182e98-d94e-4952-aabb-bc6ed064a748-lib-modules\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.843340 kubelet[3138]: I0904 00:05:46.842462 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/5c182e98-d94e-4952-aabb-bc6ed064a748-tigera-ca-bundle\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.843340 kubelet[3138]: I0904 00:05:46.842478 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5c182e98-d94e-4952-aabb-bc6ed064a748-var-lib-calico\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.843472 kubelet[3138]: I0904 00:05:46.842496 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5c182e98-d94e-4952-aabb-bc6ed064a748-policysync\") pod \"calico-node-xxcjd\" (UID: \"5c182e98-d94e-4952-aabb-bc6ed064a748\") " pod="calico-system/calico-node-xxcjd" Sep 4 00:05:46.849681 systemd[1]: Started cri-containerd-9174bfe07d578b5125df156f4d19afaef5b38b4d5f7c82d5c1d296baa38a847f.scope - libcontainer container 9174bfe07d578b5125df156f4d19afaef5b38b4d5f7c82d5c1d296baa38a847f. 
Sep 4 00:05:46.904971 containerd[1726]: time="2025-09-04T00:05:46.904934838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cdfc6d97d-bk8kb,Uid:5e115d5e-3ec5-433e-87c3-e3ee3965adff,Namespace:calico-system,Attempt:0,} returns sandbox id \"9174bfe07d578b5125df156f4d19afaef5b38b4d5f7c82d5c1d296baa38a847f\"" Sep 4 00:05:46.906268 containerd[1726]: time="2025-09-04T00:05:46.906242314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 4 00:05:46.945899 kubelet[3138]: E0904 00:05:46.945828 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.945899 kubelet[3138]: W0904 00:05:46.945852 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.946327 kubelet[3138]: E0904 00:05:46.946221 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.946327 kubelet[3138]: E0904 00:05:46.946299 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.946327 kubelet[3138]: W0904 00:05:46.946311 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.946327 kubelet[3138]: E0904 00:05:46.946329 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:46.946771 kubelet[3138]: E0904 00:05:46.946759 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.946805 kubelet[3138]: W0904 00:05:46.946773 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.946805 kubelet[3138]: E0904 00:05:46.946791 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.948348 kubelet[3138]: E0904 00:05:46.948274 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.948348 kubelet[3138]: W0904 00:05:46.948291 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.948348 kubelet[3138]: E0904 00:05:46.948308 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:46.948707 kubelet[3138]: E0904 00:05:46.948488 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.948707 kubelet[3138]: W0904 00:05:46.948495 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.948707 kubelet[3138]: E0904 00:05:46.948592 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.948707 kubelet[3138]: E0904 00:05:46.948675 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.948707 kubelet[3138]: W0904 00:05:46.948681 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.948707 kubelet[3138]: E0904 00:05:46.948696 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:46.949198 kubelet[3138]: E0904 00:05:46.948870 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.949198 kubelet[3138]: W0904 00:05:46.948877 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.949198 kubelet[3138]: E0904 00:05:46.948951 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.949198 kubelet[3138]: E0904 00:05:46.949176 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.949198 kubelet[3138]: W0904 00:05:46.949182 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.949614 kubelet[3138]: E0904 00:05:46.949254 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:46.949614 kubelet[3138]: E0904 00:05:46.949493 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.952672 kubelet[3138]: W0904 00:05:46.952585 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.952897 kubelet[3138]: E0904 00:05:46.952836 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.952897 kubelet[3138]: W0904 00:05:46.952847 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.953054 kubelet[3138]: E0904 00:05:46.953007 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.953054 kubelet[3138]: W0904 00:05:46.953014 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.953216 kubelet[3138]: E0904 00:05:46.953155 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.953216 kubelet[3138]: W0904 00:05:46.953162 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.953216 kubelet[3138]: E0904 00:05:46.953171 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.953444 kubelet[3138]: E0904 00:05:46.953346 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.953444 kubelet[3138]: W0904 00:05:46.953353 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.953444 kubelet[3138]: E0904 00:05:46.953360 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.954108 kubelet[3138]: E0904 00:05:46.954091 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.954361 kubelet[3138]: E0904 00:05:46.954204 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.954361 kubelet[3138]: W0904 00:05:46.954316 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.954361 kubelet[3138]: E0904 00:05:46.954327 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.954542 kubelet[3138]: E0904 00:05:46.954211 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:46.954542 kubelet[3138]: E0904 00:05:46.954222 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.954747 kubelet[3138]: E0904 00:05:46.954727 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.954747 kubelet[3138]: W0904 00:05:46.954736 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.954847 kubelet[3138]: E0904 00:05:46.954810 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.954990 kubelet[3138]: E0904 00:05:46.954973 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.954990 kubelet[3138]: W0904 00:05:46.954982 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.955083 kubelet[3138]: E0904 00:05:46.955050 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:46.955209 kubelet[3138]: E0904 00:05:46.955195 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.955209 kubelet[3138]: W0904 00:05:46.955202 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.955317 kubelet[3138]: E0904 00:05:46.955265 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.955485 kubelet[3138]: E0904 00:05:46.955470 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.955485 kubelet[3138]: W0904 00:05:46.955478 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.955635 kubelet[3138]: E0904 00:05:46.955537 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:46.955765 kubelet[3138]: E0904 00:05:46.955747 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.955765 kubelet[3138]: W0904 00:05:46.955756 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.955885 kubelet[3138]: E0904 00:05:46.955829 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.956701 kubelet[3138]: E0904 00:05:46.956611 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.956701 kubelet[3138]: W0904 00:05:46.956626 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.956701 kubelet[3138]: E0904 00:05:46.956638 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:46.956868 kubelet[3138]: E0904 00:05:46.956862 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.957011 kubelet[3138]: W0904 00:05:46.956897 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.957011 kubelet[3138]: E0904 00:05:46.956907 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:46.957625 kubelet[3138]: E0904 00:05:46.957578 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.957780 kubelet[3138]: W0904 00:05:46.957690 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.957780 kubelet[3138]: E0904 00:05:46.957706 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:46.957914 kubelet[3138]: E0904 00:05:46.957908 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:46.957975 kubelet[3138]: W0904 00:05:46.957946 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:46.957975 kubelet[3138]: E0904 00:05:46.957955 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:47.054192 kubelet[3138]: E0904 00:05:47.053860 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jftxm" podUID="03860e7a-926d-4187-ba43-ef35f1cd2768" Sep 4 00:05:47.073245 containerd[1726]: time="2025-09-04T00:05:47.073200772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xxcjd,Uid:5c182e98-d94e-4952-aabb-bc6ed064a748,Namespace:calico-system,Attempt:0,}" Sep 4 00:05:47.134581 kubelet[3138]: E0904 00:05:47.134542 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:47.134581 kubelet[3138]: W0904 00:05:47.134577 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:47.134909 kubelet[3138]: E0904 00:05:47.134602 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:47.134909 kubelet[3138]: E0904 00:05:47.134819 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:47.134909 kubelet[3138]: W0904 00:05:47.134831 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:47.135097 kubelet[3138]: E0904 00:05:47.135001 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:47.135388 kubelet[3138]: E0904 00:05:47.135221 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:47.135388 kubelet[3138]: W0904 00:05:47.135236 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:47.135388 kubelet[3138]: E0904 00:05:47.135250 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:47.135615 kubelet[3138]: E0904 00:05:47.135608 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:47.135682 kubelet[3138]: W0904 00:05:47.135673 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:47.135765 kubelet[3138]: E0904 00:05:47.135714 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:47.136532 kubelet[3138]: E0904 00:05:47.135970 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:47.136532 kubelet[3138]: W0904 00:05:47.135988 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:47.136532 kubelet[3138]: E0904 00:05:47.135998 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:47.136827 kubelet[3138]: E0904 00:05:47.136810 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:47.136931 kubelet[3138]: W0904 00:05:47.136883 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:47.136931 kubelet[3138]: E0904 00:05:47.136899 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:47.137163 kubelet[3138]: E0904 00:05:47.137116 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:47.137163 kubelet[3138]: W0904 00:05:47.137125 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:47.137163 kubelet[3138]: E0904 00:05:47.137134 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:47.137513 kubelet[3138]: E0904 00:05:47.137440 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:47.137513 kubelet[3138]: W0904 00:05:47.137449 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:47.137513 kubelet[3138]: E0904 00:05:47.137460 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:47.138686 kubelet[3138]: E0904 00:05:47.138656 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:47.138842 kubelet[3138]: W0904 00:05:47.138672 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:47.138842 kubelet[3138]: E0904 00:05:47.138787 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 00:05:47.139067 kubelet[3138]: E0904 00:05:47.139030 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 00:05:47.139067 kubelet[3138]: W0904 00:05:47.139040 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 00:05:47.139067 kubelet[3138]: E0904 00:05:47.139053 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three-record FlexVolume error sequence (driver-call.go:262, driver-call.go:149, plugins.go:695) repeats continuously from 00:05:47.139 through 00:05:47.280; repeated occurrences omitted ...]
Sep 4 00:05:47.145319 containerd[1726]: time="2025-09-04T00:05:47.145283803Z" level=info msg="connecting to shim a95c664fac5beb557bcb2294ee6a041482431ed21a104544d69a92a0b0cee2d5" address="unix:///run/containerd/s/f6d6e182cf33b46d946e2e1557de8e96530275f02c8b4239887a5dc9a449311b" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:05:47.148852 kubelet[3138]: I0904 00:05:47.148839 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/03860e7a-926d-4187-ba43-ef35f1cd2768-varrun\") pod \"csi-node-driver-jftxm\" (UID: \"03860e7a-926d-4187-ba43-ef35f1cd2768\") " pod="calico-system/csi-node-driver-jftxm"
Sep 4 00:05:47.149227 kubelet[3138]: I0904 00:05:47.149216 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03860e7a-926d-4187-ba43-ef35f1cd2768-kubelet-dir\") pod \"csi-node-driver-jftxm\" (UID: \"03860e7a-926d-4187-ba43-ef35f1cd2768\") " pod="calico-system/csi-node-driver-jftxm"
Sep 4 00:05:47.151692 kubelet[3138]: I0904 00:05:47.151677 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/03860e7a-926d-4187-ba43-ef35f1cd2768-socket-dir\") pod \"csi-node-driver-jftxm\" (UID: \"03860e7a-926d-4187-ba43-ef35f1cd2768\") " pod="calico-system/csi-node-driver-jftxm"
Sep 4 00:05:47.153370 kubelet[3138]: I0904 00:05:47.152924 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jchhs\" (UniqueName: \"kubernetes.io/projected/03860e7a-926d-4187-ba43-ef35f1cd2768-kube-api-access-jchhs\") pod \"csi-node-driver-jftxm\" (UID: \"03860e7a-926d-4187-ba43-ef35f1cd2768\") " pod="calico-system/csi-node-driver-jftxm"
Sep 4 00:05:47.153370 kubelet[3138]: I0904 00:05:47.153073 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03860e7a-926d-4187-ba43-ef35f1cd2768-registration-dir\") pod \"csi-node-driver-jftxm\" (UID: \"03860e7a-926d-4187-ba43-ef35f1cd2768\") " pod="calico-system/csi-node-driver-jftxm"
Sep 4 00:05:47.197263 systemd[1]: Started cri-containerd-a95c664fac5beb557bcb2294ee6a041482431ed21a104544d69a92a0b0cee2d5.scope - libcontainer container a95c664fac5beb557bcb2294ee6a041482431ed21a104544d69a92a0b0cee2d5.
Sep 4 00:05:47.299334 containerd[1726]: time="2025-09-04T00:05:47.299292579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xxcjd,Uid:5c182e98-d94e-4952-aabb-bc6ed064a748,Namespace:calico-system,Attempt:0,} returns sandbox id \"a95c664fac5beb557bcb2294ee6a041482431ed21a104544d69a92a0b0cee2d5\""
Sep 4 00:05:48.113068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3387065025.mount: Deactivated successfully.
Sep 4 00:05:48.549654 kubelet[3138]: E0904 00:05:48.549560 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jftxm" podUID="03860e7a-926d-4187-ba43-ef35f1cd2768" Sep 4 00:05:48.938971 containerd[1726]: time="2025-09-04T00:05:48.938930917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:48.941395 containerd[1726]: time="2025-09-04T00:05:48.941324234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 4 00:05:48.944199 containerd[1726]: time="2025-09-04T00:05:48.944148727Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:48.950603 containerd[1726]: time="2025-09-04T00:05:48.950537526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:48.951010 containerd[1726]: time="2025-09-04T00:05:48.950911740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.044642349s" Sep 4 00:05:48.951010 containerd[1726]: time="2025-09-04T00:05:48.950938665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 4 00:05:48.952420 containerd[1726]: time="2025-09-04T00:05:48.952220424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 4 00:05:48.967029 containerd[1726]: time="2025-09-04T00:05:48.966998517Z" level=info msg="CreateContainer within sandbox \"9174bfe07d578b5125df156f4d19afaef5b38b4d5f7c82d5c1d296baa38a847f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 00:05:48.985645 containerd[1726]: time="2025-09-04T00:05:48.985619685Z" level=info msg="Container 5b918d47a80b966b6ba7e3fe85d9d3541002c9095d8b94b888cf644d95b5edf3: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:49.001359 containerd[1726]: time="2025-09-04T00:05:49.001282376Z" level=info msg="CreateContainer within sandbox \"9174bfe07d578b5125df156f4d19afaef5b38b4d5f7c82d5c1d296baa38a847f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5b918d47a80b966b6ba7e3fe85d9d3541002c9095d8b94b888cf644d95b5edf3\"" Sep 4 00:05:49.001740 containerd[1726]: time="2025-09-04T00:05:49.001723620Z" level=info msg="StartContainer for \"5b918d47a80b966b6ba7e3fe85d9d3541002c9095d8b94b888cf644d95b5edf3\"" Sep 4 00:05:49.003779 containerd[1726]: time="2025-09-04T00:05:49.003755126Z" level=info msg="connecting to shim 5b918d47a80b966b6ba7e3fe85d9d3541002c9095d8b94b888cf644d95b5edf3" address="unix:///run/containerd/s/a8e2768ec411f88cadc319756dfeb70bef3bbb9b90a12287f66ecc9b0ab0c976" protocol=ttrpc version=3 Sep 4 00:05:49.028709 systemd[1]: Started cri-containerd-5b918d47a80b966b6ba7e3fe85d9d3541002c9095d8b94b888cf644d95b5edf3.scope - libcontainer container 5b918d47a80b966b6ba7e3fe85d9d3541002c9095d8b94b888cf644d95b5edf3. 
Sep 4 00:05:49.088012 containerd[1726]: time="2025-09-04T00:05:49.087873081Z" level=info msg="StartContainer for \"5b918d47a80b966b6ba7e3fe85d9d3541002c9095d8b94b888cf644d95b5edf3\" returns successfully" Sep 4 00:05:49.623121 kubelet[3138]: I0904 00:05:49.623061 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cdfc6d97d-bk8kb" podStartSLOduration=1.577399244 podStartE2EDuration="3.623045213s" podCreationTimestamp="2025-09-04 00:05:46 +0000 UTC" firstStartedPulling="2025-09-04 00:05:46.905978876 +0000 UTC m=+18.483728535" lastFinishedPulling="2025-09-04 00:05:48.951624843 +0000 UTC m=+20.529374504" observedRunningTime="2025-09-04 00:05:49.622740674 +0000 UTC m=+21.200490360" watchObservedRunningTime="2025-09-04 00:05:49.623045213 +0000 UTC m=+21.200794876" Sep 4 00:05:49.659978 kubelet[3138]: E0904 00:05:49.659951 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.659978 kubelet[3138]: W0904 00:05:49.659970 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.659978 kubelet[3138]: E0904 00:05:49.659988 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.660229 kubelet[3138]: E0904 00:05:49.660103 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.660229 kubelet[3138]: W0904 00:05:49.660109 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.660229 kubelet[3138]: E0904 00:05:49.660116 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.660229 kubelet[3138]: E0904 00:05:49.660206 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.660229 kubelet[3138]: W0904 00:05:49.660211 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.660229 kubelet[3138]: E0904 00:05:49.660218 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.660435 kubelet[3138]: E0904 00:05:49.660349 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.660435 kubelet[3138]: W0904 00:05:49.660355 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.660435 kubelet[3138]: E0904 00:05:49.660363 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.660551 kubelet[3138]: E0904 00:05:49.660457 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.660551 kubelet[3138]: W0904 00:05:49.660462 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.660551 kubelet[3138]: E0904 00:05:49.660468 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.660676 kubelet[3138]: E0904 00:05:49.660583 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.660676 kubelet[3138]: W0904 00:05:49.660589 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.660676 kubelet[3138]: E0904 00:05:49.660596 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.660778 kubelet[3138]: E0904 00:05:49.660678 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.660778 kubelet[3138]: W0904 00:05:49.660684 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.660778 kubelet[3138]: E0904 00:05:49.660690 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.660778 kubelet[3138]: E0904 00:05:49.660771 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.660778 kubelet[3138]: W0904 00:05:49.660776 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.660947 kubelet[3138]: E0904 00:05:49.660782 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.660947 kubelet[3138]: E0904 00:05:49.660868 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.660947 kubelet[3138]: W0904 00:05:49.660873 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.660947 kubelet[3138]: E0904 00:05:49.660879 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.661088 kubelet[3138]: E0904 00:05:49.660959 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.661088 kubelet[3138]: W0904 00:05:49.660964 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.661088 kubelet[3138]: E0904 00:05:49.660969 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.661088 kubelet[3138]: E0904 00:05:49.661045 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.661088 kubelet[3138]: W0904 00:05:49.661049 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.661088 kubelet[3138]: E0904 00:05:49.661055 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.661288 kubelet[3138]: E0904 00:05:49.661140 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.661288 kubelet[3138]: W0904 00:05:49.661145 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.661288 kubelet[3138]: E0904 00:05:49.661151 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.661288 kubelet[3138]: E0904 00:05:49.661234 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.661288 kubelet[3138]: W0904 00:05:49.661238 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.661288 kubelet[3138]: E0904 00:05:49.661244 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.661490 kubelet[3138]: E0904 00:05:49.661353 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.661490 kubelet[3138]: W0904 00:05:49.661358 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.661490 kubelet[3138]: E0904 00:05:49.661365 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.661490 kubelet[3138]: E0904 00:05:49.661458 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.661490 kubelet[3138]: W0904 00:05:49.661463 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.661490 kubelet[3138]: E0904 00:05:49.661469 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.677892 kubelet[3138]: E0904 00:05:49.677865 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.677892 kubelet[3138]: W0904 00:05:49.677883 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.678039 kubelet[3138]: E0904 00:05:49.677902 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.678154 kubelet[3138]: E0904 00:05:49.678130 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.678154 kubelet[3138]: W0904 00:05:49.678141 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.678246 kubelet[3138]: E0904 00:05:49.678161 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.678311 kubelet[3138]: E0904 00:05:49.678291 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.678311 kubelet[3138]: W0904 00:05:49.678301 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.678311 kubelet[3138]: E0904 00:05:49.678310 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.678549 kubelet[3138]: E0904 00:05:49.678450 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.678549 kubelet[3138]: W0904 00:05:49.678459 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.678549 kubelet[3138]: E0904 00:05:49.678467 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.678679 kubelet[3138]: E0904 00:05:49.678649 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.678723 kubelet[3138]: W0904 00:05:49.678711 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.678763 kubelet[3138]: E0904 00:05:49.678732 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.678894 kubelet[3138]: E0904 00:05:49.678878 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.678894 kubelet[3138]: W0904 00:05:49.678887 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.678961 kubelet[3138]: E0904 00:05:49.678903 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.679026 kubelet[3138]: E0904 00:05:49.679016 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.679026 kubelet[3138]: W0904 00:05:49.679024 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.679080 kubelet[3138]: E0904 00:05:49.679033 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.679183 kubelet[3138]: E0904 00:05:49.679170 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.679183 kubelet[3138]: W0904 00:05:49.679180 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.679351 kubelet[3138]: E0904 00:05:49.679258 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.679351 kubelet[3138]: E0904 00:05:49.679282 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.679351 kubelet[3138]: W0904 00:05:49.679287 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.679440 kubelet[3138]: E0904 00:05:49.679368 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.679440 kubelet[3138]: W0904 00:05:49.679373 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.679440 kubelet[3138]: E0904 00:05:49.679382 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.679684 kubelet[3138]: E0904 00:05:49.679459 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.679684 kubelet[3138]: W0904 00:05:49.679464 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.679684 kubelet[3138]: E0904 00:05:49.679470 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.679684 kubelet[3138]: E0904 00:05:49.679587 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.679684 kubelet[3138]: W0904 00:05:49.679592 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.679684 kubelet[3138]: E0904 00:05:49.679598 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.679881 kubelet[3138]: E0904 00:05:49.679864 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.679977 kubelet[3138]: E0904 00:05:49.679965 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.680004 kubelet[3138]: W0904 00:05:49.679975 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.680004 kubelet[3138]: E0904 00:05:49.679994 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.680183 kubelet[3138]: E0904 00:05:49.680163 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.680183 kubelet[3138]: W0904 00:05:49.680181 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.680241 kubelet[3138]: E0904 00:05:49.680196 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.680328 kubelet[3138]: E0904 00:05:49.680303 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.680328 kubelet[3138]: W0904 00:05:49.680324 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.680383 kubelet[3138]: E0904 00:05:49.680331 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.680431 kubelet[3138]: E0904 00:05:49.680419 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.680431 kubelet[3138]: W0904 00:05:49.680428 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.680478 kubelet[3138]: E0904 00:05:49.680436 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:49.680605 kubelet[3138]: E0904 00:05:49.680595 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.680605 kubelet[3138]: W0904 00:05:49.680604 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.680663 kubelet[3138]: E0904 00:05:49.680612 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:05:49.680881 kubelet[3138]: E0904 00:05:49.680871 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:05:49.680919 kubelet[3138]: W0904 00:05:49.680881 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:05:49.680919 kubelet[3138]: E0904 00:05:49.680890 3138 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:05:50.184562 containerd[1726]: time="2025-09-04T00:05:50.184496397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:50.186717 containerd[1726]: time="2025-09-04T00:05:50.186606763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 4 00:05:50.189012 containerd[1726]: time="2025-09-04T00:05:50.188974303Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:50.192681 containerd[1726]: time="2025-09-04T00:05:50.192276663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:50.192681 containerd[1726]: time="2025-09-04T00:05:50.192586046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.240326863s" Sep 4 00:05:50.192681 containerd[1726]: time="2025-09-04T00:05:50.192612931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 4 00:05:50.194418 containerd[1726]: time="2025-09-04T00:05:50.194394582Z" level=info msg="CreateContainer within sandbox \"a95c664fac5beb557bcb2294ee6a041482431ed21a104544d69a92a0b0cee2d5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 00:05:50.210716 containerd[1726]: time="2025-09-04T00:05:50.210680968Z" level=info msg="Container f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:50.217724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75691061.mount: Deactivated successfully. 
Sep 4 00:05:50.230030 containerd[1726]: time="2025-09-04T00:05:50.230000320Z" level=info msg="CreateContainer within sandbox \"a95c664fac5beb557bcb2294ee6a041482431ed21a104544d69a92a0b0cee2d5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314\"" Sep 4 00:05:50.230384 containerd[1726]: time="2025-09-04T00:05:50.230366904Z" level=info msg="StartContainer for \"f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314\"" Sep 4 00:05:50.231993 containerd[1726]: time="2025-09-04T00:05:50.231964006Z" level=info msg="connecting to shim f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314" address="unix:///run/containerd/s/f6d6e182cf33b46d946e2e1557de8e96530275f02c8b4239887a5dc9a449311b" protocol=ttrpc version=3 Sep 4 00:05:50.252642 systemd[1]: Started cri-containerd-f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314.scope - libcontainer container f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314. Sep 4 00:05:50.290237 containerd[1726]: time="2025-09-04T00:05:50.290199502Z" level=info msg="StartContainer for \"f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314\" returns successfully" Sep 4 00:05:50.295746 systemd[1]: cri-containerd-f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314.scope: Deactivated successfully. 
Sep 4 00:05:50.298670 containerd[1726]: time="2025-09-04T00:05:50.298562119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314\" id:\"f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314\" pid:3823 exited_at:{seconds:1756944350 nanos:298217720}" Sep 4 00:05:50.298670 containerd[1726]: time="2025-09-04T00:05:50.298601526Z" level=info msg="received exit event container_id:\"f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314\" id:\"f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314\" pid:3823 exited_at:{seconds:1756944350 nanos:298217720}" Sep 4 00:05:50.316868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f255f9186591f1c5621ba78d3dca620075b8cbea54c39582462d59b3bdf88314-rootfs.mount: Deactivated successfully. Sep 4 00:05:50.549564 kubelet[3138]: E0904 00:05:50.549363 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jftxm" podUID="03860e7a-926d-4187-ba43-ef35f1cd2768" Sep 4 00:05:50.614523 kubelet[3138]: I0904 00:05:50.614130 3138 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:05:52.549377 kubelet[3138]: E0904 00:05:52.549264 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jftxm" podUID="03860e7a-926d-4187-ba43-ef35f1cd2768" Sep 4 00:05:53.434364 kubelet[3138]: I0904 00:05:53.434297 3138 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:05:53.621543 containerd[1726]: time="2025-09-04T00:05:53.621478474Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 4 00:05:54.550680 kubelet[3138]: E0904 00:05:54.549621 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jftxm" podUID="03860e7a-926d-4187-ba43-ef35f1cd2768" Sep 4 00:05:56.550240 kubelet[3138]: E0904 00:05:56.549218 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jftxm" podUID="03860e7a-926d-4187-ba43-ef35f1cd2768" Sep 4 00:05:58.550191 kubelet[3138]: E0904 00:05:58.550001 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jftxm" podUID="03860e7a-926d-4187-ba43-ef35f1cd2768" Sep 4 00:05:58.861369 containerd[1726]: time="2025-09-04T00:05:58.861267195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:58.863433 containerd[1726]: time="2025-09-04T00:05:58.863348180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 4 00:05:58.866762 containerd[1726]: time="2025-09-04T00:05:58.866041530Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:58.868952 containerd[1726]: time="2025-09-04T00:05:58.868924445Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:58.869344 containerd[1726]: time="2025-09-04T00:05:58.869323497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.247673724s" Sep 4 00:05:58.869424 containerd[1726]: time="2025-09-04T00:05:58.869412086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 4 00:05:58.872092 containerd[1726]: time="2025-09-04T00:05:58.872060305Z" level=info msg="CreateContainer within sandbox \"a95c664fac5beb557bcb2294ee6a041482431ed21a104544d69a92a0b0cee2d5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 00:05:58.893833 containerd[1726]: time="2025-09-04T00:05:58.893553980Z" level=info msg="Container cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:58.908226 containerd[1726]: time="2025-09-04T00:05:58.908185382Z" level=info msg="CreateContainer within sandbox \"a95c664fac5beb557bcb2294ee6a041482431ed21a104544d69a92a0b0cee2d5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de\"" Sep 4 00:05:58.908750 containerd[1726]: time="2025-09-04T00:05:58.908626786Z" level=info msg="StartContainer for \"cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de\"" Sep 4 00:05:58.910117 containerd[1726]: time="2025-09-04T00:05:58.910021607Z" level=info msg="connecting to shim 
cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de" address="unix:///run/containerd/s/f6d6e182cf33b46d946e2e1557de8e96530275f02c8b4239887a5dc9a449311b" protocol=ttrpc version=3 Sep 4 00:05:58.929654 systemd[1]: Started cri-containerd-cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de.scope - libcontainer container cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de. Sep 4 00:05:58.961734 containerd[1726]: time="2025-09-04T00:05:58.961709108Z" level=info msg="StartContainer for \"cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de\" returns successfully" Sep 4 00:06:00.549534 kubelet[3138]: E0904 00:06:00.548929 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jftxm" podUID="03860e7a-926d-4187-ba43-ef35f1cd2768" Sep 4 00:06:02.550069 kubelet[3138]: E0904 00:06:02.549013 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jftxm" podUID="03860e7a-926d-4187-ba43-ef35f1cd2768" Sep 4 00:06:04.549686 kubelet[3138]: E0904 00:06:04.549479 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jftxm" podUID="03860e7a-926d-4187-ba43-ef35f1cd2768" Sep 4 00:06:05.687946 containerd[1726]: time="2025-09-04T00:06:05.687903955Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no 
network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 00:06:05.690796 systemd[1]: cri-containerd-cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de.scope: Deactivated successfully. Sep 4 00:06:05.691064 systemd[1]: cri-containerd-cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de.scope: Consumed 416ms CPU time, 192.9M memory peak, 171.3M written to disk. Sep 4 00:06:05.691632 containerd[1726]: time="2025-09-04T00:06:05.691556599Z" level=info msg="received exit event container_id:\"cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de\" id:\"cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de\" pid:3885 exited_at:{seconds:1756944365 nanos:690596211}" Sep 4 00:06:05.692797 containerd[1726]: time="2025-09-04T00:06:05.692491949Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de\" id:\"cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de\" pid:3885 exited_at:{seconds:1756944365 nanos:690596211}" Sep 4 00:06:05.710466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf567a728ab33a709a747b7e04e0dc776232e796083022c0a6f2daee518989de-rootfs.mount: Deactivated successfully. Sep 4 00:06:05.748675 kubelet[3138]: I0904 00:06:05.748620 3138 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 00:06:05.785516 systemd[1]: Created slice kubepods-burstable-pod47a80b82_b86a_4d4b_ad4c_f063574e5b1b.slice - libcontainer container kubepods-burstable-pod47a80b82_b86a_4d4b_ad4c_f063574e5b1b.slice. Sep 4 00:06:05.798687 systemd[1]: Created slice kubepods-besteffort-pod7ab2ea62_19e9_4240_ab11_757b8e9b385f.slice - libcontainer container kubepods-besteffort-pod7ab2ea62_19e9_4240_ab11_757b8e9b385f.slice. 
Sep 4 00:06:05.800892 kubelet[3138]: W0904 00:06:05.800857 3138 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4372.1.0-n-8c113b52d8" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-n-8c113b52d8' and this object Sep 4 00:06:05.800983 kubelet[3138]: E0904 00:06:05.800900 3138 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4372.1.0-n-8c113b52d8\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-n-8c113b52d8' and this object" logger="UnhandledError" Sep 4 00:06:05.811307 systemd[1]: Created slice kubepods-burstable-pod4c718a0b_ae25_4026_ba0f_44a0152bb84f.slice - libcontainer container kubepods-burstable-pod4c718a0b_ae25_4026_ba0f_44a0152bb84f.slice. Sep 4 00:06:05.819083 systemd[1]: Created slice kubepods-besteffort-podd4387aa6_9e8e_43a7_b18d_22810d82aaec.slice - libcontainer container kubepods-besteffort-podd4387aa6_9e8e_43a7_b18d_22810d82aaec.slice. Sep 4 00:06:05.825712 systemd[1]: Created slice kubepods-besteffort-pod67931029_203e_4477_b4c8_3cc4250196b9.slice - libcontainer container kubepods-besteffort-pod67931029_203e_4477_b4c8_3cc4250196b9.slice. Sep 4 00:06:05.831559 systemd[1]: Created slice kubepods-besteffort-pod2dfaf5ee_4a2a_472f_a6b3_6c24915bacdb.slice - libcontainer container kubepods-besteffort-pod2dfaf5ee_4a2a_472f_a6b3_6c24915bacdb.slice. Sep 4 00:06:05.836492 systemd[1]: Created slice kubepods-besteffort-pod36198baa_4723_4c62_b490_51e9f3c3f348.slice - libcontainer container kubepods-besteffort-pod36198baa_4723_4c62_b490_51e9f3c3f348.slice. 
Sep 4 00:06:05.874006 kubelet[3138]: I0904 00:06:05.873980 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67931029-203e-4477-b4c8-3cc4250196b9-tigera-ca-bundle\") pod \"calico-kube-controllers-97c55758-c7zbb\" (UID: \"67931029-203e-4477-b4c8-3cc4250196b9\") " pod="calico-system/calico-kube-controllers-97c55758-c7zbb" Sep 4 00:06:05.874107 kubelet[3138]: I0904 00:06:05.874018 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4387aa6-9e8e-43a7-b18d-22810d82aaec-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-lbjpm\" (UID: \"d4387aa6-9e8e-43a7-b18d-22810d82aaec\") " pod="calico-system/goldmane-54d579b49d-lbjpm" Sep 4 00:06:05.874107 kubelet[3138]: I0904 00:06:05.874038 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47a80b82-b86a-4d4b-ad4c-f063574e5b1b-config-volume\") pod \"coredns-668d6bf9bc-mw4ns\" (UID: \"47a80b82-b86a-4d4b-ad4c-f063574e5b1b\") " pod="kube-system/coredns-668d6bf9bc-mw4ns" Sep 4 00:06:05.874107 kubelet[3138]: I0904 00:06:05.874057 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4387aa6-9e8e-43a7-b18d-22810d82aaec-config\") pod \"goldmane-54d579b49d-lbjpm\" (UID: \"d4387aa6-9e8e-43a7-b18d-22810d82aaec\") " pod="calico-system/goldmane-54d579b49d-lbjpm" Sep 4 00:06:05.874107 kubelet[3138]: I0904 00:06:05.874077 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb-calico-apiserver-certs\") pod \"calico-apiserver-6bf5df56b4-j6fg6\" (UID: \"2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb\") 
" pod="calico-apiserver/calico-apiserver-6bf5df56b4-j6fg6" Sep 4 00:06:05.874107 kubelet[3138]: I0904 00:06:05.874097 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdr55\" (UniqueName: \"kubernetes.io/projected/d4387aa6-9e8e-43a7-b18d-22810d82aaec-kube-api-access-sdr55\") pod \"goldmane-54d579b49d-lbjpm\" (UID: \"d4387aa6-9e8e-43a7-b18d-22810d82aaec\") " pod="calico-system/goldmane-54d579b49d-lbjpm" Sep 4 00:06:05.874350 kubelet[3138]: I0904 00:06:05.874118 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwcrb\" (UniqueName: \"kubernetes.io/projected/4c718a0b-ae25-4026-ba0f-44a0152bb84f-kube-api-access-gwcrb\") pod \"coredns-668d6bf9bc-2ljjn\" (UID: \"4c718a0b-ae25-4026-ba0f-44a0152bb84f\") " pod="kube-system/coredns-668d6bf9bc-2ljjn" Sep 4 00:06:05.874350 kubelet[3138]: I0904 00:06:05.874136 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ab2ea62-19e9-4240-ab11-757b8e9b385f-whisker-ca-bundle\") pod \"whisker-c4dc7c6ff-lq8lq\" (UID: \"7ab2ea62-19e9-4240-ab11-757b8e9b385f\") " pod="calico-system/whisker-c4dc7c6ff-lq8lq" Sep 4 00:06:05.874350 kubelet[3138]: I0904 00:06:05.874154 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d4387aa6-9e8e-43a7-b18d-22810d82aaec-goldmane-key-pair\") pod \"goldmane-54d579b49d-lbjpm\" (UID: \"d4387aa6-9e8e-43a7-b18d-22810d82aaec\") " pod="calico-system/goldmane-54d579b49d-lbjpm" Sep 4 00:06:05.874350 kubelet[3138]: I0904 00:06:05.874173 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ab2ea62-19e9-4240-ab11-757b8e9b385f-whisker-backend-key-pair\") pod 
\"whisker-c4dc7c6ff-lq8lq\" (UID: \"7ab2ea62-19e9-4240-ab11-757b8e9b385f\") " pod="calico-system/whisker-c4dc7c6ff-lq8lq" Sep 4 00:06:05.874350 kubelet[3138]: I0904 00:06:05.874195 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c718a0b-ae25-4026-ba0f-44a0152bb84f-config-volume\") pod \"coredns-668d6bf9bc-2ljjn\" (UID: \"4c718a0b-ae25-4026-ba0f-44a0152bb84f\") " pod="kube-system/coredns-668d6bf9bc-2ljjn" Sep 4 00:06:05.874582 kubelet[3138]: I0904 00:06:05.874214 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m85m\" (UniqueName: \"kubernetes.io/projected/2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb-kube-api-access-7m85m\") pod \"calico-apiserver-6bf5df56b4-j6fg6\" (UID: \"2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb\") " pod="calico-apiserver/calico-apiserver-6bf5df56b4-j6fg6" Sep 4 00:06:05.874582 kubelet[3138]: I0904 00:06:05.874233 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4nzh\" (UniqueName: \"kubernetes.io/projected/36198baa-4723-4c62-b490-51e9f3c3f348-kube-api-access-g4nzh\") pod \"calico-apiserver-6bf5df56b4-n5995\" (UID: \"36198baa-4723-4c62-b490-51e9f3c3f348\") " pod="calico-apiserver/calico-apiserver-6bf5df56b4-n5995" Sep 4 00:06:05.874582 kubelet[3138]: I0904 00:06:05.874251 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpw6m\" (UniqueName: \"kubernetes.io/projected/7ab2ea62-19e9-4240-ab11-757b8e9b385f-kube-api-access-rpw6m\") pod \"whisker-c4dc7c6ff-lq8lq\" (UID: \"7ab2ea62-19e9-4240-ab11-757b8e9b385f\") " pod="calico-system/whisker-c4dc7c6ff-lq8lq" Sep 4 00:06:05.874582 kubelet[3138]: I0904 00:06:05.874274 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw94f\" (UniqueName: 
\"kubernetes.io/projected/67931029-203e-4477-b4c8-3cc4250196b9-kube-api-access-mw94f\") pod \"calico-kube-controllers-97c55758-c7zbb\" (UID: \"67931029-203e-4477-b4c8-3cc4250196b9\") " pod="calico-system/calico-kube-controllers-97c55758-c7zbb" Sep 4 00:06:05.874582 kubelet[3138]: I0904 00:06:05.874293 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkdrf\" (UniqueName: \"kubernetes.io/projected/47a80b82-b86a-4d4b-ad4c-f063574e5b1b-kube-api-access-gkdrf\") pod \"coredns-668d6bf9bc-mw4ns\" (UID: \"47a80b82-b86a-4d4b-ad4c-f063574e5b1b\") " pod="kube-system/coredns-668d6bf9bc-mw4ns" Sep 4 00:06:05.875754 kubelet[3138]: I0904 00:06:05.874311 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36198baa-4723-4c62-b490-51e9f3c3f348-calico-apiserver-certs\") pod \"calico-apiserver-6bf5df56b4-n5995\" (UID: \"36198baa-4723-4c62-b490-51e9f3c3f348\") " pod="calico-apiserver/calico-apiserver-6bf5df56b4-n5995" Sep 4 00:06:06.093792 containerd[1726]: time="2025-09-04T00:06:06.093394126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mw4ns,Uid:47a80b82-b86a-4d4b-ad4c-f063574e5b1b,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:06.106426 containerd[1726]: time="2025-09-04T00:06:06.106395309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c4dc7c6ff-lq8lq,Uid:7ab2ea62-19e9-4240-ab11-757b8e9b385f,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:06.118363 containerd[1726]: time="2025-09-04T00:06:06.118330888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2ljjn,Uid:4c718a0b-ae25-4026-ba0f-44a0152bb84f,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:06.129126 containerd[1726]: time="2025-09-04T00:06:06.129082685Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-97c55758-c7zbb,Uid:67931029-203e-4477-b4c8-3cc4250196b9,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:06.135633 containerd[1726]: time="2025-09-04T00:06:06.135609057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf5df56b4-j6fg6,Uid:2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:06:06.141084 containerd[1726]: time="2025-09-04T00:06:06.141060802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf5df56b4-n5995,Uid:36198baa-4723-4c62-b490-51e9f3c3f348,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:06:06.556254 systemd[1]: Created slice kubepods-besteffort-pod03860e7a_926d_4187_ba43_ef35f1cd2768.slice - libcontainer container kubepods-besteffort-pod03860e7a_926d_4187_ba43_ef35f1cd2768.slice. Sep 4 00:06:06.558382 containerd[1726]: time="2025-09-04T00:06:06.558344759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jftxm,Uid:03860e7a-926d-4187-ba43-ef35f1cd2768,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:06.984962 kubelet[3138]: E0904 00:06:06.984923 3138 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Sep 4 00:06:06.985355 kubelet[3138]: E0904 00:06:06.985024 3138 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4387aa6-9e8e-43a7-b18d-22810d82aaec-config podName:d4387aa6-9e8e-43a7-b18d-22810d82aaec nodeName:}" failed. No retries permitted until 2025-09-04 00:06:07.484999635 +0000 UTC m=+39.062749287 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d4387aa6-9e8e-43a7-b18d-22810d82aaec-config") pod "goldmane-54d579b49d-lbjpm" (UID: "d4387aa6-9e8e-43a7-b18d-22810d82aaec") : failed to sync configmap cache: timed out waiting for the condition Sep 4 00:06:07.623740 containerd[1726]: time="2025-09-04T00:06:07.623697053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-lbjpm,Uid:d4387aa6-9e8e-43a7-b18d-22810d82aaec,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:11.327048 containerd[1726]: time="2025-09-04T00:06:11.326935024Z" level=error msg="Failed to destroy network for sandbox \"6bff2c508f8666d8c00cde9c4c31ae2da3343afb7fb1dbe56403555fdae943fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.332997 containerd[1726]: time="2025-09-04T00:06:11.332736574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mw4ns,Uid:47a80b82-b86a-4d4b-ad4c-f063574e5b1b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bff2c508f8666d8c00cde9c4c31ae2da3343afb7fb1dbe56403555fdae943fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.333308 kubelet[3138]: E0904 00:06:11.333078 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bff2c508f8666d8c00cde9c4c31ae2da3343afb7fb1dbe56403555fdae943fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.333308 kubelet[3138]: E0904 00:06:11.333161 3138 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bff2c508f8666d8c00cde9c4c31ae2da3343afb7fb1dbe56403555fdae943fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mw4ns" Sep 4 00:06:11.333308 kubelet[3138]: E0904 00:06:11.333193 3138 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bff2c508f8666d8c00cde9c4c31ae2da3343afb7fb1dbe56403555fdae943fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mw4ns" Sep 4 00:06:11.335159 kubelet[3138]: E0904 00:06:11.333325 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mw4ns_kube-system(47a80b82-b86a-4d4b-ad4c-f063574e5b1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mw4ns_kube-system(47a80b82-b86a-4d4b-ad4c-f063574e5b1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bff2c508f8666d8c00cde9c4c31ae2da3343afb7fb1dbe56403555fdae943fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mw4ns" podUID="47a80b82-b86a-4d4b-ad4c-f063574e5b1b" Sep 4 00:06:11.358281 containerd[1726]: time="2025-09-04T00:06:11.358232692Z" level=error msg="Failed to destroy network for sandbox \"2612e34751aa57b92e5008265829305254e015fef1adda49b8c9405132268d9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.362427 containerd[1726]: time="2025-09-04T00:06:11.362378917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c4dc7c6ff-lq8lq,Uid:7ab2ea62-19e9-4240-ab11-757b8e9b385f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2612e34751aa57b92e5008265829305254e015fef1adda49b8c9405132268d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.362665 kubelet[3138]: E0904 00:06:11.362605 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2612e34751aa57b92e5008265829305254e015fef1adda49b8c9405132268d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.362665 kubelet[3138]: E0904 00:06:11.362655 3138 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2612e34751aa57b92e5008265829305254e015fef1adda49b8c9405132268d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c4dc7c6ff-lq8lq" Sep 4 00:06:11.362926 kubelet[3138]: E0904 00:06:11.362852 3138 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2612e34751aa57b92e5008265829305254e015fef1adda49b8c9405132268d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c4dc7c6ff-lq8lq" Sep 4 00:06:11.362968 kubelet[3138]: E0904 00:06:11.362914 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c4dc7c6ff-lq8lq_calico-system(7ab2ea62-19e9-4240-ab11-757b8e9b385f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c4dc7c6ff-lq8lq_calico-system(7ab2ea62-19e9-4240-ab11-757b8e9b385f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2612e34751aa57b92e5008265829305254e015fef1adda49b8c9405132268d9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c4dc7c6ff-lq8lq" podUID="7ab2ea62-19e9-4240-ab11-757b8e9b385f" Sep 4 00:06:11.373290 containerd[1726]: time="2025-09-04T00:06:11.373253453Z" level=error msg="Failed to destroy network for sandbox \"2c3cc3581f5730363d776f51216821d5825fd4a88b84eaac3cd7249b128a754c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.376639 containerd[1726]: time="2025-09-04T00:06:11.376594676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-97c55758-c7zbb,Uid:67931029-203e-4477-b4c8-3cc4250196b9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3cc3581f5730363d776f51216821d5825fd4a88b84eaac3cd7249b128a754c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.376871 kubelet[3138]: E0904 00:06:11.376773 3138 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3cc3581f5730363d776f51216821d5825fd4a88b84eaac3cd7249b128a754c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.376871 kubelet[3138]: E0904 00:06:11.376845 3138 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3cc3581f5730363d776f51216821d5825fd4a88b84eaac3cd7249b128a754c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-97c55758-c7zbb" Sep 4 00:06:11.376871 kubelet[3138]: E0904 00:06:11.376866 3138 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3cc3581f5730363d776f51216821d5825fd4a88b84eaac3cd7249b128a754c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-97c55758-c7zbb" Sep 4 00:06:11.377097 kubelet[3138]: E0904 00:06:11.377034 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-97c55758-c7zbb_calico-system(67931029-203e-4477-b4c8-3cc4250196b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-97c55758-c7zbb_calico-system(67931029-203e-4477-b4c8-3cc4250196b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c3cc3581f5730363d776f51216821d5825fd4a88b84eaac3cd7249b128a754c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-97c55758-c7zbb" podUID="67931029-203e-4477-b4c8-3cc4250196b9" Sep 4 00:06:11.380127 containerd[1726]: time="2025-09-04T00:06:11.380095719Z" level=error msg="Failed to destroy network for sandbox \"ae4eab49dbc0889463611b916870f7a4087a8f92444e34dbc7c9f59287cfa4e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.383812 containerd[1726]: time="2025-09-04T00:06:11.383710383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf5df56b4-n5995,Uid:36198baa-4723-4c62-b490-51e9f3c3f348,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae4eab49dbc0889463611b916870f7a4087a8f92444e34dbc7c9f59287cfa4e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.383923 kubelet[3138]: E0904 00:06:11.383870 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae4eab49dbc0889463611b916870f7a4087a8f92444e34dbc7c9f59287cfa4e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.384060 kubelet[3138]: E0904 00:06:11.383919 3138 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae4eab49dbc0889463611b916870f7a4087a8f92444e34dbc7c9f59287cfa4e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf5df56b4-n5995" Sep 4 00:06:11.384060 kubelet[3138]: E0904 00:06:11.383938 3138 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae4eab49dbc0889463611b916870f7a4087a8f92444e34dbc7c9f59287cfa4e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf5df56b4-n5995" Sep 4 00:06:11.384060 kubelet[3138]: E0904 00:06:11.383978 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf5df56b4-n5995_calico-apiserver(36198baa-4723-4c62-b490-51e9f3c3f348)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf5df56b4-n5995_calico-apiserver(36198baa-4723-4c62-b490-51e9f3c3f348)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae4eab49dbc0889463611b916870f7a4087a8f92444e34dbc7c9f59287cfa4e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf5df56b4-n5995" podUID="36198baa-4723-4c62-b490-51e9f3c3f348" Sep 4 00:06:11.388448 containerd[1726]: time="2025-09-04T00:06:11.388359441Z" level=error msg="Failed to destroy network for sandbox \"49b800b39cb524b8d49583c613dcad69c12bc14f1ac17791fe3e522e1d9651a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.394110 containerd[1726]: time="2025-09-04T00:06:11.394067769Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-2ljjn,Uid:4c718a0b-ae25-4026-ba0f-44a0152bb84f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b800b39cb524b8d49583c613dcad69c12bc14f1ac17791fe3e522e1d9651a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.394398 kubelet[3138]: E0904 00:06:11.394350 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b800b39cb524b8d49583c613dcad69c12bc14f1ac17791fe3e522e1d9651a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.394454 kubelet[3138]: E0904 00:06:11.394405 3138 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b800b39cb524b8d49583c613dcad69c12bc14f1ac17791fe3e522e1d9651a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2ljjn" Sep 4 00:06:11.394454 kubelet[3138]: E0904 00:06:11.394422 3138 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b800b39cb524b8d49583c613dcad69c12bc14f1ac17791fe3e522e1d9651a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2ljjn" Sep 4 00:06:11.394579 kubelet[3138]: E0904 00:06:11.394462 3138 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2ljjn_kube-system(4c718a0b-ae25-4026-ba0f-44a0152bb84f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2ljjn_kube-system(4c718a0b-ae25-4026-ba0f-44a0152bb84f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49b800b39cb524b8d49583c613dcad69c12bc14f1ac17791fe3e522e1d9651a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2ljjn" podUID="4c718a0b-ae25-4026-ba0f-44a0152bb84f" Sep 4 00:06:11.395327 containerd[1726]: time="2025-09-04T00:06:11.395298797Z" level=error msg="Failed to destroy network for sandbox \"1a5917c856e3a7265c9070835c59d392bd5c725a71823c27f8e2674eec85d50b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.397404 containerd[1726]: time="2025-09-04T00:06:11.397361129Z" level=error msg="Failed to destroy network for sandbox \"d5ba221bda6ea28d5de43c3e0c2889328e7a6396f0abe43d09f1382906b50371\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.398545 containerd[1726]: time="2025-09-04T00:06:11.398473393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf5df56b4-j6fg6,Uid:2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5917c856e3a7265c9070835c59d392bd5c725a71823c27f8e2674eec85d50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.398906 kubelet[3138]: E0904 00:06:11.398841 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5917c856e3a7265c9070835c59d392bd5c725a71823c27f8e2674eec85d50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.399045 kubelet[3138]: E0904 00:06:11.398991 3138 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5917c856e3a7265c9070835c59d392bd5c725a71823c27f8e2674eec85d50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf5df56b4-j6fg6" Sep 4 00:06:11.399045 kubelet[3138]: E0904 00:06:11.399013 3138 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5917c856e3a7265c9070835c59d392bd5c725a71823c27f8e2674eec85d50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf5df56b4-j6fg6" Sep 4 00:06:11.399202 kubelet[3138]: E0904 00:06:11.399140 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf5df56b4-j6fg6_calico-apiserver(2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf5df56b4-j6fg6_calico-apiserver(2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1a5917c856e3a7265c9070835c59d392bd5c725a71823c27f8e2674eec85d50b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf5df56b4-j6fg6" podUID="2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb" Sep 4 00:06:11.401580 containerd[1726]: time="2025-09-04T00:06:11.401481923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jftxm,Uid:03860e7a-926d-4187-ba43-ef35f1cd2768,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ba221bda6ea28d5de43c3e0c2889328e7a6396f0abe43d09f1382906b50371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.402062 kubelet[3138]: E0904 00:06:11.401982 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ba221bda6ea28d5de43c3e0c2889328e7a6396f0abe43d09f1382906b50371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.402231 kubelet[3138]: E0904 00:06:11.402147 3138 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ba221bda6ea28d5de43c3e0c2889328e7a6396f0abe43d09f1382906b50371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jftxm" Sep 4 00:06:11.402231 kubelet[3138]: E0904 00:06:11.402173 3138 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"d5ba221bda6ea28d5de43c3e0c2889328e7a6396f0abe43d09f1382906b50371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jftxm" Sep 4 00:06:11.402410 kubelet[3138]: E0904 00:06:11.402331 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jftxm_calico-system(03860e7a-926d-4187-ba43-ef35f1cd2768)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jftxm_calico-system(03860e7a-926d-4187-ba43-ef35f1cd2768)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5ba221bda6ea28d5de43c3e0c2889328e7a6396f0abe43d09f1382906b50371\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jftxm" podUID="03860e7a-926d-4187-ba43-ef35f1cd2768" Sep 4 00:06:11.405836 containerd[1726]: time="2025-09-04T00:06:11.405809689Z" level=error msg="Failed to destroy network for sandbox \"7fd2842f45cdab59d453965f5305c83286975d894b72a98264efe8bbaf80d9f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.408563 containerd[1726]: time="2025-09-04T00:06:11.408529264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-lbjpm,Uid:d4387aa6-9e8e-43a7-b18d-22810d82aaec,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fd2842f45cdab59d453965f5305c83286975d894b72a98264efe8bbaf80d9f5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.408939 kubelet[3138]: E0904 00:06:11.408681 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fd2842f45cdab59d453965f5305c83286975d894b72a98264efe8bbaf80d9f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:11.408939 kubelet[3138]: E0904 00:06:11.408714 3138 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fd2842f45cdab59d453965f5305c83286975d894b72a98264efe8bbaf80d9f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-lbjpm" Sep 4 00:06:11.408939 kubelet[3138]: E0904 00:06:11.408728 3138 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fd2842f45cdab59d453965f5305c83286975d894b72a98264efe8bbaf80d9f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-lbjpm" Sep 4 00:06:11.409087 kubelet[3138]: E0904 00:06:11.408758 3138 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-lbjpm_calico-system(d4387aa6-9e8e-43a7-b18d-22810d82aaec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-lbjpm_calico-system(d4387aa6-9e8e-43a7-b18d-22810d82aaec)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"7fd2842f45cdab59d453965f5305c83286975d894b72a98264efe8bbaf80d9f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-lbjpm" podUID="d4387aa6-9e8e-43a7-b18d-22810d82aaec" Sep 4 00:06:11.655138 containerd[1726]: time="2025-09-04T00:06:11.655043679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 4 00:06:11.986473 systemd[1]: run-netns-cni\x2d66260319\x2d417d\x2d9cbb\x2dba38\x2dd0eae77b6e7a.mount: Deactivated successfully. Sep 4 00:06:11.986968 systemd[1]: run-netns-cni\x2dcd0599ab\x2db5e3\x2d454d\x2d76be\x2d6e0163235f9d.mount: Deactivated successfully. Sep 4 00:06:11.987075 systemd[1]: run-netns-cni\x2d33d3aa81\x2d7afc\x2dfec7\x2d4443\x2d5892676c20b9.mount: Deactivated successfully. Sep 4 00:06:11.987201 systemd[1]: run-netns-cni\x2d4ee71a11\x2df6cd\x2dcd37\x2d5b95\x2d26fa29bcbef7.mount: Deactivated successfully. Sep 4 00:06:11.987264 systemd[1]: run-netns-cni\x2d40ce18ee\x2dadee\x2dc930\x2d3454\x2d9af22950ef88.mount: Deactivated successfully. Sep 4 00:06:11.987333 systemd[1]: run-netns-cni\x2d9656eaf5\x2d6b51\x2d1799\x2d694e\x2d04a93b0aeb67.mount: Deactivated successfully. Sep 4 00:06:16.387579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3332619941.mount: Deactivated successfully. 
Sep 4 00:06:16.418268 containerd[1726]: time="2025-09-04T00:06:16.418222195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:16.420227 containerd[1726]: time="2025-09-04T00:06:16.420127154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 4 00:06:16.422817 containerd[1726]: time="2025-09-04T00:06:16.422793138Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:16.426061 containerd[1726]: time="2025-09-04T00:06:16.426015314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:16.426432 containerd[1726]: time="2025-09-04T00:06:16.426282258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 4.771187262s" Sep 4 00:06:16.426432 containerd[1726]: time="2025-09-04T00:06:16.426311731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 4 00:06:16.439156 containerd[1726]: time="2025-09-04T00:06:16.439129050Z" level=info msg="CreateContainer within sandbox \"a95c664fac5beb557bcb2294ee6a041482431ed21a104544d69a92a0b0cee2d5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 00:06:16.454713 containerd[1726]: time="2025-09-04T00:06:16.454683441Z" level=info msg="Container 
cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:16.475216 containerd[1726]: time="2025-09-04T00:06:16.475179071Z" level=info msg="CreateContainer within sandbox \"a95c664fac5beb557bcb2294ee6a041482431ed21a104544d69a92a0b0cee2d5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c\"" Sep 4 00:06:16.475963 containerd[1726]: time="2025-09-04T00:06:16.475913690Z" level=info msg="StartContainer for \"cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c\"" Sep 4 00:06:16.477517 containerd[1726]: time="2025-09-04T00:06:16.477439914Z" level=info msg="connecting to shim cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c" address="unix:///run/containerd/s/f6d6e182cf33b46d946e2e1557de8e96530275f02c8b4239887a5dc9a449311b" protocol=ttrpc version=3 Sep 4 00:06:16.498638 systemd[1]: Started cri-containerd-cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c.scope - libcontainer container cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c. 
Sep 4 00:06:16.529937 containerd[1726]: time="2025-09-04T00:06:16.529881273Z" level=info msg="StartContainer for \"cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c\" returns successfully" Sep 4 00:06:16.690854 kubelet[3138]: I0904 00:06:16.690748 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xxcjd" podStartSLOduration=1.5648521290000001 podStartE2EDuration="30.690732546s" podCreationTimestamp="2025-09-04 00:05:46 +0000 UTC" firstStartedPulling="2025-09-04 00:05:47.301154798 +0000 UTC m=+18.878904459" lastFinishedPulling="2025-09-04 00:06:16.427035208 +0000 UTC m=+48.004784876" observedRunningTime="2025-09-04 00:06:16.688465573 +0000 UTC m=+48.266215241" watchObservedRunningTime="2025-09-04 00:06:16.690732546 +0000 UTC m=+48.268482298" Sep 4 00:06:16.751567 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 00:06:16.751681 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 00:06:16.758600 containerd[1726]: time="2025-09-04T00:06:16.758553681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c\" id:\"01b08f0e3f364166e72797a3703237191d53625004704b6bf1686ecbe430875e\" pid:4199 exit_status:1 exited_at:{seconds:1756944376 nanos:757949137}" Sep 4 00:06:16.943574 kubelet[3138]: I0904 00:06:16.943187 3138 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ab2ea62-19e9-4240-ab11-757b8e9b385f-whisker-backend-key-pair\") pod \"7ab2ea62-19e9-4240-ab11-757b8e9b385f\" (UID: \"7ab2ea62-19e9-4240-ab11-757b8e9b385f\") " Sep 4 00:06:16.943574 kubelet[3138]: I0904 00:06:16.943230 3138 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ab2ea62-19e9-4240-ab11-757b8e9b385f-whisker-ca-bundle\") pod \"7ab2ea62-19e9-4240-ab11-757b8e9b385f\" (UID: \"7ab2ea62-19e9-4240-ab11-757b8e9b385f\") " Sep 4 00:06:16.943574 kubelet[3138]: I0904 00:06:16.943280 3138 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpw6m\" (UniqueName: \"kubernetes.io/projected/7ab2ea62-19e9-4240-ab11-757b8e9b385f-kube-api-access-rpw6m\") pod \"7ab2ea62-19e9-4240-ab11-757b8e9b385f\" (UID: \"7ab2ea62-19e9-4240-ab11-757b8e9b385f\") " Sep 4 00:06:16.946113 kubelet[3138]: I0904 00:06:16.945989 3138 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab2ea62-19e9-4240-ab11-757b8e9b385f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7ab2ea62-19e9-4240-ab11-757b8e9b385f" (UID: "7ab2ea62-19e9-4240-ab11-757b8e9b385f"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 00:06:16.948319 kubelet[3138]: I0904 00:06:16.948289 3138 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab2ea62-19e9-4240-ab11-757b8e9b385f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7ab2ea62-19e9-4240-ab11-757b8e9b385f" (UID: "7ab2ea62-19e9-4240-ab11-757b8e9b385f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 00:06:16.948597 kubelet[3138]: I0904 00:06:16.948566 3138 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab2ea62-19e9-4240-ab11-757b8e9b385f-kube-api-access-rpw6m" (OuterVolumeSpecName: "kube-api-access-rpw6m") pod "7ab2ea62-19e9-4240-ab11-757b8e9b385f" (UID: "7ab2ea62-19e9-4240-ab11-757b8e9b385f"). InnerVolumeSpecName "kube-api-access-rpw6m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:06:17.044075 kubelet[3138]: I0904 00:06:17.044049 3138 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ab2ea62-19e9-4240-ab11-757b8e9b385f-whisker-backend-key-pair\") on node \"ci-4372.1.0-n-8c113b52d8\" DevicePath \"\"" Sep 4 00:06:17.044075 kubelet[3138]: I0904 00:06:17.044077 3138 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ab2ea62-19e9-4240-ab11-757b8e9b385f-whisker-ca-bundle\") on node \"ci-4372.1.0-n-8c113b52d8\" DevicePath \"\"" Sep 4 00:06:17.044258 kubelet[3138]: I0904 00:06:17.044089 3138 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rpw6m\" (UniqueName: \"kubernetes.io/projected/7ab2ea62-19e9-4240-ab11-757b8e9b385f-kube-api-access-rpw6m\") on node \"ci-4372.1.0-n-8c113b52d8\" DevicePath \"\"" Sep 4 00:06:17.386051 systemd[1]: 
var-lib-kubelet-pods-7ab2ea62\x2d19e9\x2d4240\x2dab11\x2d757b8e9b385f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drpw6m.mount: Deactivated successfully. Sep 4 00:06:17.386171 systemd[1]: var-lib-kubelet-pods-7ab2ea62\x2d19e9\x2d4240\x2dab11\x2d757b8e9b385f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 4 00:06:17.676092 systemd[1]: Removed slice kubepods-besteffort-pod7ab2ea62_19e9_4240_ab11_757b8e9b385f.slice - libcontainer container kubepods-besteffort-pod7ab2ea62_19e9_4240_ab11_757b8e9b385f.slice. Sep 4 00:06:17.755324 systemd[1]: Created slice kubepods-besteffort-podc699aeaa_9fc5_4eed_9702_aa9028e07ac2.slice - libcontainer container kubepods-besteffort-podc699aeaa_9fc5_4eed_9702_aa9028e07ac2.slice. Sep 4 00:06:17.773265 containerd[1726]: time="2025-09-04T00:06:17.772801221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c\" id:\"d34077a01fc5654c9128a1869df4ce4c799428e07a0684b932423b0ab18d3c10\" pid:4248 exit_status:1 exited_at:{seconds:1756944377 nanos:772548444}" Sep 4 00:06:17.849145 kubelet[3138]: I0904 00:06:17.849099 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxp9z\" (UniqueName: \"kubernetes.io/projected/c699aeaa-9fc5-4eed-9702-aa9028e07ac2-kube-api-access-jxp9z\") pod \"whisker-54568dd578-m8hzf\" (UID: \"c699aeaa-9fc5-4eed-9702-aa9028e07ac2\") " pod="calico-system/whisker-54568dd578-m8hzf" Sep 4 00:06:17.849145 kubelet[3138]: I0904 00:06:17.849145 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c699aeaa-9fc5-4eed-9702-aa9028e07ac2-whisker-backend-key-pair\") pod \"whisker-54568dd578-m8hzf\" (UID: \"c699aeaa-9fc5-4eed-9702-aa9028e07ac2\") " pod="calico-system/whisker-54568dd578-m8hzf" Sep 4 00:06:17.849481 
kubelet[3138]: I0904 00:06:17.849162 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c699aeaa-9fc5-4eed-9702-aa9028e07ac2-whisker-ca-bundle\") pod \"whisker-54568dd578-m8hzf\" (UID: \"c699aeaa-9fc5-4eed-9702-aa9028e07ac2\") " pod="calico-system/whisker-54568dd578-m8hzf" Sep 4 00:06:18.061309 containerd[1726]: time="2025-09-04T00:06:18.061198958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54568dd578-m8hzf,Uid:c699aeaa-9fc5-4eed-9702-aa9028e07ac2,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:18.208073 systemd-networkd[1356]: cali86fd17f0171: Link UP Sep 4 00:06:18.209897 systemd-networkd[1356]: cali86fd17f0171: Gained carrier Sep 4 00:06:18.230761 containerd[1726]: 2025-09-04 00:06:18.089 [INFO][4262] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:06:18.230761 containerd[1726]: 2025-09-04 00:06:18.097 [INFO][4262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0 whisker-54568dd578- calico-system c699aeaa-9fc5-4eed-9702-aa9028e07ac2 929 0 2025-09-04 00:06:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54568dd578 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.1.0-n-8c113b52d8 whisker-54568dd578-m8hzf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali86fd17f0171 [] [] }} ContainerID="318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" Namespace="calico-system" Pod="whisker-54568dd578-m8hzf" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-" Sep 4 00:06:18.230761 containerd[1726]: 2025-09-04 00:06:18.097 [INFO][4262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" Namespace="calico-system" Pod="whisker-54568dd578-m8hzf" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0" Sep 4 00:06:18.230761 containerd[1726]: 2025-09-04 00:06:18.132 [INFO][4274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" HandleID="k8s-pod-network.318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" Workload="ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0" Sep 4 00:06:18.230996 containerd[1726]: 2025-09-04 00:06:18.132 [INFO][4274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" HandleID="k8s-pod-network.318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" Workload="ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-8c113b52d8", "pod":"whisker-54568dd578-m8hzf", "timestamp":"2025-09-04 00:06:18.132250147 +0000 UTC"}, Hostname:"ci-4372.1.0-n-8c113b52d8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:18.230996 containerd[1726]: 2025-09-04 00:06:18.132 [INFO][4274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:18.230996 containerd[1726]: 2025-09-04 00:06:18.132 [INFO][4274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:18.230996 containerd[1726]: 2025-09-04 00:06:18.132 [INFO][4274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-8c113b52d8' Sep 4 00:06:18.230996 containerd[1726]: 2025-09-04 00:06:18.140 [INFO][4274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:18.230996 containerd[1726]: 2025-09-04 00:06:18.145 [INFO][4274] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:18.230996 containerd[1726]: 2025-09-04 00:06:18.150 [INFO][4274] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:18.230996 containerd[1726]: 2025-09-04 00:06:18.151 [INFO][4274] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:18.230996 containerd[1726]: 2025-09-04 00:06:18.154 [INFO][4274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:18.231670 containerd[1726]: 2025-09-04 00:06:18.155 [INFO][4274] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:18.231670 containerd[1726]: 2025-09-04 00:06:18.156 [INFO][4274] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139 Sep 4 00:06:18.231670 containerd[1726]: 2025-09-04 00:06:18.166 [INFO][4274] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:18.231670 containerd[1726]: 2025-09-04 00:06:18.174 [INFO][4274] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.31.65/26] block=192.168.31.64/26 handle="k8s-pod-network.318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:18.231670 containerd[1726]: 2025-09-04 00:06:18.175 [INFO][4274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.65/26] handle="k8s-pod-network.318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:18.231670 containerd[1726]: 2025-09-04 00:06:18.176 [INFO][4274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:18.231670 containerd[1726]: 2025-09-04 00:06:18.176 [INFO][4274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.65/26] IPv6=[] ContainerID="318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" HandleID="k8s-pod-network.318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" Workload="ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0" Sep 4 00:06:18.231837 containerd[1726]: 2025-09-04 00:06:18.180 [INFO][4262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" Namespace="calico-system" Pod="whisker-54568dd578-m8hzf" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0", GenerateName:"whisker-54568dd578-", Namespace:"calico-system", SelfLink:"", UID:"c699aeaa-9fc5-4eed-9702-aa9028e07ac2", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54568dd578", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"", Pod:"whisker-54568dd578-m8hzf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali86fd17f0171", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:18.231837 containerd[1726]: 2025-09-04 00:06:18.180 [INFO][4262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.65/32] ContainerID="318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" Namespace="calico-system" Pod="whisker-54568dd578-m8hzf" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0" Sep 4 00:06:18.231934 containerd[1726]: 2025-09-04 00:06:18.180 [INFO][4262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86fd17f0171 ContainerID="318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" Namespace="calico-system" Pod="whisker-54568dd578-m8hzf" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0" Sep 4 00:06:18.231934 containerd[1726]: 2025-09-04 00:06:18.211 [INFO][4262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" Namespace="calico-system" Pod="whisker-54568dd578-m8hzf" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0" Sep 4 00:06:18.231984 containerd[1726]: 2025-09-04 00:06:18.213 [INFO][4262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" Namespace="calico-system" Pod="whisker-54568dd578-m8hzf" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0", GenerateName:"whisker-54568dd578-", Namespace:"calico-system", SelfLink:"", UID:"c699aeaa-9fc5-4eed-9702-aa9028e07ac2", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54568dd578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139", Pod:"whisker-54568dd578-m8hzf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.31.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali86fd17f0171", MAC:"d6:e4:c8:2a:5e:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:18.232045 containerd[1726]: 2025-09-04 00:06:18.226 [INFO][4262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" Namespace="calico-system" 
Pod="whisker-54568dd578-m8hzf" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-whisker--54568dd578--m8hzf-eth0" Sep 4 00:06:18.281258 containerd[1726]: time="2025-09-04T00:06:18.281177213Z" level=info msg="connecting to shim 318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139" address="unix:///run/containerd/s/059f7f20b2ce50433934adcb88631e66a9b472a5fe1320795b63f1a2dcd566f7" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:18.307679 systemd[1]: Started cri-containerd-318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139.scope - libcontainer container 318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139. Sep 4 00:06:18.399417 containerd[1726]: time="2025-09-04T00:06:18.399354163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54568dd578-m8hzf,Uid:c699aeaa-9fc5-4eed-9702-aa9028e07ac2,Namespace:calico-system,Attempt:0,} returns sandbox id \"318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139\"" Sep 4 00:06:18.403517 containerd[1726]: time="2025-09-04T00:06:18.403474466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 00:06:18.552060 kubelet[3138]: I0904 00:06:18.552015 3138 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab2ea62-19e9-4240-ab11-757b8e9b385f" path="/var/lib/kubelet/pods/7ab2ea62-19e9-4240-ab11-757b8e9b385f/volumes" Sep 4 00:06:18.825215 systemd-networkd[1356]: vxlan.calico: Link UP Sep 4 00:06:18.825226 systemd-networkd[1356]: vxlan.calico: Gained carrier Sep 4 00:06:19.600690 systemd-networkd[1356]: cali86fd17f0171: Gained IPv6LL Sep 4 00:06:20.070006 containerd[1726]: time="2025-09-04T00:06:20.069956333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:20.072200 containerd[1726]: time="2025-09-04T00:06:20.072163277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes 
read=4661291" Sep 4 00:06:20.074943 containerd[1726]: time="2025-09-04T00:06:20.074917921Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:20.078199 containerd[1726]: time="2025-09-04T00:06:20.078155781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:20.079275 containerd[1726]: time="2025-09-04T00:06:20.079252218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.675732395s" Sep 4 00:06:20.079388 containerd[1726]: time="2025-09-04T00:06:20.079373145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 00:06:20.083352 containerd[1726]: time="2025-09-04T00:06:20.083310297Z" level=info msg="CreateContainer within sandbox \"318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 00:06:20.097530 containerd[1726]: time="2025-09-04T00:06:20.095230075Z" level=info msg="Container 30b11b4d63d92000a0156841a67f1ce60d13b9cd8b3a2c0178a2099ad1dcd285: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:20.110019 containerd[1726]: time="2025-09-04T00:06:20.109990923Z" level=info msg="CreateContainer within sandbox \"318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"30b11b4d63d92000a0156841a67f1ce60d13b9cd8b3a2c0178a2099ad1dcd285\"" Sep 4 00:06:20.111570 containerd[1726]: time="2025-09-04T00:06:20.111541174Z" level=info msg="StartContainer for \"30b11b4d63d92000a0156841a67f1ce60d13b9cd8b3a2c0178a2099ad1dcd285\"" Sep 4 00:06:20.112573 containerd[1726]: time="2025-09-04T00:06:20.112473045Z" level=info msg="connecting to shim 30b11b4d63d92000a0156841a67f1ce60d13b9cd8b3a2c0178a2099ad1dcd285" address="unix:///run/containerd/s/059f7f20b2ce50433934adcb88631e66a9b472a5fe1320795b63f1a2dcd566f7" protocol=ttrpc version=3 Sep 4 00:06:20.130660 systemd[1]: Started cri-containerd-30b11b4d63d92000a0156841a67f1ce60d13b9cd8b3a2c0178a2099ad1dcd285.scope - libcontainer container 30b11b4d63d92000a0156841a67f1ce60d13b9cd8b3a2c0178a2099ad1dcd285. Sep 4 00:06:20.175565 containerd[1726]: time="2025-09-04T00:06:20.175527548Z" level=info msg="StartContainer for \"30b11b4d63d92000a0156841a67f1ce60d13b9cd8b3a2c0178a2099ad1dcd285\" returns successfully" Sep 4 00:06:20.177523 containerd[1726]: time="2025-09-04T00:06:20.177436970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 4 00:06:20.752735 systemd-networkd[1356]: vxlan.calico: Gained IPv6LL Sep 4 00:06:21.952923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633475892.mount: Deactivated successfully. 
Sep 4 00:06:21.991327 containerd[1726]: time="2025-09-04T00:06:21.991283821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:21.993342 containerd[1726]: time="2025-09-04T00:06:21.993228664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 4 00:06:21.995736 containerd[1726]: time="2025-09-04T00:06:21.995708120Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:21.998907 containerd[1726]: time="2025-09-04T00:06:21.998856913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:21.999378 containerd[1726]: time="2025-09-04T00:06:21.999208720Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 1.821734227s" Sep 4 00:06:21.999378 containerd[1726]: time="2025-09-04T00:06:21.999235838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 4 00:06:22.002111 containerd[1726]: time="2025-09-04T00:06:22.002075532Z" level=info msg="CreateContainer within sandbox \"318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 4 00:06:22.017667 
containerd[1726]: time="2025-09-04T00:06:22.017631177Z" level=info msg="Container 4ed7d1c355b8b68efb4dcd07110753f3bd441920cdf18a425f0fe011311fa415: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:22.044231 containerd[1726]: time="2025-09-04T00:06:22.044201817Z" level=info msg="CreateContainer within sandbox \"318d1682f3ca403308cb5367b09c760dfab4e843f6f5e392eb57b71704be8139\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4ed7d1c355b8b68efb4dcd07110753f3bd441920cdf18a425f0fe011311fa415\"" Sep 4 00:06:22.044883 containerd[1726]: time="2025-09-04T00:06:22.044861352Z" level=info msg="StartContainer for \"4ed7d1c355b8b68efb4dcd07110753f3bd441920cdf18a425f0fe011311fa415\"" Sep 4 00:06:22.046079 containerd[1726]: time="2025-09-04T00:06:22.046051689Z" level=info msg="connecting to shim 4ed7d1c355b8b68efb4dcd07110753f3bd441920cdf18a425f0fe011311fa415" address="unix:///run/containerd/s/059f7f20b2ce50433934adcb88631e66a9b472a5fe1320795b63f1a2dcd566f7" protocol=ttrpc version=3 Sep 4 00:06:22.070647 systemd[1]: Started cri-containerd-4ed7d1c355b8b68efb4dcd07110753f3bd441920cdf18a425f0fe011311fa415.scope - libcontainer container 4ed7d1c355b8b68efb4dcd07110753f3bd441920cdf18a425f0fe011311fa415. 
Sep 4 00:06:22.116286 containerd[1726]: time="2025-09-04T00:06:22.116249531Z" level=info msg="StartContainer for \"4ed7d1c355b8b68efb4dcd07110753f3bd441920cdf18a425f0fe011311fa415\" returns successfully" Sep 4 00:06:22.550231 containerd[1726]: time="2025-09-04T00:06:22.549800423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mw4ns,Uid:47a80b82-b86a-4d4b-ad4c-f063574e5b1b,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:22.639783 systemd-networkd[1356]: cali743e157f90f: Link UP Sep 4 00:06:22.641427 systemd-networkd[1356]: cali743e157f90f: Gained carrier Sep 4 00:06:22.656836 containerd[1726]: 2025-09-04 00:06:22.585 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0 coredns-668d6bf9bc- kube-system 47a80b82-b86a-4d4b-ad4c-f063574e5b1b 844 0 2025-09-04 00:05:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-n-8c113b52d8 coredns-668d6bf9bc-mw4ns eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali743e157f90f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-mw4ns" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-" Sep 4 00:06:22.656836 containerd[1726]: 2025-09-04 00:06:22.585 [INFO][4607] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-mw4ns" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0" Sep 4 00:06:22.656836 containerd[1726]: 2025-09-04 00:06:22.607 [INFO][4618] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" HandleID="k8s-pod-network.d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" Workload="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0" Sep 4 00:06:22.657043 containerd[1726]: 2025-09-04 00:06:22.607 [INFO][4618] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" HandleID="k8s-pod-network.d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" Workload="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-n-8c113b52d8", "pod":"coredns-668d6bf9bc-mw4ns", "timestamp":"2025-09-04 00:06:22.607675047 +0000 UTC"}, Hostname:"ci-4372.1.0-n-8c113b52d8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:22.657043 containerd[1726]: 2025-09-04 00:06:22.607 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:22.657043 containerd[1726]: 2025-09-04 00:06:22.607 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:22.657043 containerd[1726]: 2025-09-04 00:06:22.607 [INFO][4618] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-8c113b52d8' Sep 4 00:06:22.657043 containerd[1726]: 2025-09-04 00:06:22.612 [INFO][4618] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:22.657043 containerd[1726]: 2025-09-04 00:06:22.616 [INFO][4618] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:22.657043 containerd[1726]: 2025-09-04 00:06:22.619 [INFO][4618] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:22.657043 containerd[1726]: 2025-09-04 00:06:22.621 [INFO][4618] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:22.657043 containerd[1726]: 2025-09-04 00:06:22.622 [INFO][4618] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:22.657476 containerd[1726]: 2025-09-04 00:06:22.622 [INFO][4618] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:22.657476 containerd[1726]: 2025-09-04 00:06:22.623 [INFO][4618] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8 Sep 4 00:06:22.657476 containerd[1726]: 2025-09-04 00:06:22.629 [INFO][4618] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:22.657476 containerd[1726]: 2025-09-04 00:06:22.634 [INFO][4618] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.31.66/26] block=192.168.31.64/26 handle="k8s-pod-network.d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:22.657476 containerd[1726]: 2025-09-04 00:06:22.634 [INFO][4618] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.66/26] handle="k8s-pod-network.d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:22.657476 containerd[1726]: 2025-09-04 00:06:22.634 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:22.657476 containerd[1726]: 2025-09-04 00:06:22.634 [INFO][4618] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.66/26] IPv6=[] ContainerID="d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" HandleID="k8s-pod-network.d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" Workload="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0" Sep 4 00:06:22.657833 containerd[1726]: 2025-09-04 00:06:22.636 [INFO][4607] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-mw4ns" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"47a80b82-b86a-4d4b-ad4c-f063574e5b1b", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"", Pod:"coredns-668d6bf9bc-mw4ns", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali743e157f90f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:22.657833 containerd[1726]: 2025-09-04 00:06:22.636 [INFO][4607] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.66/32] ContainerID="d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-mw4ns" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0" Sep 4 00:06:22.657833 containerd[1726]: 2025-09-04 00:06:22.636 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali743e157f90f ContainerID="d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-mw4ns" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0" Sep 4 00:06:22.657833 containerd[1726]: 2025-09-04 00:06:22.640 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-mw4ns" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0" Sep 4 00:06:22.657833 containerd[1726]: 2025-09-04 00:06:22.640 [INFO][4607] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-mw4ns" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"47a80b82-b86a-4d4b-ad4c-f063574e5b1b", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8", Pod:"coredns-668d6bf9bc-mw4ns", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali743e157f90f", MAC:"f2:f6:b5:2c:8d:ca", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:22.657833 containerd[1726]: 2025-09-04 00:06:22.650 [INFO][4607] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-mw4ns" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--mw4ns-eth0" Sep 4 00:06:22.698526 containerd[1726]: time="2025-09-04T00:06:22.698397732Z" level=info msg="connecting to shim d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8" address="unix:///run/containerd/s/59d44b88319a70daad43abae1127f7ae6f74421af74914298fa33894e4673273" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:22.703695 kubelet[3138]: I0904 00:06:22.703575 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54568dd578-m8hzf" podStartSLOduration=2.106477827 podStartE2EDuration="5.703555805s" podCreationTimestamp="2025-09-04 00:06:17 +0000 UTC" firstStartedPulling="2025-09-04 00:06:18.402903196 +0000 UTC m=+49.980652864" lastFinishedPulling="2025-09-04 00:06:21.999981168 +0000 UTC m=+53.577730842" observedRunningTime="2025-09-04 00:06:22.702461436 +0000 UTC m=+54.280211103" watchObservedRunningTime="2025-09-04 00:06:22.703555805 +0000 UTC m=+54.281305486" Sep 4 00:06:22.726685 systemd[1]: Started cri-containerd-d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8.scope - libcontainer container 
d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8. Sep 4 00:06:22.773297 containerd[1726]: time="2025-09-04T00:06:22.773271131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mw4ns,Uid:47a80b82-b86a-4d4b-ad4c-f063574e5b1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8\"" Sep 4 00:06:22.775682 containerd[1726]: time="2025-09-04T00:06:22.775656950Z" level=info msg="CreateContainer within sandbox \"d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:06:22.794236 containerd[1726]: time="2025-09-04T00:06:22.793376316Z" level=info msg="Container 5ad790fb8e93abae33b39d449afbf072ec67a282b381fc34817f6325d3736523: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:22.811029 containerd[1726]: time="2025-09-04T00:06:22.810856023Z" level=info msg="CreateContainer within sandbox \"d3932735488f7871d5d3f10227cc47b84ef05350d46ce2752539d2c2b70f45b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ad790fb8e93abae33b39d449afbf072ec67a282b381fc34817f6325d3736523\"" Sep 4 00:06:22.812057 containerd[1726]: time="2025-09-04T00:06:22.811977443Z" level=info msg="StartContainer for \"5ad790fb8e93abae33b39d449afbf072ec67a282b381fc34817f6325d3736523\"" Sep 4 00:06:22.813913 containerd[1726]: time="2025-09-04T00:06:22.813880267Z" level=info msg="connecting to shim 5ad790fb8e93abae33b39d449afbf072ec67a282b381fc34817f6325d3736523" address="unix:///run/containerd/s/59d44b88319a70daad43abae1127f7ae6f74421af74914298fa33894e4673273" protocol=ttrpc version=3 Sep 4 00:06:22.834652 systemd[1]: Started cri-containerd-5ad790fb8e93abae33b39d449afbf072ec67a282b381fc34817f6325d3736523.scope - libcontainer container 5ad790fb8e93abae33b39d449afbf072ec67a282b381fc34817f6325d3736523. 
Sep 4 00:06:22.862426 containerd[1726]: time="2025-09-04T00:06:22.862394837Z" level=info msg="StartContainer for \"5ad790fb8e93abae33b39d449afbf072ec67a282b381fc34817f6325d3736523\" returns successfully" Sep 4 00:06:23.699488 kubelet[3138]: I0904 00:06:23.698492 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mw4ns" podStartSLOduration=49.698474251 podStartE2EDuration="49.698474251s" podCreationTimestamp="2025-09-04 00:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:23.698316249 +0000 UTC m=+55.276065945" watchObservedRunningTime="2025-09-04 00:06:23.698474251 +0000 UTC m=+55.276223921" Sep 4 00:06:23.824761 systemd-networkd[1356]: cali743e157f90f: Gained IPv6LL Sep 4 00:06:24.550541 containerd[1726]: time="2025-09-04T00:06:24.550229430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf5df56b4-j6fg6,Uid:2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:06:24.550919 containerd[1726]: time="2025-09-04T00:06:24.550728120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-lbjpm,Uid:d4387aa6-9e8e-43a7-b18d-22810d82aaec,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:24.551056 containerd[1726]: time="2025-09-04T00:06:24.551008781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2ljjn,Uid:4c718a0b-ae25-4026-ba0f-44a0152bb84f,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:24.551244 containerd[1726]: time="2025-09-04T00:06:24.551161640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-97c55758-c7zbb,Uid:67931029-203e-4477-b4c8-3cc4250196b9,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:24.768684 systemd-networkd[1356]: cali0621f7cdb68: Link UP Sep 4 00:06:24.771069 systemd-networkd[1356]: cali0621f7cdb68: Gained carrier Sep 4 00:06:24.788540 
containerd[1726]: 2025-09-04 00:06:24.626 [INFO][4734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0 goldmane-54d579b49d- calico-system d4387aa6-9e8e-43a7-b18d-22810d82aaec 851 0 2025-09-04 00:05:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.1.0-n-8c113b52d8 goldmane-54d579b49d-lbjpm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0621f7cdb68 [] [] }} ContainerID="353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" Namespace="calico-system" Pod="goldmane-54d579b49d-lbjpm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-" Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.626 [INFO][4734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" Namespace="calico-system" Pod="goldmane-54d579b49d-lbjpm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0" Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.697 [INFO][4768] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" HandleID="k8s-pod-network.353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" Workload="ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0" Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.699 [INFO][4768] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" HandleID="k8s-pod-network.353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" 
Workload="ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000369940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-8c113b52d8", "pod":"goldmane-54d579b49d-lbjpm", "timestamp":"2025-09-04 00:06:24.697372577 +0000 UTC"}, Hostname:"ci-4372.1.0-n-8c113b52d8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.699 [INFO][4768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.699 [INFO][4768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.699 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-8c113b52d8' Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.710 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.728 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.733 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.738 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.740 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.788540 
containerd[1726]: 2025-09-04 00:06:24.740 [INFO][4768] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.742 [INFO][4768] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61 Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.745 [INFO][4768] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.754 [INFO][4768] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.67/26] block=192.168.31.64/26 handle="k8s-pod-network.353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.754 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.67/26] handle="k8s-pod-network.353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.754 [INFO][4768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:06:24.788540 containerd[1726]: 2025-09-04 00:06:24.754 [INFO][4768] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.67/26] IPv6=[] ContainerID="353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" HandleID="k8s-pod-network.353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" Workload="ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0" Sep 4 00:06:24.789577 containerd[1726]: 2025-09-04 00:06:24.759 [INFO][4734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" Namespace="calico-system" Pod="goldmane-54d579b49d-lbjpm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"d4387aa6-9e8e-43a7-b18d-22810d82aaec", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"", Pod:"goldmane-54d579b49d-lbjpm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali0621f7cdb68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:24.789577 containerd[1726]: 2025-09-04 00:06:24.760 [INFO][4734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.67/32] ContainerID="353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" Namespace="calico-system" Pod="goldmane-54d579b49d-lbjpm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0" Sep 4 00:06:24.789577 containerd[1726]: 2025-09-04 00:06:24.760 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0621f7cdb68 ContainerID="353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" Namespace="calico-system" Pod="goldmane-54d579b49d-lbjpm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0" Sep 4 00:06:24.789577 containerd[1726]: 2025-09-04 00:06:24.770 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" Namespace="calico-system" Pod="goldmane-54d579b49d-lbjpm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0" Sep 4 00:06:24.789577 containerd[1726]: 2025-09-04 00:06:24.772 [INFO][4734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" Namespace="calico-system" Pod="goldmane-54d579b49d-lbjpm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", 
UID:"d4387aa6-9e8e-43a7-b18d-22810d82aaec", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61", Pod:"goldmane-54d579b49d-lbjpm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0621f7cdb68", MAC:"a6:1a:14:36:2d:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:24.789577 containerd[1726]: 2025-09-04 00:06:24.786 [INFO][4734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" Namespace="calico-system" Pod="goldmane-54d579b49d-lbjpm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-goldmane--54d579b49d--lbjpm-eth0" Sep 4 00:06:24.837490 containerd[1726]: time="2025-09-04T00:06:24.837378730Z" level=info msg="connecting to shim 353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61" address="unix:///run/containerd/s/f35017d60105806965e1e4a2d397bd7d6182eca15f4a8c104f9ba5582b3831f4" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:24.862829 systemd[1]: Started 
cri-containerd-353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61.scope - libcontainer container 353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61. Sep 4 00:06:24.867963 systemd-networkd[1356]: cali1930b382aa0: Link UP Sep 4 00:06:24.871831 systemd-networkd[1356]: cali1930b382aa0: Gained carrier Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.648 [INFO][4725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0 calico-apiserver-6bf5df56b4- calico-apiserver 2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb 854 0 2025-09-04 00:05:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bf5df56b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-n-8c113b52d8 calico-apiserver-6bf5df56b4-j6fg6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1930b382aa0 [] [] }} ContainerID="3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-j6fg6" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.648 [INFO][4725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-j6fg6" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.700 [INFO][4777] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" 
HandleID="k8s-pod-network.3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" Workload="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.701 [INFO][4777] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" HandleID="k8s-pod-network.3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" Workload="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-n-8c113b52d8", "pod":"calico-apiserver-6bf5df56b4-j6fg6", "timestamp":"2025-09-04 00:06:24.699139235 +0000 UTC"}, Hostname:"ci-4372.1.0-n-8c113b52d8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.702 [INFO][4777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.755 [INFO][4777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.755 [INFO][4777] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-8c113b52d8' Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.809 [INFO][4777] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.827 [INFO][4777] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.836 [INFO][4777] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.839 [INFO][4777] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.841 [INFO][4777] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.841 [INFO][4777] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.843 [INFO][4777] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285 Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.850 [INFO][4777] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.858 [INFO][4777] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.31.68/26] block=192.168.31.64/26 handle="k8s-pod-network.3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.858 [INFO][4777] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.68/26] handle="k8s-pod-network.3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.859 [INFO][4777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:24.891145 containerd[1726]: 2025-09-04 00:06:24.859 [INFO][4777] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.68/26] IPv6=[] ContainerID="3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" HandleID="k8s-pod-network.3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" Workload="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0" Sep 4 00:06:24.892131 containerd[1726]: 2025-09-04 00:06:24.862 [INFO][4725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-j6fg6" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0", GenerateName:"calico-apiserver-6bf5df56b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6bf5df56b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"", Pod:"calico-apiserver-6bf5df56b4-j6fg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1930b382aa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:24.892131 containerd[1726]: 2025-09-04 00:06:24.863 [INFO][4725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.68/32] ContainerID="3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-j6fg6" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0" Sep 4 00:06:24.892131 containerd[1726]: 2025-09-04 00:06:24.863 [INFO][4725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1930b382aa0 ContainerID="3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-j6fg6" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0" Sep 4 00:06:24.892131 containerd[1726]: 2025-09-04 00:06:24.872 [INFO][4725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-j6fg6" 
WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0" Sep 4 00:06:24.892131 containerd[1726]: 2025-09-04 00:06:24.872 [INFO][4725] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-j6fg6" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0", GenerateName:"calico-apiserver-6bf5df56b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf5df56b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285", Pod:"calico-apiserver-6bf5df56b4-j6fg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1930b382aa0", MAC:"4a:c3:7d:57:3a:25", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:24.892131 containerd[1726]: 2025-09-04 00:06:24.887 [INFO][4725] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-j6fg6" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--j6fg6-eth0" Sep 4 00:06:24.935838 containerd[1726]: time="2025-09-04T00:06:24.935724833Z" level=info msg="connecting to shim 3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285" address="unix:///run/containerd/s/afbb5d37ba76ced8820bacdc5d461b92984a56ff495dcc8417c6b38e1d2b3c97" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:24.950533 containerd[1726]: time="2025-09-04T00:06:24.949388819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-lbjpm,Uid:d4387aa6-9e8e-43a7-b18d-22810d82aaec,Namespace:calico-system,Attempt:0,} returns sandbox id \"353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61\"" Sep 4 00:06:24.953980 containerd[1726]: time="2025-09-04T00:06:24.953663998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 4 00:06:24.978424 systemd[1]: Started cri-containerd-3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285.scope - libcontainer container 3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285. 
Sep 4 00:06:24.983180 systemd-networkd[1356]: cali644b417cefa: Link UP Sep 4 00:06:24.984833 systemd-networkd[1356]: cali644b417cefa: Gained carrier Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.650 [INFO][4740] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0 coredns-668d6bf9bc- kube-system 4c718a0b-ae25-4026-ba0f-44a0152bb84f 853 0 2025-09-04 00:05:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-n-8c113b52d8 coredns-668d6bf9bc-2ljjn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali644b417cefa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ljjn" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.650 [INFO][4740] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ljjn" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.720 [INFO][4775] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" HandleID="k8s-pod-network.dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" Workload="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.720 [INFO][4775] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" HandleID="k8s-pod-network.dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" Workload="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-n-8c113b52d8", "pod":"coredns-668d6bf9bc-2ljjn", "timestamp":"2025-09-04 00:06:24.71812345 +0000 UTC"}, Hostname:"ci-4372.1.0-n-8c113b52d8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.720 [INFO][4775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.859 [INFO][4775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.859 [INFO][4775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-8c113b52d8' Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.910 [INFO][4775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.930 [INFO][4775] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.939 [INFO][4775] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.942 [INFO][4775] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.945 [INFO][4775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.945 [INFO][4775] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.946 [INFO][4775] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9 Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.953 [INFO][4775] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.965 [INFO][4775] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.31.69/26] block=192.168.31.64/26 handle="k8s-pod-network.dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.965 [INFO][4775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.69/26] handle="k8s-pod-network.dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.965 [INFO][4775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:25.001668 containerd[1726]: 2025-09-04 00:06:24.965 [INFO][4775] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.69/26] IPv6=[] ContainerID="dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" HandleID="k8s-pod-network.dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" Workload="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0" Sep 4 00:06:25.002115 containerd[1726]: 2025-09-04 00:06:24.975 [INFO][4740] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ljjn" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4c718a0b-ae25-4026-ba0f-44a0152bb84f", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"", Pod:"coredns-668d6bf9bc-2ljjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali644b417cefa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:25.002115 containerd[1726]: 2025-09-04 00:06:24.975 [INFO][4740] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.69/32] ContainerID="dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ljjn" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0" Sep 4 00:06:25.002115 containerd[1726]: 2025-09-04 00:06:24.975 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali644b417cefa ContainerID="dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ljjn" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0" Sep 4 00:06:25.002115 containerd[1726]: 2025-09-04 00:06:24.983 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ljjn" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0" Sep 4 00:06:25.002115 containerd[1726]: 2025-09-04 00:06:24.984 [INFO][4740] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ljjn" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4c718a0b-ae25-4026-ba0f-44a0152bb84f", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9", Pod:"coredns-668d6bf9bc-2ljjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali644b417cefa", MAC:"da:d6:f9:6d:f0:5e", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:25.002115 containerd[1726]: 2025-09-04 00:06:24.999 [INFO][4740] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-2ljjn" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-coredns--668d6bf9bc--2ljjn-eth0" Sep 4 00:06:25.052976 containerd[1726]: time="2025-09-04T00:06:25.052893763Z" level=info msg="connecting to shim dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9" address="unix:///run/containerd/s/12afe115a61a7360afa040864275e18fb4bcb57440b80930ed6695059bff79dc" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:25.084733 systemd-networkd[1356]: caliea24cbb7629: Link UP Sep 4 00:06:25.084954 systemd-networkd[1356]: caliea24cbb7629: Gained carrier Sep 4 00:06:25.102969 containerd[1726]: time="2025-09-04T00:06:25.102834734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf5df56b4-j6fg6,Uid:2dfaf5ee-4a2a-472f-a6b3-6c24915bacdb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285\"" Sep 4 00:06:25.111662 systemd[1]: Started cri-containerd-dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9.scope - libcontainer container dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9. 
Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:24.670 [INFO][4753] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0 calico-kube-controllers-97c55758- calico-system 67931029-203e-4477-b4c8-3cc4250196b9 852 0 2025-09-04 00:05:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:97c55758 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.1.0-n-8c113b52d8 calico-kube-controllers-97c55758-c7zbb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliea24cbb7629 [] [] }} ContainerID="3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" Namespace="calico-system" Pod="calico-kube-controllers-97c55758-c7zbb" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:24.670 [INFO][4753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" Namespace="calico-system" Pod="calico-kube-controllers-97c55758-c7zbb" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:24.733 [INFO][4787] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" HandleID="k8s-pod-network.3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" Workload="ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:24.733 [INFO][4787] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" HandleID="k8s-pod-network.3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" Workload="ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-8c113b52d8", "pod":"calico-kube-controllers-97c55758-c7zbb", "timestamp":"2025-09-04 00:06:24.733176953 +0000 UTC"}, Hostname:"ci-4372.1.0-n-8c113b52d8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:24.733 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:24.966 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:24.966 [INFO][4787] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-8c113b52d8' Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.011 [INFO][4787] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.028 [INFO][4787] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.035 [INFO][4787] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.040 [INFO][4787] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.045 [INFO][4787] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.045 [INFO][4787] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.048 [INFO][4787] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.054 [INFO][4787] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.066 [INFO][4787] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.31.70/26] block=192.168.31.64/26 handle="k8s-pod-network.3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.066 [INFO][4787] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.70/26] handle="k8s-pod-network.3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.066 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:25.119247 containerd[1726]: 2025-09-04 00:06:25.066 [INFO][4787] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.70/26] IPv6=[] ContainerID="3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" HandleID="k8s-pod-network.3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" Workload="ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0" Sep 4 00:06:25.120754 containerd[1726]: 2025-09-04 00:06:25.076 [INFO][4753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" Namespace="calico-system" Pod="calico-kube-controllers-97c55758-c7zbb" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0", GenerateName:"calico-kube-controllers-97c55758-", Namespace:"calico-system", SelfLink:"", UID:"67931029-203e-4477-b4c8-3cc4250196b9", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"97c55758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"", Pod:"calico-kube-controllers-97c55758-c7zbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea24cbb7629", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:25.120754 containerd[1726]: 2025-09-04 00:06:25.077 [INFO][4753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.70/32] ContainerID="3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" Namespace="calico-system" Pod="calico-kube-controllers-97c55758-c7zbb" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0" Sep 4 00:06:25.120754 containerd[1726]: 2025-09-04 00:06:25.077 [INFO][4753] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea24cbb7629 ContainerID="3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" Namespace="calico-system" Pod="calico-kube-controllers-97c55758-c7zbb" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0" Sep 4 00:06:25.120754 containerd[1726]: 2025-09-04 00:06:25.089 [INFO][4753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" Namespace="calico-system" 
Pod="calico-kube-controllers-97c55758-c7zbb" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0" Sep 4 00:06:25.120754 containerd[1726]: 2025-09-04 00:06:25.090 [INFO][4753] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" Namespace="calico-system" Pod="calico-kube-controllers-97c55758-c7zbb" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0", GenerateName:"calico-kube-controllers-97c55758-", Namespace:"calico-system", SelfLink:"", UID:"67931029-203e-4477-b4c8-3cc4250196b9", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"97c55758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf", Pod:"calico-kube-controllers-97c55758-c7zbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea24cbb7629", MAC:"7a:a5:2b:42:78:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:25.120754 containerd[1726]: 2025-09-04 00:06:25.116 [INFO][4753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" Namespace="calico-system" Pod="calico-kube-controllers-97c55758-c7zbb" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--kube--controllers--97c55758--c7zbb-eth0" Sep 4 00:06:25.160643 containerd[1726]: time="2025-09-04T00:06:25.160606020Z" level=info msg="connecting to shim 3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf" address="unix:///run/containerd/s/a3e77448d0b1815a30ae1cff33c09678c64d7f5a2b05fa42a1f1dc0c4c1e97d4" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:25.170278 containerd[1726]: time="2025-09-04T00:06:25.170246244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2ljjn,Uid:4c718a0b-ae25-4026-ba0f-44a0152bb84f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9\"" Sep 4 00:06:25.175112 containerd[1726]: time="2025-09-04T00:06:25.175081707Z" level=info msg="CreateContainer within sandbox \"dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:06:25.185454 systemd[1]: Started cri-containerd-3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf.scope - libcontainer container 3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf. 
Sep 4 00:06:25.192290 containerd[1726]: time="2025-09-04T00:06:25.192259474Z" level=info msg="Container 4f3e0d404d9dfdf769ec4793e55937484daaaf2157253b6b8781d4bd232d5d8f: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:25.209641 containerd[1726]: time="2025-09-04T00:06:25.209614133Z" level=info msg="CreateContainer within sandbox \"dd723f0477de409e7ef72842b8781542ca7aa165bd9e576953a3ff47748e92d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f3e0d404d9dfdf769ec4793e55937484daaaf2157253b6b8781d4bd232d5d8f\"" Sep 4 00:06:25.210298 containerd[1726]: time="2025-09-04T00:06:25.210234393Z" level=info msg="StartContainer for \"4f3e0d404d9dfdf769ec4793e55937484daaaf2157253b6b8781d4bd232d5d8f\"" Sep 4 00:06:25.212239 containerd[1726]: time="2025-09-04T00:06:25.212214590Z" level=info msg="connecting to shim 4f3e0d404d9dfdf769ec4793e55937484daaaf2157253b6b8781d4bd232d5d8f" address="unix:///run/containerd/s/12afe115a61a7360afa040864275e18fb4bcb57440b80930ed6695059bff79dc" protocol=ttrpc version=3 Sep 4 00:06:25.236780 systemd[1]: Started cri-containerd-4f3e0d404d9dfdf769ec4793e55937484daaaf2157253b6b8781d4bd232d5d8f.scope - libcontainer container 4f3e0d404d9dfdf769ec4793e55937484daaaf2157253b6b8781d4bd232d5d8f. 
Sep 4 00:06:25.242847 containerd[1726]: time="2025-09-04T00:06:25.242456784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-97c55758-c7zbb,Uid:67931029-203e-4477-b4c8-3cc4250196b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf\"" Sep 4 00:06:25.272838 containerd[1726]: time="2025-09-04T00:06:25.272803200Z" level=info msg="StartContainer for \"4f3e0d404d9dfdf769ec4793e55937484daaaf2157253b6b8781d4bd232d5d8f\" returns successfully" Sep 4 00:06:25.720187 kubelet[3138]: I0904 00:06:25.719238 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2ljjn" podStartSLOduration=51.719221651 podStartE2EDuration="51.719221651s" podCreationTimestamp="2025-09-04 00:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:25.707747841 +0000 UTC m=+57.285497508" watchObservedRunningTime="2025-09-04 00:06:25.719221651 +0000 UTC m=+57.296971318" Sep 4 00:06:26.320749 systemd-networkd[1356]: cali644b417cefa: Gained IPv6LL Sep 4 00:06:26.513046 systemd-networkd[1356]: cali0621f7cdb68: Gained IPv6LL Sep 4 00:06:26.550907 containerd[1726]: time="2025-09-04T00:06:26.550854043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf5df56b4-n5995,Uid:36198baa-4723-4c62-b490-51e9f3c3f348,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:06:26.551865 containerd[1726]: time="2025-09-04T00:06:26.551829535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jftxm,Uid:03860e7a-926d-4187-ba43-ef35f1cd2768,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:26.578211 systemd-networkd[1356]: caliea24cbb7629: Gained IPv6LL Sep 4 00:06:26.714658 systemd-networkd[1356]: cali9a03141f157: Link UP Sep 4 00:06:26.716240 systemd-networkd[1356]: cali9a03141f157: Gained carrier Sep 4 00:06:26.736602 
containerd[1726]: 2025-09-04 00:06:26.611 [INFO][5077] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0 csi-node-driver- calico-system 03860e7a-926d-4187-ba43-ef35f1cd2768 717 0 2025-09-04 00:05:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.1.0-n-8c113b52d8 csi-node-driver-jftxm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9a03141f157 [] [] }} ContainerID="990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" Namespace="calico-system" Pod="csi-node-driver-jftxm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.611 [INFO][5077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" Namespace="calico-system" Pod="csi-node-driver-jftxm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.661 [INFO][5092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" HandleID="k8s-pod-network.990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" Workload="ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.661 [INFO][5092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" 
HandleID="k8s-pod-network.990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" Workload="ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-8c113b52d8", "pod":"csi-node-driver-jftxm", "timestamp":"2025-09-04 00:06:26.661549519 +0000 UTC"}, Hostname:"ci-4372.1.0-n-8c113b52d8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.661 [INFO][5092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.661 [INFO][5092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.661 [INFO][5092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-8c113b52d8' Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.668 [INFO][5092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.673 [INFO][5092] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.677 [INFO][5092] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.682 [INFO][5092] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.683 [INFO][5092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.683 [INFO][5092] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.685 [INFO][5092] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.692 [INFO][5092] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.704 [INFO][5092] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.71/26] block=192.168.31.64/26 handle="k8s-pod-network.990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.704 [INFO][5092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.71/26] handle="k8s-pod-network.990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.704 [INFO][5092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:06:26.736602 containerd[1726]: 2025-09-04 00:06:26.704 [INFO][5092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.71/26] IPv6=[] ContainerID="990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" HandleID="k8s-pod-network.990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" Workload="ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0" Sep 4 00:06:26.737236 containerd[1726]: 2025-09-04 00:06:26.709 [INFO][5077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" Namespace="calico-system" Pod="csi-node-driver-jftxm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"03860e7a-926d-4187-ba43-ef35f1cd2768", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"", Pod:"csi-node-driver-jftxm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.71/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a03141f157", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:26.737236 containerd[1726]: 2025-09-04 00:06:26.710 [INFO][5077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.71/32] ContainerID="990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" Namespace="calico-system" Pod="csi-node-driver-jftxm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0" Sep 4 00:06:26.737236 containerd[1726]: 2025-09-04 00:06:26.710 [INFO][5077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a03141f157 ContainerID="990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" Namespace="calico-system" Pod="csi-node-driver-jftxm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0" Sep 4 00:06:26.737236 containerd[1726]: 2025-09-04 00:06:26.717 [INFO][5077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" Namespace="calico-system" Pod="csi-node-driver-jftxm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0" Sep 4 00:06:26.737236 containerd[1726]: 2025-09-04 00:06:26.717 [INFO][5077] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" Namespace="calico-system" Pod="csi-node-driver-jftxm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"03860e7a-926d-4187-ba43-ef35f1cd2768", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd", Pod:"csi-node-driver-jftxm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9a03141f157", MAC:"92:4b:c6:91:e9:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:26.737236 containerd[1726]: 2025-09-04 00:06:26.734 [INFO][5077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" Namespace="calico-system" Pod="csi-node-driver-jftxm" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-csi--node--driver--jftxm-eth0" Sep 4 00:06:26.768641 systemd-networkd[1356]: cali1930b382aa0: Gained IPv6LL Sep 4 00:06:26.802564 containerd[1726]: time="2025-09-04T00:06:26.802496076Z" level=info msg="connecting to shim 990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd" 
address="unix:///run/containerd/s/2cee674bafb2be59b9748f60e42d0a4fde9df6a7943baa31bd5066ea143dba23" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:26.838819 systemd[1]: Started cri-containerd-990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd.scope - libcontainer container 990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd. Sep 4 00:06:26.842416 systemd-networkd[1356]: cali5ae00a30590: Link UP Sep 4 00:06:26.851900 systemd-networkd[1356]: cali5ae00a30590: Gained carrier Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.623 [INFO][5066] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0 calico-apiserver-6bf5df56b4- calico-apiserver 36198baa-4723-4c62-b490-51e9f3c3f348 849 0 2025-09-04 00:05:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bf5df56b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-n-8c113b52d8 calico-apiserver-6bf5df56b4-n5995 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ae00a30590 [] [] }} ContainerID="c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-n5995" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.623 [INFO][5066] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-n5995" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.666 
[INFO][5098] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" HandleID="k8s-pod-network.c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" Workload="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.667 [INFO][5098] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" HandleID="k8s-pod-network.c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" Workload="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-n-8c113b52d8", "pod":"calico-apiserver-6bf5df56b4-n5995", "timestamp":"2025-09-04 00:06:26.666155327 +0000 UTC"}, Hostname:"ci-4372.1.0-n-8c113b52d8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.668 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.704 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.704 [INFO][5098] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-8c113b52d8' Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.772 [INFO][5098] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.788 [INFO][5098] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.793 [INFO][5098] ipam/ipam.go 511: Trying affinity for 192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.795 [INFO][5098] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.798 [INFO][5098] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.64/26 host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.798 [INFO][5098] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.64/26 handle="k8s-pod-network.c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.800 [INFO][5098] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.810 [INFO][5098] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.64/26 handle="k8s-pod-network.c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.831 [INFO][5098] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.31.72/26] block=192.168.31.64/26 handle="k8s-pod-network.c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.831 [INFO][5098] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.72/26] handle="k8s-pod-network.c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" host="ci-4372.1.0-n-8c113b52d8" Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.831 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:26.882746 containerd[1726]: 2025-09-04 00:06:26.831 [INFO][5098] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.72/26] IPv6=[] ContainerID="c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" HandleID="k8s-pod-network.c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" Workload="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0" Sep 4 00:06:26.883568 containerd[1726]: 2025-09-04 00:06:26.837 [INFO][5066] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-n5995" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0", GenerateName:"calico-apiserver-6bf5df56b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"36198baa-4723-4c62-b490-51e9f3c3f348", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6bf5df56b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"", Pod:"calico-apiserver-6bf5df56b4-n5995", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ae00a30590", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:26.883568 containerd[1726]: 2025-09-04 00:06:26.837 [INFO][5066] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.72/32] ContainerID="c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-n5995" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0" Sep 4 00:06:26.883568 containerd[1726]: 2025-09-04 00:06:26.837 [INFO][5066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ae00a30590 ContainerID="c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-n5995" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0" Sep 4 00:06:26.883568 containerd[1726]: 2025-09-04 00:06:26.855 [INFO][5066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-n5995" 
WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0" Sep 4 00:06:26.883568 containerd[1726]: 2025-09-04 00:06:26.856 [INFO][5066] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-n5995" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0", GenerateName:"calico-apiserver-6bf5df56b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"36198baa-4723-4c62-b490-51e9f3c3f348", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf5df56b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-8c113b52d8", ContainerID:"c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc", Pod:"calico-apiserver-6bf5df56b4-n5995", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ae00a30590", MAC:"5a:30:91:17:61:3f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:26.883568 containerd[1726]: 2025-09-04 00:06:26.873 [INFO][5066] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" Namespace="calico-apiserver" Pod="calico-apiserver-6bf5df56b4-n5995" WorkloadEndpoint="ci--4372.1.0--n--8c113b52d8-k8s-calico--apiserver--6bf5df56b4--n5995-eth0" Sep 4 00:06:26.897967 containerd[1726]: time="2025-09-04T00:06:26.897934982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jftxm,Uid:03860e7a-926d-4187-ba43-ef35f1cd2768,Namespace:calico-system,Attempt:0,} returns sandbox id \"990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd\"" Sep 4 00:06:26.934937 containerd[1726]: time="2025-09-04T00:06:26.934902793Z" level=info msg="connecting to shim c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc" address="unix:///run/containerd/s/3ed527430ff5b91d181d888214597f01d4af20ca680ecd002a0d89e88011d497" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:26.974684 systemd[1]: Started cri-containerd-c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc.scope - libcontainer container c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc. Sep 4 00:06:27.045234 containerd[1726]: time="2025-09-04T00:06:27.045125530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf5df56b4-n5995,Uid:36198baa-4723-4c62-b490-51e9f3c3f348,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc\"" Sep 4 00:06:27.562588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3125445110.mount: Deactivated successfully. 
Sep 4 00:06:27.600777 containerd[1726]: time="2025-09-04T00:06:27.600730458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:27.604716 containerd[1726]: time="2025-09-04T00:06:27.604559188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 4 00:06:27.607130 containerd[1726]: time="2025-09-04T00:06:27.607103680Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:27.610864 containerd[1726]: time="2025-09-04T00:06:27.610730644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:27.611218 containerd[1726]: time="2025-09-04T00:06:27.611195778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 2.657499883s" Sep 4 00:06:27.611254 containerd[1726]: time="2025-09-04T00:06:27.611228373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 4 00:06:27.612495 containerd[1726]: time="2025-09-04T00:06:27.612427411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 00:06:27.613987 containerd[1726]: time="2025-09-04T00:06:27.613530815Z" level=info msg="CreateContainer within sandbox \"353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 4 00:06:27.631733 containerd[1726]: time="2025-09-04T00:06:27.631697045Z" level=info msg="Container b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:27.647191 containerd[1726]: time="2025-09-04T00:06:27.647162994Z" level=info msg="CreateContainer within sandbox \"353088510511b0570101597ab3a6f75fe75dff650b06880a872b3fded5b43a61\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340\"" Sep 4 00:06:27.647831 containerd[1726]: time="2025-09-04T00:06:27.647644167Z" level=info msg="StartContainer for \"b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340\"" Sep 4 00:06:27.648990 containerd[1726]: time="2025-09-04T00:06:27.648966244Z" level=info msg="connecting to shim b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340" address="unix:///run/containerd/s/f35017d60105806965e1e4a2d397bd7d6182eca15f4a8c104f9ba5582b3831f4" protocol=ttrpc version=3 Sep 4 00:06:27.669663 systemd[1]: Started cri-containerd-b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340.scope - libcontainer container b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340. 
Sep 4 00:06:27.720065 containerd[1726]: time="2025-09-04T00:06:27.719974044Z" level=info msg="StartContainer for \"b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340\" returns successfully" Sep 4 00:06:27.984644 systemd-networkd[1356]: cali5ae00a30590: Gained IPv6LL Sep 4 00:06:28.305087 systemd-networkd[1356]: cali9a03141f157: Gained IPv6LL Sep 4 00:06:28.741202 kubelet[3138]: I0904 00:06:28.740861 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-lbjpm" podStartSLOduration=40.082174271 podStartE2EDuration="42.74084305s" podCreationTimestamp="2025-09-04 00:05:46 +0000 UTC" firstStartedPulling="2025-09-04 00:06:24.953269795 +0000 UTC m=+56.531019463" lastFinishedPulling="2025-09-04 00:06:27.61193858 +0000 UTC m=+59.189688242" observedRunningTime="2025-09-04 00:06:28.740188103 +0000 UTC m=+60.317937796" watchObservedRunningTime="2025-09-04 00:06:28.74084305 +0000 UTC m=+60.318592723" Sep 4 00:06:28.840646 containerd[1726]: time="2025-09-04T00:06:28.840605825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340\" id:\"af8488b4ae7d9f0f2c62ce31c16e2ad7100294854b58b30ac1ada55fc263ddd7\" pid:5277 exit_status:1 exited_at:{seconds:1756944388 nanos:839563992}" Sep 4 00:06:29.782983 containerd[1726]: time="2025-09-04T00:06:29.782879716Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340\" id:\"2eb21b4980dcacc759b21107da01fe957ede6a2a7ba00f5a010f8849658bbcbc\" pid:5312 exit_status:1 exited_at:{seconds:1756944389 nanos:782610197}" Sep 4 00:06:29.814536 containerd[1726]: time="2025-09-04T00:06:29.814424708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:29.816606 containerd[1726]: time="2025-09-04T00:06:29.816469229Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 4 00:06:29.819112 containerd[1726]: time="2025-09-04T00:06:29.819086477Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:29.826520 containerd[1726]: time="2025-09-04T00:06:29.826312182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:29.827406 containerd[1726]: time="2025-09-04T00:06:29.827376175Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.214919317s" Sep 4 00:06:29.827466 containerd[1726]: time="2025-09-04T00:06:29.827410346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 00:06:29.829544 containerd[1726]: time="2025-09-04T00:06:29.829030616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 4 00:06:29.830674 containerd[1726]: time="2025-09-04T00:06:29.830636879Z" level=info msg="CreateContainer within sandbox \"3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 00:06:29.845423 containerd[1726]: time="2025-09-04T00:06:29.845401384Z" level=info msg="Container 079282d7f614b02bde1f7ca5bb0802d06c6aae3b53d15f367886468b6a660b36: CDI devices from CRI Config.CDIDevices: []" Sep 4 
00:06:29.868884 containerd[1726]: time="2025-09-04T00:06:29.868854514Z" level=info msg="CreateContainer within sandbox \"3a707bba47764c5a1daef071ee6775d3d65181ad6e3a17a49e84d7cc24b68285\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"079282d7f614b02bde1f7ca5bb0802d06c6aae3b53d15f367886468b6a660b36\"" Sep 4 00:06:29.869437 containerd[1726]: time="2025-09-04T00:06:29.869415433Z" level=info msg="StartContainer for \"079282d7f614b02bde1f7ca5bb0802d06c6aae3b53d15f367886468b6a660b36\"" Sep 4 00:06:29.870526 containerd[1726]: time="2025-09-04T00:06:29.870477822Z" level=info msg="connecting to shim 079282d7f614b02bde1f7ca5bb0802d06c6aae3b53d15f367886468b6a660b36" address="unix:///run/containerd/s/afbb5d37ba76ced8820bacdc5d461b92984a56ff495dcc8417c6b38e1d2b3c97" protocol=ttrpc version=3 Sep 4 00:06:29.894656 systemd[1]: Started cri-containerd-079282d7f614b02bde1f7ca5bb0802d06c6aae3b53d15f367886468b6a660b36.scope - libcontainer container 079282d7f614b02bde1f7ca5bb0802d06c6aae3b53d15f367886468b6a660b36. 
Sep 4 00:06:29.937675 containerd[1726]: time="2025-09-04T00:06:29.937640008Z" level=info msg="StartContainer for \"079282d7f614b02bde1f7ca5bb0802d06c6aae3b53d15f367886468b6a660b36\" returns successfully" Sep 4 00:06:30.738523 kubelet[3138]: I0904 00:06:30.738446 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bf5df56b4-j6fg6" podStartSLOduration=43.014536868 podStartE2EDuration="47.738428208s" podCreationTimestamp="2025-09-04 00:05:43 +0000 UTC" firstStartedPulling="2025-09-04 00:06:25.105027775 +0000 UTC m=+56.682777432" lastFinishedPulling="2025-09-04 00:06:29.828919109 +0000 UTC m=+61.406668772" observedRunningTime="2025-09-04 00:06:30.735897426 +0000 UTC m=+62.313647094" watchObservedRunningTime="2025-09-04 00:06:30.738428208 +0000 UTC m=+62.316177872" Sep 4 00:06:30.798721 containerd[1726]: time="2025-09-04T00:06:30.798642363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340\" id:\"dab28e2eca9546bdac473591443e275c65fb9135bff5e22c739add2cd7fc4e66\" pid:5374 exited_at:{seconds:1756944390 nanos:798341508}" Sep 4 00:06:31.726944 kubelet[3138]: I0904 00:06:31.726912 3138 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:06:32.775761 containerd[1726]: time="2025-09-04T00:06:32.775718989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:32.780961 containerd[1726]: time="2025-09-04T00:06:32.780835904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 4 00:06:32.783469 containerd[1726]: time="2025-09-04T00:06:32.783445078Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 
4 00:06:32.789858 containerd[1726]: time="2025-09-04T00:06:32.789807524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:32.790360 containerd[1726]: time="2025-09-04T00:06:32.790333926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.961276854s" Sep 4 00:06:32.790409 containerd[1726]: time="2025-09-04T00:06:32.790362541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 4 00:06:32.796527 containerd[1726]: time="2025-09-04T00:06:32.795764775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 4 00:06:32.806169 containerd[1726]: time="2025-09-04T00:06:32.806141714Z" level=info msg="CreateContainer within sandbox \"3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 00:06:32.830491 containerd[1726]: time="2025-09-04T00:06:32.829623844Z" level=info msg="Container abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:32.845055 containerd[1726]: time="2025-09-04T00:06:32.845020863Z" level=info msg="CreateContainer within sandbox \"3f5caa00da6bc53001d3a402fab4667d7d4fe2325db862029a3a897111718bdf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746\"" 
Sep 4 00:06:32.845742 containerd[1726]: time="2025-09-04T00:06:32.845691993Z" level=info msg="StartContainer for \"abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746\"" Sep 4 00:06:32.846718 containerd[1726]: time="2025-09-04T00:06:32.846672726Z" level=info msg="connecting to shim abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746" address="unix:///run/containerd/s/a3e77448d0b1815a30ae1cff33c09678c64d7f5a2b05fa42a1f1dc0c4c1e97d4" protocol=ttrpc version=3 Sep 4 00:06:32.868661 systemd[1]: Started cri-containerd-abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746.scope - libcontainer container abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746. Sep 4 00:06:32.921739 containerd[1726]: time="2025-09-04T00:06:32.921709142Z" level=info msg="StartContainer for \"abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746\" returns successfully" Sep 4 00:06:33.775234 containerd[1726]: time="2025-09-04T00:06:33.775193490Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746\" id:\"228d3f98d5bf2b005dfb0df5a0cd3f0faa8f0d603f8bef755ea2b3eca388fcb2\" pid:5449 exited_at:{seconds:1756944393 nanos:774855755}" Sep 4 00:06:33.787486 kubelet[3138]: I0904 00:06:33.787346 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-97c55758-c7zbb" podStartSLOduration=39.241180828 podStartE2EDuration="46.787330074s" podCreationTimestamp="2025-09-04 00:05:47 +0000 UTC" firstStartedPulling="2025-09-04 00:06:25.245015296 +0000 UTC m=+56.822764965" lastFinishedPulling="2025-09-04 00:06:32.791164558 +0000 UTC m=+64.368914211" observedRunningTime="2025-09-04 00:06:33.74700691 +0000 UTC m=+65.324756576" watchObservedRunningTime="2025-09-04 00:06:33.787330074 +0000 UTC m=+65.365079753" Sep 4 00:06:34.078954 containerd[1726]: time="2025-09-04T00:06:34.078837892Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:34.081407 containerd[1726]: time="2025-09-04T00:06:34.081371389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 4 00:06:34.084767 containerd[1726]: time="2025-09-04T00:06:34.084192054Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:34.087564 containerd[1726]: time="2025-09-04T00:06:34.087538118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:34.087992 containerd[1726]: time="2025-09-04T00:06:34.087970923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.292172529s" Sep 4 00:06:34.088070 containerd[1726]: time="2025-09-04T00:06:34.088058034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 4 00:06:34.089377 containerd[1726]: time="2025-09-04T00:06:34.089351114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 00:06:34.090171 containerd[1726]: time="2025-09-04T00:06:34.090149053Z" level=info msg="CreateContainer within sandbox \"990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 00:06:34.110538 containerd[1726]: time="2025-09-04T00:06:34.106286235Z" 
level=info msg="Container 28c591d5d17de6aea1426b17944946c9feb1cf62bbda52c50f3514dd186203a6: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:34.121775 containerd[1726]: time="2025-09-04T00:06:34.121580602Z" level=info msg="CreateContainer within sandbox \"990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"28c591d5d17de6aea1426b17944946c9feb1cf62bbda52c50f3514dd186203a6\"" Sep 4 00:06:34.122350 containerd[1726]: time="2025-09-04T00:06:34.122303176Z" level=info msg="StartContainer for \"28c591d5d17de6aea1426b17944946c9feb1cf62bbda52c50f3514dd186203a6\"" Sep 4 00:06:34.126822 containerd[1726]: time="2025-09-04T00:06:34.126783949Z" level=info msg="connecting to shim 28c591d5d17de6aea1426b17944946c9feb1cf62bbda52c50f3514dd186203a6" address="unix:///run/containerd/s/2cee674bafb2be59b9748f60e42d0a4fde9df6a7943baa31bd5066ea143dba23" protocol=ttrpc version=3 Sep 4 00:06:34.153663 systemd[1]: Started cri-containerd-28c591d5d17de6aea1426b17944946c9feb1cf62bbda52c50f3514dd186203a6.scope - libcontainer container 28c591d5d17de6aea1426b17944946c9feb1cf62bbda52c50f3514dd186203a6. 
Sep 4 00:06:34.183213 containerd[1726]: time="2025-09-04T00:06:34.183189525Z" level=info msg="StartContainer for \"28c591d5d17de6aea1426b17944946c9feb1cf62bbda52c50f3514dd186203a6\" returns successfully" Sep 4 00:06:34.394486 containerd[1726]: time="2025-09-04T00:06:34.394260793Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:34.396292 containerd[1726]: time="2025-09-04T00:06:34.396261987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 4 00:06:34.397821 containerd[1726]: time="2025-09-04T00:06:34.397790379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 308.403008ms" Sep 4 00:06:34.397928 containerd[1726]: time="2025-09-04T00:06:34.397827831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 00:06:34.399213 containerd[1726]: time="2025-09-04T00:06:34.398833244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 4 00:06:34.401050 containerd[1726]: time="2025-09-04T00:06:34.401019138Z" level=info msg="CreateContainer within sandbox \"c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 00:06:34.418923 containerd[1726]: time="2025-09-04T00:06:34.418621235Z" level=info msg="Container 6d757d8a3cfe2f26b823ee5562aa3b98f8ef1844fbaa8db1ec528d0be153ed85: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:34.431615 containerd[1726]: 
time="2025-09-04T00:06:34.431588681Z" level=info msg="CreateContainer within sandbox \"c5b29410c877767678239b4e2f7397b41ec915f842eda07c60be8ef376351dfc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d757d8a3cfe2f26b823ee5562aa3b98f8ef1844fbaa8db1ec528d0be153ed85\"" Sep 4 00:06:34.431978 containerd[1726]: time="2025-09-04T00:06:34.431948230Z" level=info msg="StartContainer for \"6d757d8a3cfe2f26b823ee5562aa3b98f8ef1844fbaa8db1ec528d0be153ed85\"" Sep 4 00:06:34.433191 containerd[1726]: time="2025-09-04T00:06:34.433114270Z" level=info msg="connecting to shim 6d757d8a3cfe2f26b823ee5562aa3b98f8ef1844fbaa8db1ec528d0be153ed85" address="unix:///run/containerd/s/3ed527430ff5b91d181d888214597f01d4af20ca680ecd002a0d89e88011d497" protocol=ttrpc version=3 Sep 4 00:06:34.453687 systemd[1]: Started cri-containerd-6d757d8a3cfe2f26b823ee5562aa3b98f8ef1844fbaa8db1ec528d0be153ed85.scope - libcontainer container 6d757d8a3cfe2f26b823ee5562aa3b98f8ef1844fbaa8db1ec528d0be153ed85. 
Sep 4 00:06:34.500213 containerd[1726]: time="2025-09-04T00:06:34.500187820Z" level=info msg="StartContainer for \"6d757d8a3cfe2f26b823ee5562aa3b98f8ef1844fbaa8db1ec528d0be153ed85\" returns successfully" Sep 4 00:06:34.750028 kubelet[3138]: I0904 00:06:34.749903 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bf5df56b4-n5995" podStartSLOduration=44.398149974 podStartE2EDuration="51.749887975s" podCreationTimestamp="2025-09-04 00:05:43 +0000 UTC" firstStartedPulling="2025-09-04 00:06:27.046937095 +0000 UTC m=+58.624686762" lastFinishedPulling="2025-09-04 00:06:34.398675101 +0000 UTC m=+65.976424763" observedRunningTime="2025-09-04 00:06:34.749697742 +0000 UTC m=+66.327447412" watchObservedRunningTime="2025-09-04 00:06:34.749887975 +0000 UTC m=+66.327637681" Sep 4 00:06:35.997035 containerd[1726]: time="2025-09-04T00:06:35.996995737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:35.999782 containerd[1726]: time="2025-09-04T00:06:35.999751515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 4 00:06:36.003397 containerd[1726]: time="2025-09-04T00:06:36.003348634Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:36.006685 containerd[1726]: time="2025-09-04T00:06:36.006634574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:36.007002 containerd[1726]: time="2025-09-04T00:06:36.006978186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image 
id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.608117036s" Sep 4 00:06:36.007043 containerd[1726]: time="2025-09-04T00:06:36.007010925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 4 00:06:36.010582 containerd[1726]: time="2025-09-04T00:06:36.009900579Z" level=info msg="CreateContainer within sandbox \"990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 00:06:36.041041 containerd[1726]: time="2025-09-04T00:06:36.041007656Z" level=info msg="Container 5dd7d6621dcdb8a2972febad0bf58d72fcb8280f6e1944fa5156ac127990edc9: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:36.047598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279487122.mount: Deactivated successfully. 
Sep 4 00:06:36.060434 containerd[1726]: time="2025-09-04T00:06:36.060402906Z" level=info msg="CreateContainer within sandbox \"990b3352dc8e3ecf8bd9baff34e3107399a44bd69c06fb14ab1a04b2992397bd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5dd7d6621dcdb8a2972febad0bf58d72fcb8280f6e1944fa5156ac127990edc9\"" Sep 4 00:06:36.061207 containerd[1726]: time="2025-09-04T00:06:36.061117216Z" level=info msg="StartContainer for \"5dd7d6621dcdb8a2972febad0bf58d72fcb8280f6e1944fa5156ac127990edc9\"" Sep 4 00:06:36.062729 containerd[1726]: time="2025-09-04T00:06:36.062702358Z" level=info msg="connecting to shim 5dd7d6621dcdb8a2972febad0bf58d72fcb8280f6e1944fa5156ac127990edc9" address="unix:///run/containerd/s/2cee674bafb2be59b9748f60e42d0a4fde9df6a7943baa31bd5066ea143dba23" protocol=ttrpc version=3 Sep 4 00:06:36.087634 systemd[1]: Started cri-containerd-5dd7d6621dcdb8a2972febad0bf58d72fcb8280f6e1944fa5156ac127990edc9.scope - libcontainer container 5dd7d6621dcdb8a2972febad0bf58d72fcb8280f6e1944fa5156ac127990edc9. 
Sep 4 00:06:36.117530 containerd[1726]: time="2025-09-04T00:06:36.117458730Z" level=info msg="StartContainer for \"5dd7d6621dcdb8a2972febad0bf58d72fcb8280f6e1944fa5156ac127990edc9\" returns successfully"
Sep 4 00:06:36.615576 kubelet[3138]: I0904 00:06:36.615452 3138 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 4 00:06:36.615576 kubelet[3138]: I0904 00:06:36.615483 3138 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 4 00:06:47.891354 containerd[1726]: time="2025-09-04T00:06:47.891309364Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c\" id:\"36a2aa18e7cb8c55a3c8e72fac6a81b0e1be5241d474de05e8e8c7e0cd18cb9f\" pid:5595 exited_at:{seconds:1756944407 nanos:890547593}"
Sep 4 00:06:47.912214 kubelet[3138]: I0904 00:06:47.912135 3138 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jftxm" podStartSLOduration=51.804187539 podStartE2EDuration="1m0.912108459s" podCreationTimestamp="2025-09-04 00:05:47 +0000 UTC" firstStartedPulling="2025-09-04 00:06:26.900177666 +0000 UTC m=+58.477927334" lastFinishedPulling="2025-09-04 00:06:36.008098601 +0000 UTC m=+67.585848254" observedRunningTime="2025-09-04 00:06:36.752959012 +0000 UTC m=+68.330708680" watchObservedRunningTime="2025-09-04 00:06:47.912108459 +0000 UTC m=+79.489858128"
Sep 4 00:06:49.150769 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:52850.service - OpenSSH per-connection server daemon (10.200.16.10:52850).
Sep 4 00:06:49.798523 sshd[5612]: Accepted publickey for core from 10.200.16.10 port 52850 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:06:49.799427 sshd-session[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:06:49.806568 systemd-logind[1706]: New session 10 of user core.
Sep 4 00:06:49.813704 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 00:06:50.324202 sshd[5614]: Connection closed by 10.200.16.10 port 52850
Sep 4 00:06:50.324955 sshd-session[5612]: pam_unix(sshd:session): session closed for user core
Sep 4 00:06:50.328838 systemd-logind[1706]: Session 10 logged out. Waiting for processes to exit.
Sep 4 00:06:50.330922 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:52850.service: Deactivated successfully.
Sep 4 00:06:50.333154 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 00:06:50.337557 systemd-logind[1706]: Removed session 10.
Sep 4 00:06:50.813106 kubelet[3138]: I0904 00:06:50.813071 3138 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 00:06:55.442755 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:43834.service - OpenSSH per-connection server daemon (10.200.16.10:43834).
Sep 4 00:06:56.100534 sshd[5630]: Accepted publickey for core from 10.200.16.10 port 43834 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:06:56.102004 sshd-session[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:06:56.106566 systemd-logind[1706]: New session 11 of user core.
Sep 4 00:06:56.112655 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 00:06:56.613055 sshd[5632]: Connection closed by 10.200.16.10 port 43834
Sep 4 00:06:56.614682 sshd-session[5630]: pam_unix(sshd:session): session closed for user core
Sep 4 00:06:56.618728 systemd-logind[1706]: Session 11 logged out. Waiting for processes to exit.
Sep 4 00:06:56.620889 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:43834.service: Deactivated successfully.
Sep 4 00:06:56.624790 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 00:06:56.630463 systemd-logind[1706]: Removed session 11.
Sep 4 00:07:00.801923 containerd[1726]: time="2025-09-04T00:07:00.801879195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340\" id:\"3d2d09176f7d4cc948a81e5c1d6626f3b92ed3cb48fefdfb57e77531b8ab6108\" pid:5663 exited_at:{seconds:1756944420 nanos:801390192}"
Sep 4 00:07:01.734208 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:59686.service - OpenSSH per-connection server daemon (10.200.16.10:59686).
Sep 4 00:07:02.389545 sshd[5674]: Accepted publickey for core from 10.200.16.10 port 59686 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:02.390659 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:02.395340 systemd-logind[1706]: New session 12 of user core.
Sep 4 00:07:02.401651 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 00:07:02.904529 sshd[5676]: Connection closed by 10.200.16.10 port 59686
Sep 4 00:07:02.905279 sshd-session[5674]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:02.909956 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:59686.service: Deactivated successfully.
Sep 4 00:07:02.912154 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 00:07:02.913396 systemd-logind[1706]: Session 12 logged out. Waiting for processes to exit.
Sep 4 00:07:02.914618 systemd-logind[1706]: Removed session 12.
Sep 4 00:07:03.808519 containerd[1726]: time="2025-09-04T00:07:03.808452724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746\" id:\"e9841be2df89270df14142ea48c92884d4c13999e5ce710ef27e6f756d2f66b6\" pid:5699 exited_at:{seconds:1756944423 nanos:807559683}"
Sep 4 00:07:08.023757 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:59688.service - OpenSSH per-connection server daemon (10.200.16.10:59688).
Sep 4 00:07:08.678251 sshd[5714]: Accepted publickey for core from 10.200.16.10 port 59688 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:08.679480 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:08.683886 systemd-logind[1706]: New session 13 of user core.
Sep 4 00:07:08.694636 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 00:07:09.220395 sshd[5716]: Connection closed by 10.200.16.10 port 59688
Sep 4 00:07:09.220988 sshd-session[5714]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:09.223780 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:59688.service: Deactivated successfully.
Sep 4 00:07:09.225450 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 00:07:09.226468 systemd-logind[1706]: Session 13 logged out. Waiting for processes to exit.
Sep 4 00:07:09.228810 systemd-logind[1706]: Removed session 13.
Sep 4 00:07:10.096088 containerd[1726]: time="2025-09-04T00:07:10.096043416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340\" id:\"f1d1b1d2269487ab3839ed4d900683828bd7c7f744ce525b5479de085f3065ec\" pid:5740 exited_at:{seconds:1756944430 nanos:95640435}"
Sep 4 00:07:14.334583 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:47674.service - OpenSSH per-connection server daemon (10.200.16.10:47674).
Sep 4 00:07:14.984075 sshd[5751]: Accepted publickey for core from 10.200.16.10 port 47674 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:14.985218 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:14.989603 systemd-logind[1706]: New session 14 of user core.
Sep 4 00:07:14.995659 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 00:07:15.495228 sshd[5753]: Connection closed by 10.200.16.10 port 47674
Sep 4 00:07:15.495824 sshd-session[5751]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:15.499478 systemd-logind[1706]: Session 14 logged out. Waiting for processes to exit.
Sep 4 00:07:15.500199 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:47674.service: Deactivated successfully.
Sep 4 00:07:15.502184 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 00:07:15.505125 systemd-logind[1706]: Removed session 14.
Sep 4 00:07:15.610736 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:47688.service - OpenSSH per-connection server daemon (10.200.16.10:47688).
Sep 4 00:07:16.258318 sshd[5766]: Accepted publickey for core from 10.200.16.10 port 47688 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:16.260028 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:16.264776 systemd-logind[1706]: New session 15 of user core.
Sep 4 00:07:16.274958 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 00:07:16.796690 sshd[5768]: Connection closed by 10.200.16.10 port 47688
Sep 4 00:07:16.797244 sshd-session[5766]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:16.800513 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:47688.service: Deactivated successfully.
Sep 4 00:07:16.802251 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 00:07:16.805844 systemd-logind[1706]: Session 15 logged out. Waiting for processes to exit.
Sep 4 00:07:16.807416 systemd-logind[1706]: Removed session 15.
Sep 4 00:07:16.918752 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:47700.service - OpenSSH per-connection server daemon (10.200.16.10:47700).
Sep 4 00:07:17.053720 containerd[1726]: time="2025-09-04T00:07:17.053598456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746\" id:\"96ca4dffde1c208aefd39286c701555cfa6e600f36869a25d0e2a6c176478c3b\" pid:5793 exited_at:{seconds:1756944437 nanos:53129055}"
Sep 4 00:07:17.564524 sshd[5778]: Accepted publickey for core from 10.200.16.10 port 47700 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:17.566074 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:17.575958 systemd-logind[1706]: New session 16 of user core.
Sep 4 00:07:17.581883 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 00:07:17.795706 containerd[1726]: time="2025-09-04T00:07:17.795661858Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c\" id:\"c0efb59f2fa072fdbc0df04f13aa671276b806a194365c2d60bd7f2ac931494d\" pid:5815 exited_at:{seconds:1756944437 nanos:795363457}"
Sep 4 00:07:18.090637 sshd[5802]: Connection closed by 10.200.16.10 port 47700
Sep 4 00:07:18.091287 sshd-session[5778]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:18.094996 systemd-logind[1706]: Session 16 logged out. Waiting for processes to exit.
Sep 4 00:07:18.096123 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:47700.service: Deactivated successfully.
Sep 4 00:07:18.100780 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 00:07:18.104611 systemd-logind[1706]: Removed session 16.
Sep 4 00:07:23.208770 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:53426.service - OpenSSH per-connection server daemon (10.200.16.10:53426).
Sep 4 00:07:23.854531 sshd[5842]: Accepted publickey for core from 10.200.16.10 port 53426 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:23.855337 sshd-session[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:23.861696 systemd-logind[1706]: New session 17 of user core.
Sep 4 00:07:23.868674 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 00:07:24.385492 sshd[5845]: Connection closed by 10.200.16.10 port 53426
Sep 4 00:07:24.387615 sshd-session[5842]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:24.391462 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:53426.service: Deactivated successfully.
Sep 4 00:07:24.396281 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 00:07:24.399656 systemd-logind[1706]: Session 17 logged out. Waiting for processes to exit.
Sep 4 00:07:24.401912 systemd-logind[1706]: Removed session 17.
Sep 4 00:07:29.508771 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:53436.service - OpenSSH per-connection server daemon (10.200.16.10:53436).
Sep 4 00:07:30.152236 sshd[5859]: Accepted publickey for core from 10.200.16.10 port 53436 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:30.154222 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:30.160458 systemd-logind[1706]: New session 18 of user core.
Sep 4 00:07:30.164846 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 00:07:30.696546 sshd[5861]: Connection closed by 10.200.16.10 port 53436
Sep 4 00:07:30.697262 sshd-session[5859]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:30.702745 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:53436.service: Deactivated successfully.
Sep 4 00:07:30.705687 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 00:07:30.706973 systemd-logind[1706]: Session 18 logged out. Waiting for processes to exit.
Sep 4 00:07:30.710073 systemd-logind[1706]: Removed session 18.
Sep 4 00:07:30.808536 containerd[1726]: time="2025-09-04T00:07:30.808479493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340\" id:\"be3925e97133c8bbdb9edacc227403f597e70cc61e52533cb6da7bf338d4bd68\" pid:5885 exited_at:{seconds:1756944450 nanos:808217432}"
Sep 4 00:07:33.778118 containerd[1726]: time="2025-09-04T00:07:33.778074565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746\" id:\"120862d5ead9205d7010b7c7d3fe290e5594702e8c706b44621be0232b34da59\" pid:5907 exited_at:{seconds:1756944453 nanos:777407900}"
Sep 4 00:07:35.811626 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:57118.service - OpenSSH per-connection server daemon (10.200.16.10:57118).
Sep 4 00:07:36.457165 sshd[5918]: Accepted publickey for core from 10.200.16.10 port 57118 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:36.458425 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:36.463055 systemd-logind[1706]: New session 19 of user core.
Sep 4 00:07:36.468655 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 00:07:37.044915 sshd[5920]: Connection closed by 10.200.16.10 port 57118
Sep 4 00:07:37.045747 sshd-session[5918]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:37.049944 systemd-logind[1706]: Session 19 logged out. Waiting for processes to exit.
Sep 4 00:07:37.051933 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:57118.service: Deactivated successfully.
Sep 4 00:07:37.055047 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 00:07:37.058102 systemd-logind[1706]: Removed session 19.
Sep 4 00:07:42.166311 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:39094.service - OpenSSH per-connection server daemon (10.200.16.10:39094).
Sep 4 00:07:42.819249 sshd[5938]: Accepted publickey for core from 10.200.16.10 port 39094 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:42.820391 sshd-session[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:42.825083 systemd-logind[1706]: New session 20 of user core.
Sep 4 00:07:42.830642 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 00:07:43.384614 sshd[5940]: Connection closed by 10.200.16.10 port 39094
Sep 4 00:07:43.383204 sshd-session[5938]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:43.387585 systemd-logind[1706]: Session 20 logged out. Waiting for processes to exit.
Sep 4 00:07:43.389456 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:39094.service: Deactivated successfully.
Sep 4 00:07:43.392866 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 00:07:43.395828 systemd-logind[1706]: Removed session 20.
Sep 4 00:07:43.499960 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:39106.service - OpenSSH per-connection server daemon (10.200.16.10:39106).
Sep 4 00:07:44.144247 sshd[5951]: Accepted publickey for core from 10.200.16.10 port 39106 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:44.145424 sshd-session[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:44.150359 systemd-logind[1706]: New session 21 of user core.
Sep 4 00:07:44.154690 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 00:07:44.719825 sshd[5953]: Connection closed by 10.200.16.10 port 39106
Sep 4 00:07:44.720377 sshd-session[5951]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:44.723841 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:39106.service: Deactivated successfully.
Sep 4 00:07:44.727318 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 00:07:44.729394 systemd-logind[1706]: Session 21 logged out. Waiting for processes to exit.
Sep 4 00:07:44.732273 systemd-logind[1706]: Removed session 21.
Sep 4 00:07:44.835555 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:39108.service - OpenSSH per-connection server daemon (10.200.16.10:39108).
Sep 4 00:07:45.501754 sshd[5963]: Accepted publickey for core from 10.200.16.10 port 39108 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:45.503421 sshd-session[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:45.511794 systemd-logind[1706]: New session 22 of user core.
Sep 4 00:07:45.517306 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 00:07:46.698853 sshd[5965]: Connection closed by 10.200.16.10 port 39108
Sep 4 00:07:46.700869 sshd-session[5963]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:46.704392 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:39108.service: Deactivated successfully.
Sep 4 00:07:46.706985 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 00:07:46.709271 systemd-logind[1706]: Session 22 logged out. Waiting for processes to exit.
Sep 4 00:07:46.712620 systemd-logind[1706]: Removed session 22.
Sep 4 00:07:46.812767 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:39114.service - OpenSSH per-connection server daemon (10.200.16.10:39114).
Sep 4 00:07:47.461808 sshd[5982]: Accepted publickey for core from 10.200.16.10 port 39114 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:47.462982 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:47.467703 systemd-logind[1706]: New session 23 of user core.
Sep 4 00:07:47.471627 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 00:07:47.735610 containerd[1726]: time="2025-09-04T00:07:47.735361982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c\" id:\"c8229e6b48e270e961e58caa7cb875b64250abca8310b4389edfae379e0b1333\" pid:5999 exited_at:{seconds:1756944467 nanos:735129871}"
Sep 4 00:07:48.125693 sshd[5986]: Connection closed by 10.200.16.10 port 39114
Sep 4 00:07:48.126418 sshd-session[5982]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:48.131250 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:39114.service: Deactivated successfully.
Sep 4 00:07:48.134414 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 00:07:48.138989 systemd-logind[1706]: Session 23 logged out. Waiting for processes to exit.
Sep 4 00:07:48.140006 systemd-logind[1706]: Removed session 23.
Sep 4 00:07:48.239740 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:39120.service - OpenSSH per-connection server daemon (10.200.16.10:39120).
Sep 4 00:07:48.888672 sshd[6020]: Accepted publickey for core from 10.200.16.10 port 39120 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:48.889813 sshd-session[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:48.895180 systemd-logind[1706]: New session 24 of user core.
Sep 4 00:07:48.897691 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 00:07:49.388364 sshd[6022]: Connection closed by 10.200.16.10 port 39120
Sep 4 00:07:49.388942 sshd-session[6020]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:49.391705 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:39120.service: Deactivated successfully.
Sep 4 00:07:49.394542 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 00:07:49.396760 systemd-logind[1706]: Session 24 logged out. Waiting for processes to exit.
Sep 4 00:07:49.398423 systemd-logind[1706]: Removed session 24.
Sep 4 00:07:54.511105 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:47122.service - OpenSSH per-connection server daemon (10.200.16.10:47122).
Sep 4 00:07:55.172046 sshd[6036]: Accepted publickey for core from 10.200.16.10 port 47122 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:55.173212 sshd-session[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:55.177373 systemd-logind[1706]: New session 25 of user core.
Sep 4 00:07:55.185632 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 00:07:55.675855 sshd[6039]: Connection closed by 10.200.16.10 port 47122
Sep 4 00:07:55.676382 sshd-session[6036]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:55.679807 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:47122.service: Deactivated successfully.
Sep 4 00:07:55.681719 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 00:07:55.682879 systemd-logind[1706]: Session 25 logged out. Waiting for processes to exit.
Sep 4 00:07:55.684196 systemd-logind[1706]: Removed session 25.
Sep 4 00:08:00.795629 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.16.10:60644.service - OpenSSH per-connection server daemon (10.200.16.10:60644).
Sep 4 00:08:00.968601 containerd[1726]: time="2025-09-04T00:08:00.968560965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340\" id:\"5315edacb698c4d68763d6cdf0d06f8b636ec5bf2b7fd30d7af5fce2275b6e08\" pid:6083 exited_at:{seconds:1756944480 nanos:968094090}"
Sep 4 00:08:01.455463 sshd[6090]: Accepted publickey for core from 10.200.16.10 port 60644 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:01.457096 sshd-session[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:01.463953 systemd-logind[1706]: New session 26 of user core.
Sep 4 00:08:01.469949 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 00:08:01.957525 sshd[6096]: Connection closed by 10.200.16.10 port 60644
Sep 4 00:08:01.957449 sshd-session[6090]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:01.961841 systemd-logind[1706]: Session 26 logged out. Waiting for processes to exit.
Sep 4 00:08:01.963778 systemd[1]: sshd@23-10.200.8.39:22-10.200.16.10:60644.service: Deactivated successfully.
Sep 4 00:08:01.966046 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 00:08:01.968623 systemd-logind[1706]: Removed session 26.
Sep 4 00:08:03.777259 containerd[1726]: time="2025-09-04T00:08:03.777207620Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746\" id:\"4450f9d7cf34ee66eda646e584de3ff44f0958bba8aefe5dcbc564535e4f9cdf\" pid:6119 exited_at:{seconds:1756944483 nanos:776985183}"
Sep 4 00:08:07.079751 systemd[1]: Started sshd@24-10.200.8.39:22-10.200.16.10:60660.service - OpenSSH per-connection server daemon (10.200.16.10:60660).
Sep 4 00:08:07.747041 sshd[6131]: Accepted publickey for core from 10.200.16.10 port 60660 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:07.748414 sshd-session[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:07.756755 systemd-logind[1706]: New session 27 of user core.
Sep 4 00:08:07.765663 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 00:08:08.254232 sshd[6133]: Connection closed by 10.200.16.10 port 60660
Sep 4 00:08:08.253707 sshd-session[6131]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:08.258451 systemd-logind[1706]: Session 27 logged out. Waiting for processes to exit.
Sep 4 00:08:08.261232 systemd[1]: sshd@24-10.200.8.39:22-10.200.16.10:60660.service: Deactivated successfully.
Sep 4 00:08:08.264263 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 00:08:08.268788 systemd-logind[1706]: Removed session 27.
Sep 4 00:08:10.072340 containerd[1726]: time="2025-09-04T00:08:10.072301111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2f80bcec46d910840e7c8b166c261040a8a2f0be168920dbd86b64fdfe4e340\" id:\"588def8c9a0210837ab42a8fe5659987b0d14a16f205deab264a30609a2b5971\" pid:6156 exited_at:{seconds:1756944490 nanos:72032756}"
Sep 4 00:08:13.372636 systemd[1]: Started sshd@25-10.200.8.39:22-10.200.16.10:53064.service - OpenSSH per-connection server daemon (10.200.16.10:53064).
Sep 4 00:08:14.020611 sshd[6168]: Accepted publickey for core from 10.200.16.10 port 53064 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:14.022971 sshd-session[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:14.028539 systemd-logind[1706]: New session 28 of user core.
Sep 4 00:08:14.036684 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 00:08:14.518722 sshd[6170]: Connection closed by 10.200.16.10 port 53064
Sep 4 00:08:14.520234 sshd-session[6168]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:14.523368 systemd-logind[1706]: Session 28 logged out. Waiting for processes to exit.
Sep 4 00:08:14.524054 systemd[1]: sshd@25-10.200.8.39:22-10.200.16.10:53064.service: Deactivated successfully.
Sep 4 00:08:14.525853 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 00:08:14.527435 systemd-logind[1706]: Removed session 28.
Sep 4 00:08:17.066082 containerd[1726]: time="2025-09-04T00:08:17.066027056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abdf506afe4827d1441613460254129f2f412d81eddc5b9e0d239f8ade521746\" id:\"3581fbacd136be80a7f26f2d7e2c0b28f9bb0329b523bdf9bdfe8bab69a0ce96\" pid:6193 exited_at:{seconds:1756944497 nanos:65800801}"
Sep 4 00:08:17.745303 containerd[1726]: time="2025-09-04T00:08:17.745245610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf94ebe5684bdd22f6d8dc21cfdf6bf0f22ad313d5d230598c5079ba39bbeb0c\" id:\"301b71ca38610e86ca03e83b5f59e4f1066060f3d956dc7bc116e43cfc939167\" pid:6215 exited_at:{seconds:1756944497 nanos:745004932}"
Sep 4 00:08:19.660421 systemd[1]: Started sshd@26-10.200.8.39:22-10.200.16.10:53066.service - OpenSSH per-connection server daemon (10.200.16.10:53066).
Sep 4 00:08:20.303376 sshd[6228]: Accepted publickey for core from 10.200.16.10 port 53066 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:20.305044 sshd-session[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:20.312638 systemd-logind[1706]: New session 29 of user core.
Sep 4 00:08:20.318807 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 4 00:08:20.806560 sshd[6230]: Connection closed by 10.200.16.10 port 53066
Sep 4 00:08:20.807091 sshd-session[6228]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:20.810273 systemd[1]: sshd@26-10.200.8.39:22-10.200.16.10:53066.service: Deactivated successfully.
Sep 4 00:08:20.812105 systemd[1]: session-29.scope: Deactivated successfully.
Sep 4 00:08:20.812936 systemd-logind[1706]: Session 29 logged out. Waiting for processes to exit.
Sep 4 00:08:20.814944 systemd-logind[1706]: Removed session 29.