Sep 4 00:04:49.933861 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 3 22:05:39 -00 2025
Sep 4 00:04:49.933886 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:04:49.933897 kernel: BIOS-provided physical RAM map:
Sep 4 00:04:49.933904 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 00:04:49.933911 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 4 00:04:49.933918 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Sep 4 00:04:49.933928 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Sep 4 00:04:49.933935 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Sep 4 00:04:49.933942 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Sep 4 00:04:49.933949 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 4 00:04:49.933956 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 4 00:04:49.933963 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 4 00:04:49.933970 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 4 00:04:49.933977 kernel: printk: legacy bootconsole [earlyser0] enabled
Sep 4 00:04:49.933986 kernel: NX (Execute Disable) protection: active
Sep 4 00:04:49.933994 kernel: APIC: Static calls initialized
Sep 4 00:04:49.934001 kernel: efi: EFI v2.7 by Microsoft
Sep 4 00:04:49.934009 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eac0018 RNG=0x3ffd2018
Sep 4 00:04:49.934016 kernel: random: crng init done
Sep 4 00:04:49.934024 kernel: secureboot: Secure boot disabled
Sep 4 00:04:49.934031 kernel: SMBIOS 3.1.0 present.
Sep 4 00:04:49.934038 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Sep 4 00:04:49.934047 kernel: DMI: Memory slots populated: 2/2
Sep 4 00:04:49.934054 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 4 00:04:49.934062 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Sep 4 00:04:49.934069 kernel: Hyper-V: Nested features: 0x3e0101
Sep 4 00:04:49.934076 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 4 00:04:49.934084 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 4 00:04:49.934091 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 4 00:04:49.934099 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 4 00:04:49.934106 kernel: tsc: Detected 2299.999 MHz processor
Sep 4 00:04:49.934114 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 00:04:49.934122 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 00:04:49.934131 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Sep 4 00:04:49.934139 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 00:04:49.934147 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 00:04:49.934154 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Sep 4 00:04:49.934162 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Sep 4 00:04:49.934169 kernel: Using GB pages for direct mapping
Sep 4 00:04:49.934177 kernel: ACPI: Early table checksum verification disabled
Sep 4 00:04:49.935872 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 4 00:04:49.935890 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:04:49.935897 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:04:49.935903 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 4 00:04:49.935910 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 4 00:04:49.935917 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:04:49.935924 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:04:49.935931 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:04:49.935938 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 4 00:04:49.935945 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 4 00:04:49.935952 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 00:04:49.935959 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 4 00:04:49.935966 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Sep 4 00:04:49.935972 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 4 00:04:49.935979 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 4 00:04:49.935987 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 4 00:04:49.935993 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 4 00:04:49.936000 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Sep 4 00:04:49.936007 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Sep 4 00:04:49.936014 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 4 00:04:49.936020 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 4 00:04:49.936027 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Sep 4 00:04:49.936034 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Sep 4 00:04:49.936041 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Sep 4 00:04:49.936049 kernel: Zone ranges:
Sep 4 00:04:49.936056 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 00:04:49.936063 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 4 00:04:49.936070 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 4 00:04:49.936076 kernel: Device empty
Sep 4 00:04:49.936083 kernel: Movable zone start for each node
Sep 4 00:04:49.936090 kernel: Early memory node ranges
Sep 4 00:04:49.936097 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 4 00:04:49.936103 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Sep 4 00:04:49.936110 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Sep 4 00:04:49.936118 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 4 00:04:49.936124 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 4 00:04:49.936131 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 4 00:04:49.936138 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 00:04:49.936145 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 4 00:04:49.936151 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 4 00:04:49.936158 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Sep 4 00:04:49.936165 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 4 00:04:49.936172 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 00:04:49.936180 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 00:04:49.936202 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 00:04:49.936210 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 4 00:04:49.936216 kernel: TSC deadline timer available
Sep 4 00:04:49.936223 kernel: CPU topo: Max. logical packages: 1
Sep 4 00:04:49.936230 kernel: CPU topo: Max. logical dies: 1
Sep 4 00:04:49.936237 kernel: CPU topo: Max. dies per package: 1
Sep 4 00:04:49.936243 kernel: CPU topo: Max. threads per core: 2
Sep 4 00:04:49.936250 kernel: CPU topo: Num. cores per package: 1
Sep 4 00:04:49.936258 kernel: CPU topo: Num. threads per package: 2
Sep 4 00:04:49.936265 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 4 00:04:49.936271 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 4 00:04:49.936278 kernel: Booting paravirtualized kernel on Hyper-V
Sep 4 00:04:49.936285 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 00:04:49.936292 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 00:04:49.936299 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 4 00:04:49.936305 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 4 00:04:49.936312 kernel: pcpu-alloc: [0] 0 1
Sep 4 00:04:49.936321 kernel: Hyper-V: PV spinlocks enabled
Sep 4 00:04:49.936327 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 00:04:49.936335 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 4 00:04:49.936343 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 00:04:49.936350 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 4 00:04:49.936357 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 00:04:49.936363 kernel: Fallback order for Node 0: 0
Sep 4 00:04:49.936370 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Sep 4 00:04:49.936379 kernel: Policy zone: Normal
Sep 4 00:04:49.936385 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 00:04:49.936392 kernel: software IO TLB: area num 2.
Sep 4 00:04:49.936398 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 00:04:49.936405 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 4 00:04:49.936412 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 00:04:49.936419 kernel: Dynamic Preempt: voluntary
Sep 4 00:04:49.936426 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 00:04:49.936434 kernel: rcu: RCU event tracing is enabled.
Sep 4 00:04:49.936447 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 00:04:49.936455 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 00:04:49.936462 kernel: Rude variant of Tasks RCU enabled.
Sep 4 00:04:49.936470 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 00:04:49.936478 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 00:04:49.936485 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 00:04:49.936492 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:04:49.936500 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:04:49.936507 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 00:04:49.936514 kernel: Using NULL legacy PIC
Sep 4 00:04:49.936522 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 4 00:04:49.936530 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 00:04:49.936537 kernel: Console: colour dummy device 80x25
Sep 4 00:04:49.936544 kernel: printk: legacy console [tty1] enabled
Sep 4 00:04:49.936551 kernel: printk: legacy console [ttyS0] enabled
Sep 4 00:04:49.936559 kernel: printk: legacy bootconsole [earlyser0] disabled
Sep 4 00:04:49.936567 kernel: ACPI: Core revision 20240827
Sep 4 00:04:49.936575 kernel: Failed to register legacy timer interrupt
Sep 4 00:04:49.936582 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 00:04:49.936589 kernel: x2apic enabled
Sep 4 00:04:49.936596 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 00:04:49.936603 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0
Sep 4 00:04:49.936611 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 4 00:04:49.936618 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Sep 4 00:04:49.936626 kernel: Hyper-V: Using IPI hypercalls
Sep 4 00:04:49.936634 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 4 00:04:49.936642 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 4 00:04:49.936649 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 4 00:04:49.936657 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 4 00:04:49.936664 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 4 00:04:49.936671 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 4 00:04:49.936678 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Sep 4 00:04:49.936686 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Sep 4 00:04:49.936693 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 00:04:49.936702 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 4 00:04:49.936709 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 4 00:04:49.936716 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 00:04:49.936723 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 00:04:49.936730 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 00:04:49.936737 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 4 00:04:49.936745 kernel: RETBleed: Vulnerable
Sep 4 00:04:49.936752 kernel: Speculative Store Bypass: Vulnerable
Sep 4 00:04:49.936759 kernel: active return thunk: its_return_thunk
Sep 4 00:04:49.936766 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 4 00:04:49.936772 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 00:04:49.936781 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 00:04:49.936788 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 00:04:49.936794 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 4 00:04:49.936801 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 4 00:04:49.936808 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 4 00:04:49.936816 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Sep 4 00:04:49.936823 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Sep 4 00:04:49.936830 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Sep 4 00:04:49.936837 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 00:04:49.936844 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 4 00:04:49.936851 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 4 00:04:49.936860 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 4 00:04:49.936867 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Sep 4 00:04:49.936874 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Sep 4 00:04:49.936881 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Sep 4 00:04:49.936888 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Sep 4 00:04:49.936895 kernel: Freeing SMP alternatives memory: 32K
Sep 4 00:04:49.936902 kernel: pid_max: default: 32768 minimum: 301
Sep 4 00:04:49.936909 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 00:04:49.936916 kernel: landlock: Up and running.
Sep 4 00:04:49.936923 kernel: SELinux: Initializing.
Sep 4 00:04:49.936931 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 00:04:49.936939 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 00:04:49.936946 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Sep 4 00:04:49.936953 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Sep 4 00:04:49.936961 kernel: signal: max sigframe size: 11952
Sep 4 00:04:49.936968 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 00:04:49.936976 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 00:04:49.936983 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 00:04:49.936990 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 00:04:49.936998 kernel: smp: Bringing up secondary CPUs ...
Sep 4 00:04:49.937005 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 00:04:49.937013 kernel: .... node #0, CPUs: #1
Sep 4 00:04:49.937021 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 00:04:49.937028 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 4 00:04:49.937036 kernel: Memory: 8079080K/8383228K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 297940K reserved, 0K cma-reserved)
Sep 4 00:04:49.937043 kernel: devtmpfs: initialized
Sep 4 00:04:49.937050 kernel: x86/mm: Memory block size: 128MB
Sep 4 00:04:49.937058 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 4 00:04:49.937065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 00:04:49.937072 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 00:04:49.937081 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 00:04:49.937088 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 00:04:49.937096 kernel: audit: initializing netlink subsys (disabled)
Sep 4 00:04:49.937102 kernel: audit: type=2000 audit(1756944287.027:1): state=initialized audit_enabled=0 res=1
Sep 4 00:04:49.937109 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 00:04:49.937116 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 00:04:49.937123 kernel: cpuidle: using governor menu
Sep 4 00:04:49.937131 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 00:04:49.937138 kernel: dca service started, version 1.12.1
Sep 4 00:04:49.937146 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Sep 4 00:04:49.937154 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Sep 4 00:04:49.937161 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 00:04:49.937168 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 00:04:49.937176 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 00:04:49.937183 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 00:04:49.937198 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 00:04:49.937206 kernel: ACPI: Added _OSI(Module Device)
Sep 4 00:04:49.937214 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 00:04:49.937222 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 00:04:49.937229 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 00:04:49.937237 kernel: ACPI: Interpreter enabled
Sep 4 00:04:49.937244 kernel: ACPI: PM: (supports S0 S5)
Sep 4 00:04:49.937251 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 00:04:49.937259 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 00:04:49.937266 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 4 00:04:49.937273 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 4 00:04:49.937281 kernel: iommu: Default domain type: Translated
Sep 4 00:04:49.937289 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 00:04:49.937297 kernel: efivars: Registered efivars operations
Sep 4 00:04:49.937304 kernel: PCI: Using ACPI for IRQ routing
Sep 4 00:04:49.937311 kernel: PCI: System does not support PCI
Sep 4 00:04:49.937318 kernel: vgaarb: loaded
Sep 4 00:04:49.937325 kernel: clocksource: Switched to clocksource tsc-early
Sep 4 00:04:49.937332 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 00:04:49.937340 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 00:04:49.937347 kernel: pnp: PnP ACPI init
Sep 4 00:04:49.937356 kernel: pnp: PnP ACPI: found 3 devices
Sep 4 00:04:49.937363 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 00:04:49.937370 kernel: NET: Registered PF_INET protocol family
Sep 4 00:04:49.937377 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 00:04:49.937384 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 4 00:04:49.937392 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 00:04:49.937399 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 00:04:49.937406 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 4 00:04:49.937415 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 4 00:04:49.937423 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 00:04:49.937430 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 00:04:49.937437 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 00:04:49.937444 kernel: NET: Registered PF_XDP protocol family
Sep 4 00:04:49.937451 kernel: PCI: CLS 0 bytes, default 64
Sep 4 00:04:49.937458 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 4 00:04:49.937465 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB)
Sep 4 00:04:49.937473 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Sep 4 00:04:49.937482 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Sep 4 00:04:49.937489 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Sep 4 00:04:49.937496 kernel: clocksource: Switched to clocksource tsc
Sep 4 00:04:49.937503 kernel: Initialise system trusted keyrings
Sep 4 00:04:49.937511 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 4 00:04:49.937518 kernel: Key type asymmetric registered
Sep 4 00:04:49.937525 kernel: Asymmetric key parser 'x509' registered
Sep 4 00:04:49.937532 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 00:04:49.937540 kernel: io scheduler mq-deadline registered
Sep 4 00:04:49.937548 kernel: io scheduler kyber registered
Sep 4 00:04:49.937556 kernel: io scheduler bfq registered
Sep 4 00:04:49.937563 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 00:04:49.937570 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 00:04:49.937577 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 00:04:49.937585 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 4 00:04:49.937592 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 00:04:49.937599 kernel: i8042: PNP: No PS/2 controller found.
Sep 4 00:04:49.937715 kernel: rtc_cmos 00:02: registered as rtc0
Sep 4 00:04:49.937807 kernel: rtc_cmos 00:02: setting system clock to 2025-09-04T00:04:49 UTC (1756944289)
Sep 4 00:04:49.937884 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 4 00:04:49.937896 kernel: intel_pstate: Intel P-state driver initializing
Sep 4 00:04:49.937905 kernel: efifb: probing for efifb
Sep 4 00:04:49.937914 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 4 00:04:49.937922 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 4 00:04:49.937930 kernel: efifb: scrolling: redraw
Sep 4 00:04:49.937938 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 00:04:49.937949 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 00:04:49.937957 kernel: fb0: EFI VGA frame buffer device
Sep 4 00:04:49.937965 kernel: pstore: Using crash dump compression: deflate
Sep 4 00:04:49.937974 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 00:04:49.937982 kernel: NET: Registered PF_INET6 protocol family
Sep 4 00:04:49.937990 kernel: Segment Routing with IPv6
Sep 4 00:04:49.937998 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 00:04:49.938007 kernel: NET: Registered PF_PACKET protocol family
Sep 4 00:04:49.938015 kernel: Key type dns_resolver registered
Sep 4 00:04:49.938024 kernel: IPI shorthand broadcast: enabled
Sep 4 00:04:49.938033 kernel: sched_clock: Marking stable (2586003115, 81029798)->(2933033677, -266000764)
Sep 4 00:04:49.938042 kernel: registered taskstats version 1
Sep 4 00:04:49.938051 kernel: Loading compiled-in X.509 certificates
Sep 4 00:04:49.938061 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 247a8159a15e16f8eb89737aa66cd9cf9bbb3c10'
Sep 4 00:04:49.938069 kernel: Demotion targets for Node 0: null
Sep 4 00:04:49.938077 kernel: Key type .fscrypt registered
Sep 4 00:04:49.938086 kernel: Key type fscrypt-provisioning registered
Sep 4 00:04:49.938095 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 00:04:49.938107 kernel: ima: Allocated hash algorithm: sha1
Sep 4 00:04:49.938114 kernel: ima: No architecture policies found
Sep 4 00:04:49.938124 kernel: clk: Disabling unused clocks
Sep 4 00:04:49.938132 kernel: Warning: unable to open an initial console.
Sep 4 00:04:49.938139 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 4 00:04:49.938147 kernel: Write protecting the kernel read-only data: 24576k
Sep 4 00:04:49.938155 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 4 00:04:49.938162 kernel: Run /init as init process
Sep 4 00:04:49.938170 kernel: with arguments:
Sep 4 00:04:49.938178 kernel: /init
Sep 4 00:04:49.938184 kernel: with environment:
Sep 4 00:04:49.939269 kernel: HOME=/
Sep 4 00:04:49.939278 kernel: TERM=linux
Sep 4 00:04:49.939287 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 00:04:49.939297 systemd[1]: Successfully made /usr/ read-only.
Sep 4 00:04:49.939309 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:04:49.939321 systemd[1]: Detected virtualization microsoft.
Sep 4 00:04:49.939329 systemd[1]: Detected architecture x86-64.
Sep 4 00:04:49.939337 systemd[1]: Running in initrd.
Sep 4 00:04:49.939345 systemd[1]: No hostname configured, using default hostname.
Sep 4 00:04:49.939354 systemd[1]: Hostname set to .
Sep 4 00:04:49.939362 systemd[1]: Initializing machine ID from random generator.
Sep 4 00:04:49.939370 systemd[1]: Queued start job for default target initrd.target.
Sep 4 00:04:49.939378 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:04:49.939386 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:04:49.939396 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 00:04:49.939404 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:04:49.939413 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 00:04:49.939420 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 00:04:49.939429 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 00:04:49.939437 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 00:04:49.939445 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:04:49.939452 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:04:49.939460 systemd[1]: Reached target paths.target - Path Units.
Sep 4 00:04:49.939467 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:04:49.939475 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:04:49.939482 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 00:04:49.939489 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:04:49.939497 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:04:49.939504 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 00:04:49.939513 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 00:04:49.939521 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:04:49.939529 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:04:49.939536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:04:49.939544 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 00:04:49.939552 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 00:04:49.939559 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:04:49.939567 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 00:04:49.939576 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 4 00:04:49.939584 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 00:04:49.939592 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:04:49.939607 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:04:49.939616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:04:49.939642 systemd-journald[205]: Collecting audit messages is disabled.
Sep 4 00:04:49.939664 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 00:04:49.939674 systemd-journald[205]: Journal started
Sep 4 00:04:49.939695 systemd-journald[205]: Runtime Journal (/run/log/journal/0cfa63d732214944b90ff3e76d2c9ea5) is 8M, max 158.9M, 150.9M free.
Sep 4 00:04:49.944439 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 00:04:49.946465 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:04:49.949111 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 00:04:49.953721 systemd-modules-load[207]: Inserted module 'overlay'
Sep 4 00:04:49.958327 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 00:04:49.965115 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 00:04:49.978760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:04:49.983459 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 00:04:49.994210 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 4 00:04:50.001376 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 00:04:50.001401 kernel: Bridge firewalling registered
Sep 4 00:04:49.997659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 00:04:50.001730 systemd-modules-load[207]: Inserted module 'br_netfilter'
Sep 4 00:04:50.004601 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:04:50.005078 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:04:50.008299 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:04:50.009162 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 00:04:50.027332 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 00:04:50.030800 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 00:04:50.042277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:04:50.046446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 00:04:50.055080 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e Sep 4 00:04:50.067377 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 00:04:50.095210 systemd-resolved[262]: Positive Trust Anchors: Sep 4 00:04:50.095712 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 00:04:50.095743 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 00:04:50.113998 systemd-resolved[262]: Defaulting to hostname 'linux'. Sep 4 00:04:50.116669 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 00:04:50.119079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:04:50.126214 kernel: SCSI subsystem initialized Sep 4 00:04:50.132207 kernel: Loading iSCSI transport class v2.0-870. Sep 4 00:04:50.140203 kernel: iscsi: registered transport (tcp) Sep 4 00:04:50.155342 kernel: iscsi: registered transport (qla4xxx) Sep 4 00:04:50.155382 kernel: QLogic iSCSI HBA Driver Sep 4 00:04:50.166641 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 00:04:50.179102 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:04:50.181279 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 00:04:50.210350 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 00:04:50.211372 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 00:04:50.256204 kernel: raid6: avx512x4 gen() 45193 MB/s Sep 4 00:04:50.273202 kernel: raid6: avx512x2 gen() 45013 MB/s Sep 4 00:04:50.290196 kernel: raid6: avx512x1 gen() 30316 MB/s Sep 4 00:04:50.307198 kernel: raid6: avx2x4 gen() 42007 MB/s Sep 4 00:04:50.325198 kernel: raid6: avx2x2 gen() 43583 MB/s Sep 4 00:04:50.342653 kernel: raid6: avx2x1 gen() 31280 MB/s Sep 4 00:04:50.342669 kernel: raid6: using algorithm avx512x4 gen() 45193 MB/s Sep 4 00:04:50.360577 kernel: raid6: .... xor() 7976 MB/s, rmw enabled Sep 4 00:04:50.360597 kernel: raid6: using avx512x2 recovery algorithm Sep 4 00:04:50.377203 kernel: xor: automatically using best checksumming function avx Sep 4 00:04:50.482204 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 00:04:50.485677 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 00:04:50.489228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:04:50.513183 systemd-udevd[454]: Using default interface naming scheme 'v255'. Sep 4 00:04:50.517555 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:04:50.525677 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 00:04:50.542237 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Sep 4 00:04:50.558177 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 00:04:50.561887 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 00:04:50.593390 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 00:04:50.596877 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 00:04:50.635206 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 00:04:50.647530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:04:50.647618 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 00:04:50.656281 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:04:50.660242 kernel: AES CTR mode by8 optimization enabled Sep 4 00:04:50.663169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:04:50.673202 kernel: hv_vmbus: Vmbus version:5.3 Sep 4 00:04:50.675267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:04:50.675343 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:04:50.678085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:04:50.696461 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 00:04:50.696489 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 00:04:50.698397 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 4 00:04:50.702527 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 00:04:50.705205 kernel: hv_vmbus: registering driver hv_netvsc Sep 4 00:04:50.708223 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 4 00:04:50.715201 kernel: PTP clock support registered Sep 4 00:04:50.725153 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 00:04:50.728133 kernel: hv_vmbus: registering driver hv_pci Sep 4 00:04:50.733205 kernel: hv_vmbus: registering driver hid_hyperv Sep 4 00:04:50.746332 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523462af (unnamed net_device) (uninitialized): VF slot 1 added Sep 4 00:04:50.746484 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 4 00:04:50.747726 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 4 00:04:50.755217 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Sep 4 00:04:50.763904 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Sep 4 00:04:50.764070 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Sep 4 00:04:50.765532 kernel: hv_utils: Registering HyperV Utility Driver Sep 4 00:04:50.767065 kernel: hv_vmbus: registering driver hv_utils Sep 4 00:04:50.771466 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 00:04:50.781745 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Sep 4 00:04:50.781788 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Sep 4 00:04:50.786658 kernel: hv_utils: Shutdown IC version 3.2 Sep 4 00:04:50.786692 kernel: hv_vmbus: registering driver hv_storvsc Sep 4 00:04:50.788287 kernel: hv_utils: Heartbeat IC version 3.0 Sep 4 00:04:51.174328 kernel: hv_utils: TimeSync IC version 4.0 Sep 4 00:04:51.174358 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 00:04:51.174336 systemd-resolved[262]: Clock change detected. Flushing caches. 
Sep 4 00:04:51.179475 kernel: scsi host0: storvsc_host_t Sep 4 00:04:51.179652 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Sep 4 00:04:51.179796 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 4 00:04:51.190692 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 4 00:04:51.190867 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 00:04:51.191620 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 4 00:04:51.196978 kernel: nvme nvme0: pci function c05b:00:00.0 Sep 4 00:04:51.197159 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Sep 4 00:04:51.208621 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#220 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 4 00:04:51.222624 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#243 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 4 00:04:51.341677 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 4 00:04:51.345614 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 00:04:51.406616 kernel: nvme nvme0: using unchecked data buffer Sep 4 00:04:51.475799 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Sep 4 00:04:51.500905 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Sep 4 00:04:51.503918 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 00:04:51.513319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 4 00:04:51.526164 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Sep 4 00:04:51.528042 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Sep 4 00:04:51.531544 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 4 00:04:51.535542 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:04:51.537053 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 00:04:51.541706 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 00:04:51.542405 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 00:04:51.568402 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 00:04:51.581622 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 00:04:51.594619 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 00:04:52.151190 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Sep 4 00:04:52.151340 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Sep 4 00:04:52.153922 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Sep 4 00:04:52.155464 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 00:04:52.159715 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Sep 4 00:04:52.162650 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Sep 4 00:04:52.175689 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Sep 4 00:04:52.177707 kernel: pci 7870:00:00.0: enabling Extended Tags Sep 4 00:04:52.191946 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 00:04:52.192159 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Sep 4 00:04:52.192276 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Sep 4 00:04:52.198520 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Sep 4 00:04:52.207630 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Sep 4 00:04:52.210118 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523462af eth0: VF registering: eth1 Sep 4 00:04:52.210252 kernel: mana 7870:00:00.0 eth1: joined to eth0 Sep 4 00:04:52.213620 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Sep 4 00:04:52.599089 disk-uuid[676]: The operation has completed successfully. Sep 4 00:04:52.601255 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 00:04:52.634476 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 00:04:52.634542 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 00:04:52.667919 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 00:04:52.691270 sh[713]: Success Sep 4 00:04:52.708204 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 00:04:52.708238 kernel: device-mapper: uevent: version 1.0.3 Sep 4 00:04:52.710028 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 4 00:04:52.719626 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 4 00:04:52.779881 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 00:04:52.784013 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 00:04:52.792452 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 00:04:52.803832 kernel: BTRFS: device fsid 8a9c2e34-3d3c-49a9-acce-59bf90003071 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (726) Sep 4 00:04:52.803861 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9c2e34-3d3c-49a9-acce-59bf90003071 Sep 4 00:04:52.804665 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:04:52.864943 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 00:04:52.864979 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 00:04:52.866158 kernel: BTRFS info (device dm-0): enabling free space tree Sep 4 00:04:52.873799 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 00:04:52.875562 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 4 00:04:52.877969 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 00:04:52.878471 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 00:04:52.890088 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 00:04:52.907620 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (761) Sep 4 00:04:52.910067 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:04:52.910094 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:04:52.922876 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 00:04:52.922909 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 4 00:04:52.922974 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 4 00:04:52.928629 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:04:52.929993 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 00:04:52.934978 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 00:04:52.961249 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 00:04:52.963709 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 4 00:04:52.986805 systemd-networkd[895]: lo: Link UP Sep 4 00:04:52.993579 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 4 00:04:52.994047 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 4 00:04:52.986811 systemd-networkd[895]: lo: Gained carrier Sep 4 00:04:52.996841 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523462af eth0: Data path switched to VF: enP30832s1 Sep 4 00:04:52.987629 systemd-networkd[895]: Enumeration completed Sep 4 00:04:52.987935 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:04:52.987938 systemd-networkd[895]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 00:04:52.988298 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 00:04:52.990013 systemd[1]: Reached target network.target - Network. Sep 4 00:04:52.997219 systemd-networkd[895]: enP30832s1: Link UP Sep 4 00:04:52.997275 systemd-networkd[895]: eth0: Link UP Sep 4 00:04:52.997354 systemd-networkd[895]: eth0: Gained carrier Sep 4 00:04:52.997364 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:04:53.006102 systemd-networkd[895]: enP30832s1: Gained carrier Sep 4 00:04:53.013093 systemd-networkd[895]: eth0: DHCPv4 address 10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 4 00:04:53.286227 ignition[838]: Ignition 2.21.0 Sep 4 00:04:53.286238 ignition[838]: Stage: fetch-offline Sep 4 00:04:53.288175 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 00:04:53.286317 ignition[838]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:53.293028 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 00:04:53.286324 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:53.286405 ignition[838]: parsed url from cmdline: "" Sep 4 00:04:53.286407 ignition[838]: no config URL provided Sep 4 00:04:53.286411 ignition[838]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 00:04:53.286416 ignition[838]: no config at "/usr/lib/ignition/user.ign" Sep 4 00:04:53.286419 ignition[838]: failed to fetch config: resource requires networking Sep 4 00:04:53.286595 ignition[838]: Ignition finished successfully Sep 4 00:04:53.311825 ignition[911]: Ignition 2.21.0 Sep 4 00:04:53.311834 ignition[911]: Stage: fetch Sep 4 00:04:53.312003 ignition[911]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:53.312009 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:53.312069 ignition[911]: parsed url from cmdline: "" Sep 4 00:04:53.312071 ignition[911]: no config URL provided Sep 4 00:04:53.312075 ignition[911]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 00:04:53.312080 ignition[911]: no config at "/usr/lib/ignition/user.ign" Sep 4 00:04:53.312103 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 4 00:04:53.477116 ignition[911]: GET result: OK Sep 4 00:04:53.477170 ignition[911]: config has been read from IMDS userdata Sep 4 00:04:53.477194 ignition[911]: parsing config with SHA512: e4362ec4f795be873cba74d90c3b8de9a0b2202b5779b517560967dda23a62d120759347ebd044f9aca4d803f146cb1c07b07816f84062335ff5892aae67277a Sep 4 00:04:53.482800 unknown[911]: fetched base config from "system" Sep 4 00:04:53.482806 unknown[911]: fetched base config from "system" Sep 4 00:04:53.483066 ignition[911]: fetch: fetch complete Sep 4 00:04:53.482811 unknown[911]: fetched user config from "azure" Sep 4 00:04:53.483070 ignition[911]: fetch: fetch passed Sep 4 00:04:53.485087 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 00:04:53.483100 ignition[911]: Ignition finished successfully Sep 4 00:04:53.489714 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 00:04:53.506965 ignition[918]: Ignition 2.21.0 Sep 4 00:04:53.506974 ignition[918]: Stage: kargs Sep 4 00:04:53.507142 ignition[918]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:53.510445 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 00:04:53.507149 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:53.509353 ignition[918]: kargs: kargs passed Sep 4 00:04:53.509394 ignition[918]: Ignition finished successfully Sep 4 00:04:53.518105 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 00:04:53.531736 ignition[925]: Ignition 2.21.0 Sep 4 00:04:53.531745 ignition[925]: Stage: disks Sep 4 00:04:53.533600 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 00:04:53.531899 ignition[925]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:53.534842 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 00:04:53.531906 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:53.535231 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 00:04:53.532482 ignition[925]: disks: disks passed Sep 4 00:04:53.535254 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 00:04:53.532509 ignition[925]: Ignition finished successfully Sep 4 00:04:53.535272 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 00:04:53.535535 systemd[1]: Reached target basic.target - Basic System. Sep 4 00:04:53.536134 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 00:04:53.579566 systemd-fsck[933]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 4 00:04:53.583198 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 00:04:53.587177 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 00:04:53.725507 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 00:04:53.730501 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c3518c93-f823-4477-a620-ff9666a59be5 r/w with ordered data mode. Quota mode: none. Sep 4 00:04:53.727928 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 00:04:53.735712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 00:04:53.741681 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 00:04:53.745720 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 00:04:53.750015 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 00:04:53.754030 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (942) Sep 4 00:04:53.750673 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 00:04:53.758482 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:04:53.758512 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:04:53.760209 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 00:04:53.764841 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 4 00:04:53.772203 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 00:04:53.772229 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 4 00:04:53.772241 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 4 00:04:53.774774 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 00:04:53.882524 coreos-metadata[944]: Sep 04 00:04:53.882 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 00:04:53.885725 coreos-metadata[944]: Sep 04 00:04:53.885 INFO Fetch successful Sep 4 00:04:53.887085 coreos-metadata[944]: Sep 04 00:04:53.885 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 00:04:53.891935 coreos-metadata[944]: Sep 04 00:04:53.891 INFO Fetch successful Sep 4 00:04:53.896340 coreos-metadata[944]: Sep 04 00:04:53.896 INFO wrote hostname ci-4372.1.0-n-f08c63113b to /sysroot/etc/hostname Sep 4 00:04:53.898257 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 00:04:53.908460 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 00:04:53.918005 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Sep 4 00:04:53.929765 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 00:04:53.936333 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 00:04:54.137696 systemd-networkd[895]: eth0: Gained IPv6LL Sep 4 00:04:54.269432 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 00:04:54.272956 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 00:04:54.275203 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 00:04:54.291554 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 4 00:04:54.293630 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:04:54.306878 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 00:04:54.316556 ignition[1062]: INFO : Ignition 2.21.0 Sep 4 00:04:54.316556 ignition[1062]: INFO : Stage: mount Sep 4 00:04:54.321525 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:54.321525 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:54.321525 ignition[1062]: INFO : mount: mount passed Sep 4 00:04:54.321525 ignition[1062]: INFO : Ignition finished successfully Sep 4 00:04:54.318852 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 00:04:54.321643 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 00:04:54.339960 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 00:04:54.355721 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1072) Sep 4 00:04:54.355750 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:04:54.356734 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:04:54.361347 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 00:04:54.361375 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 4 00:04:54.363105 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 4 00:04:54.364490 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 00:04:54.384628 ignition[1089]: INFO : Ignition 2.21.0 Sep 4 00:04:54.384628 ignition[1089]: INFO : Stage: files Sep 4 00:04:54.389280 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:04:54.389280 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 00:04:54.389280 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Sep 4 00:04:54.389280 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 00:04:54.389280 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 00:04:54.402638 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 00:04:54.402638 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 00:04:54.402638 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 00:04:54.396592 unknown[1089]: wrote ssh authorized keys file for user: core Sep 4 00:04:54.411937 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 00:04:54.411937 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 00:04:54.537869 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 00:04:54.624390 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:04:54.637659 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 4 00:04:55.105285 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 00:04:55.700436 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 4 00:04:55.700436 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 00:04:55.705538 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 00:04:55.713539 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 00:04:55.713539 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 00:04:55.713539 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 00:04:55.713539 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 00:04:55.713539 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 00:04:55.713539 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 00:04:55.713539 ignition[1089]: INFO : files: files passed Sep 4 00:04:55.713539 ignition[1089]: INFO : Ignition finished successfully Sep 4 00:04:55.718443 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 00:04:55.733288 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 00:04:55.735397 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 00:04:55.745474 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 00:04:55.745564 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 00:04:55.755128 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:04:55.755128 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:04:55.767130 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 00:04:55.758953 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 00:04:55.761271 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 00:04:55.762082 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 00:04:55.799034 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 00:04:55.799118 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 00:04:55.803812 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 00:04:55.808663 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 00:04:55.810078 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 00:04:55.811708 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 00:04:55.823932 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:04:55.829418 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 00:04:55.854174 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:04:55.854372 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:04:55.854578 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 00:04:55.854893 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 00:04:55.854993 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 00:04:55.855506 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 00:04:55.856028 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 00:04:55.856544 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 00:04:55.857095 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 00:04:55.857562 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 00:04:55.857889 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 00:04:55.858439 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 00:04:55.858712 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 00:04:55.859280 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 00:04:55.859837 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 00:04:55.860153 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 00:04:55.860640 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 00:04:55.860732 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 00:04:55.861279 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:04:55.861564 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:04:55.861854 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 00:04:55.866347 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:04:55.886667 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 00:04:55.886784 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 00:04:55.916862 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 00:04:55.916968 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 00:04:55.920166 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 00:04:55.920253 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 00:04:55.921017 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 4 00:04:55.921097 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 00:04:55.921876 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 00:04:55.922229 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 00:04:55.922311 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:04:55.924735 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 00:04:55.924972 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 00:04:55.956357 ignition[1144]: INFO : Ignition 2.21.0
Sep 4 00:04:55.956357 ignition[1144]: INFO : Stage: umount
Sep 4 00:04:55.956357 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:04:55.956357 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 00:04:55.956357 ignition[1144]: INFO : umount: umount passed
Sep 4 00:04:55.956357 ignition[1144]: INFO : Ignition finished successfully
Sep 4 00:04:55.925064 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:04:55.925652 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 00:04:55.925725 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 00:04:55.935836 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 00:04:55.939925 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 00:04:55.956192 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 00:04:55.956270 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 00:04:55.959128 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 00:04:55.959202 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 00:04:55.962687 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 00:04:55.962755 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 00:04:55.967695 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 00:04:55.967738 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 00:04:55.973103 systemd[1]: Stopped target network.target - Network.
Sep 4 00:04:55.973258 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 00:04:55.973305 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 00:04:55.973817 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 00:04:55.973836 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 00:04:55.976476 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:04:55.982600 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 00:04:55.999067 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 00:04:56.002681 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 00:04:56.002717 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 00:04:56.007160 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 00:04:56.007193 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 00:04:56.008468 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 00:04:56.008507 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 00:04:56.009025 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 00:04:56.009054 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 00:04:56.009409 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 00:04:56.009702 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 00:04:56.010791 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 00:04:56.018350 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 00:04:56.018423 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 00:04:56.024717 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 00:04:56.027061 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 00:04:56.027128 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 00:04:56.030878 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 00:04:56.031272 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 00:04:56.071094 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523462af eth0: Data path switched from VF: enP30832s1
Sep 4 00:04:56.034685 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 00:04:56.034720 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:04:56.035461 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 00:04:56.035552 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 00:04:56.035582 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 00:04:56.035652 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 00:04:56.035677 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:04:56.038683 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 00:04:56.038727 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:04:56.039155 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 00:04:56.039187 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:04:56.041022 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:04:56.041985 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 00:04:56.042034 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:04:56.059871 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 00:04:56.059990 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:04:56.064067 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 00:04:56.064120 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:04:56.067142 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 00:04:56.067174 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:04:56.076635 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 4 00:04:56.110984 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 00:04:56.111037 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 00:04:56.115927 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 00:04:56.115966 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 00:04:56.118212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 00:04:56.118250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 00:04:56.122709 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 00:04:56.124981 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 00:04:56.125032 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:04:56.127954 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 00:04:56.127996 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:04:56.135773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:04:56.135819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:04:56.142068 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 4 00:04:56.142113 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 00:04:56.142145 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 00:04:56.142367 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 00:04:56.142431 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 00:04:56.144951 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 00:04:56.145027 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 00:04:58.528104 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 00:04:58.528189 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 00:04:58.528406 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 00:04:58.528947 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 00:04:58.528987 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 00:04:58.530710 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 00:04:58.543864 systemd[1]: Switching root.
Sep 4 00:04:58.584662 systemd-journald[205]: Journal stopped
Sep 4 00:05:02.359963 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Sep 4 00:05:02.359994 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 00:05:02.360005 kernel: SELinux: policy capability open_perms=1
Sep 4 00:05:02.360013 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 00:05:02.360020 kernel: SELinux: policy capability always_check_network=0
Sep 4 00:05:02.360028 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 00:05:02.360038 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 00:05:02.360046 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 00:05:02.360053 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 00:05:02.360061 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 00:05:02.360069 kernel: audit: type=1403 audit(1756944301.302:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 00:05:02.360078 systemd[1]: Successfully loaded SELinux policy in 61.983ms.
Sep 4 00:05:02.360088 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.063ms.
Sep 4 00:05:02.360100 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 00:05:02.360109 systemd[1]: Detected virtualization microsoft.
Sep 4 00:05:02.360118 systemd[1]: Detected architecture x86-64.
Sep 4 00:05:02.360126 systemd[1]: Detected first boot.
Sep 4 00:05:02.360136 systemd[1]: Hostname set to .
Sep 4 00:05:02.360145 systemd[1]: Initializing machine ID from random generator.
Sep 4 00:05:02.360153 zram_generator::config[1186]: No configuration found.
Sep 4 00:05:02.360163 kernel: Guest personality initialized and is inactive
Sep 4 00:05:02.360171 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Sep 4 00:05:02.360179 kernel: Initialized host personality
Sep 4 00:05:02.360187 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 00:05:02.360195 systemd[1]: Populated /etc with preset unit settings.
Sep 4 00:05:02.360205 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 00:05:02.360214 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 00:05:02.360222 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 00:05:02.360232 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 00:05:02.360240 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 00:05:02.360250 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 00:05:02.360258 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 00:05:02.360268 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 00:05:02.360277 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 00:05:02.360287 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 00:05:02.360296 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 00:05:02.360304 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 00:05:02.360312 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 00:05:02.360322 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 00:05:02.360331 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 00:05:02.360342 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 00:05:02.360353 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 00:05:02.360362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 00:05:02.360371 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 00:05:02.371644 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 00:05:02.371668 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 00:05:02.371678 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 00:05:02.371688 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 00:05:02.371702 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 00:05:02.371712 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 00:05:02.371722 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 00:05:02.371732 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 00:05:02.371742 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 00:05:02.371752 systemd[1]: Reached target swap.target - Swaps.
Sep 4 00:05:02.371763 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 00:05:02.371773 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 00:05:02.371785 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 00:05:02.371795 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 00:05:02.371805 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 00:05:02.371816 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 00:05:02.371826 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 00:05:02.371837 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 00:05:02.371848 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 00:05:02.371858 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 00:05:02.371869 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:02.371879 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 00:05:02.371891 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 00:05:02.371901 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 00:05:02.371912 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 00:05:02.371924 systemd[1]: Reached target machines.target - Containers.
Sep 4 00:05:02.371934 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 00:05:02.371945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:05:02.371955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 00:05:02.371965 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 00:05:02.371976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:05:02.371986 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 00:05:02.371996 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:05:02.372008 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 00:05:02.372018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:05:02.372028 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 00:05:02.372039 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 00:05:02.372049 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 00:05:02.372059 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 00:05:02.372069 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 00:05:02.372080 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:05:02.372091 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 00:05:02.372102 kernel: fuse: init (API version 7.41)
Sep 4 00:05:02.372112 kernel: loop: module loaded
Sep 4 00:05:02.372122 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 00:05:02.372133 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 00:05:02.372143 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 00:05:02.372153 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 00:05:02.372187 systemd-journald[1286]: Collecting audit messages is disabled.
Sep 4 00:05:02.372211 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 00:05:02.372221 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 00:05:02.372231 systemd[1]: Stopped verity-setup.service.
Sep 4 00:05:02.372242 systemd-journald[1286]: Journal started
Sep 4 00:05:02.372266 systemd-journald[1286]: Runtime Journal (/run/log/journal/c48f7afff21842b5925395f40acc4c2a) is 8M, max 158.9M, 150.9M free.
Sep 4 00:05:02.019593 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 00:05:02.027799 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 00:05:02.028125 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 00:05:02.378703 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:02.381628 kernel: ACPI: bus type drm_connector registered
Sep 4 00:05:02.381667 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 00:05:02.383887 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 00:05:02.386790 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 00:05:02.389270 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 00:05:02.393191 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 00:05:02.395868 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 00:05:02.398191 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 00:05:02.400684 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 00:05:02.403021 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 00:05:02.406285 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 00:05:02.406463 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 00:05:02.409374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:05:02.409557 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:05:02.412310 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 00:05:02.412438 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 00:05:02.415187 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:05:02.415309 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:05:02.418416 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 00:05:02.418589 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 00:05:02.421324 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:05:02.421487 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:05:02.424216 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 00:05:02.426893 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 00:05:02.430101 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 00:05:02.433367 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 00:05:02.440295 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 00:05:02.444834 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 00:05:02.448344 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 00:05:02.460581 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 00:05:02.462300 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 00:05:02.462391 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 00:05:02.466223 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 00:05:02.470721 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 00:05:02.472677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:05:02.473293 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 00:05:02.477588 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 00:05:02.479383 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 00:05:02.481624 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 00:05:02.484050 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 00:05:02.486013 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:05:02.496341 systemd-journald[1286]: Time spent on flushing to /var/log/journal/c48f7afff21842b5925395f40acc4c2a is 29.200ms for 988 entries.
Sep 4 00:05:02.496341 systemd-journald[1286]: System Journal (/var/log/journal/c48f7afff21842b5925395f40acc4c2a) is 8M, max 2.6G, 2.6G free.
Sep 4 00:05:02.546247 systemd-journald[1286]: Received client request to flush runtime journal.
Sep 4 00:05:02.546276 kernel: loop0: detected capacity change from 0 to 221472
Sep 4 00:05:02.491715 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 00:05:02.497039 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 00:05:02.501455 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 00:05:02.505791 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 00:05:02.508631 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 00:05:02.515525 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 00:05:02.524705 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 00:05:02.548851 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 00:05:02.556635 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 00:05:02.866332 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:05:03.281470 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 00:05:03.285739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 00:05:03.476755 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Sep 4 00:05:03.476769 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Sep 4 00:05:03.479740 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 00:05:03.836621 kernel: loop1: detected capacity change from 0 to 28504
Sep 4 00:05:03.869022 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 00:05:03.869433 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 00:05:03.936768 kernel: loop2: detected capacity change from 0 to 146240
Sep 4 00:05:04.042640 kernel: loop3: detected capacity change from 0 to 113872
Sep 4 00:05:04.121960 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 00:05:04.124975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 00:05:04.144764 kernel: loop4: detected capacity change from 0 to 221472
Sep 4 00:05:04.152773 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Sep 4 00:05:04.154650 kernel: loop5: detected capacity change from 0 to 28504
Sep 4 00:05:04.162617 kernel: loop6: detected capacity change from 0 to 146240
Sep 4 00:05:04.174631 kernel: loop7: detected capacity change from 0 to 113872
Sep 4 00:05:04.181910 (sd-merge)[1353]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 4 00:05:04.183092 (sd-merge)[1353]: Merged extensions into '/usr'.
Sep 4 00:05:04.186355 systemd[1]: Reload requested from client PID 1328 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 00:05:04.186366 systemd[1]: Reloading...
Sep 4 00:05:04.269630 zram_generator::config[1405]: No configuration found.
Sep 4 00:05:04.387624 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 00:05:04.419631 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#272 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 4 00:05:04.423327 kernel: hv_vmbus: registering driver hyperv_fb
Sep 4 00:05:04.425658 kernel: hv_vmbus: registering driver hv_balloon
Sep 4 00:05:04.434154 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 4 00:05:04.435631 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 4 00:05:04.438143 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 4 00:05:04.440648 kernel: Console: switching to colour dummy device 80x25
Sep 4 00:05:04.446618 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 00:05:04.507365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:05:04.679368 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 00:05:04.679697 systemd[1]: Reloading finished in 493 ms.
Sep 4 00:05:04.693014 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:05:04.698536 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 00:05:04.768051 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Sep 4 00:05:04.779391 systemd[1]: Starting ensure-sysext.service...
Sep 4 00:05:04.784829 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 00:05:04.789796 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Sep 4 00:05:04.789964 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 00:05:04.799420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 00:05:04.803731 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:05:04.817762 systemd[1]: Reload requested from client PID 1512 ('systemctl') (unit ensure-sysext.service)...
Sep 4 00:05:04.817774 systemd[1]: Reloading...
Sep 4 00:05:04.832408 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 4 00:05:04.832430 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 4 00:05:04.832597 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 00:05:04.832767 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 00:05:04.833164 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 00:05:04.833309 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Sep 4 00:05:04.833351 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Sep 4 00:05:04.836101 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 00:05:04.836111 systemd-tmpfiles[1515]: Skipping /boot
Sep 4 00:05:04.841563 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 00:05:04.841573 systemd-tmpfiles[1515]: Skipping /boot
Sep 4 00:05:04.881697 zram_generator::config[1544]: No configuration found.
Sep 4 00:05:04.954528 ldconfig[1323]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 00:05:04.958252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:05:05.035195 systemd[1]: Reloading finished in 217 ms.
Sep 4 00:05:05.055974 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 00:05:05.067673 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 00:05:05.071916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 00:05:05.073804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 00:05:05.073934 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:05:05.082360 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 00:05:05.092283 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 00:05:05.094982 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 00:05:05.102687 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 00:05:05.108583 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 00:05:05.118722 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 00:05:05.122204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 00:05:05.131965 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:05.132152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:05:05.134902 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 00:05:05.140549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 00:05:05.147098 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 00:05:05.149477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:05:05.149653 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:05:05.149790 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:05.154703 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 00:05:05.160418 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 00:05:05.163544 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 00:05:05.172001 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:05.172170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:05:05.172306 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:05:05.172382 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:05:05.172462 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:05.176244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 00:05:05.177106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 00:05:05.180451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:05.181729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 00:05:05.183949 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 00:05:05.189395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:05:05.189989 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:05:05.190094 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 00:05:05.190216 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 00:05:05.192588 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:05:05.194448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 00:05:05.195400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 00:05:05.198169 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:05:05.198513 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:05:05.202019 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 00:05:05.202202 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 00:05:05.205415 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 00:05:05.209287 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 00:05:05.213708 augenrules[1647]: No rules
Sep 4 00:05:05.214966 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 00:05:05.215379 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 00:05:05.220096 systemd[1]: Finished ensure-sysext.service.
Sep 4 00:05:05.224894 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 00:05:05.247183 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 00:05:05.247369 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 00:05:05.280192 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 00:05:05.282803 systemd-networkd[1514]: lo: Link UP
Sep 4 00:05:05.282810 systemd-networkd[1514]: lo: Gained carrier
Sep 4 00:05:05.285280 systemd-networkd[1514]: Enumeration completed
Sep 4 00:05:05.285424 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 00:05:05.288427 systemd-networkd[1514]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:05:05.288435 systemd-networkd[1514]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 00:05:05.288732 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 00:05:05.290529 systemd-resolved[1610]: Positive Trust Anchors:
Sep 4 00:05:05.290738 systemd-resolved[1610]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 00:05:05.290769 systemd-resolved[1610]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 00:05:05.292404 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 00:05:05.296672 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Sep 4 00:05:05.298447 systemd-resolved[1610]: Using system hostname 'ci-4372.1.0-n-f08c63113b'.
Sep 4 00:05:05.300667 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 4 00:05:05.300959 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e523462af eth0: Data path switched to VF: enP30832s1
Sep 4 00:05:05.303750 systemd-networkd[1514]: enP30832s1: Link UP
Sep 4 00:05:05.303824 systemd-networkd[1514]: eth0: Link UP
Sep 4 00:05:05.303827 systemd-networkd[1514]: eth0: Gained carrier
Sep 4 00:05:05.303841 systemd-networkd[1514]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:05:05.304530 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 00:05:05.306487 systemd[1]: Reached target network.target - Network.
Sep 4 00:05:05.307747 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:05:05.309431 systemd-networkd[1514]: enP30832s1: Gained carrier
Sep 4 00:05:05.310762 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 00:05:05.313056 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 00:05:05.315573 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 00:05:05.318699 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 4 00:05:05.320236 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 00:05:05.322696 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 00:05:05.325652 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 00:05:05.326642 systemd-networkd[1514]: eth0: DHCPv4 address 10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 4 00:05:05.327708 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 00:05:05.327737 systemd[1]: Reached target paths.target - Path Units.
Sep 4 00:05:05.329040 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 00:05:05.332138 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 00:05:05.336284 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 00:05:05.340150 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 00:05:05.343755 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 00:05:05.346657 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 00:05:05.358951 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 00:05:05.361908 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 00:05:05.364058 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 00:05:05.366749 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 00:05:05.370442 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 00:05:05.371805 systemd[1]: Reached target basic.target - Basic System.
Sep 4 00:05:05.373068 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 00:05:05.373087 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 00:05:05.374743 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 4 00:05:05.376600 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 00:05:05.381261 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 00:05:05.387044 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 00:05:05.390728 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 00:05:05.394760 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 00:05:05.400720 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 00:05:05.402940 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 00:05:05.404846 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 4 00:05:05.406561 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Sep 4 00:05:05.408278 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 4 00:05:05.410441 jq[1678]: false
Sep 4 00:05:05.410155 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 4 00:05:05.412932 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 00:05:05.416833 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 00:05:05.428351 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 00:05:05.431894 KVP[1683]: KVP starting; pid is:1683
Sep 4 00:05:05.432412 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 00:05:05.439789 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Refreshing passwd entry cache
Sep 4 00:05:05.436533 oslogin_cache_refresh[1682]: Refreshing passwd entry cache
Sep 4 00:05:05.440207 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 00:05:05.441940 KVP[1683]: KVP LIC Version: 3.1
Sep 4 00:05:05.445924 kernel: hv_utils: KVP IC version 4.0
Sep 4 00:05:05.444299 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 00:05:05.447065 extend-filesystems[1681]: Found /dev/nvme0n1p6
Sep 4 00:05:05.448862 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 00:05:05.449684 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 00:05:05.456256 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 00:05:05.459563 extend-filesystems[1681]: Found /dev/nvme0n1p9
Sep 4 00:05:05.463065 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Failure getting users, quitting
Sep 4 00:05:05.463065 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 00:05:05.463065 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Refreshing group entry cache
Sep 4 00:05:05.462578 oslogin_cache_refresh[1682]: Failure getting users, quitting
Sep 4 00:05:05.462592 oslogin_cache_refresh[1682]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 00:05:05.462633 oslogin_cache_refresh[1682]: Refreshing group entry cache
Sep 4 00:05:05.464108 extend-filesystems[1681]: Checking size of /dev/nvme0n1p9
Sep 4 00:05:05.468722 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 00:05:05.471625 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 00:05:05.471778 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 00:05:05.474969 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 00:05:05.475124 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 00:05:05.481190 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Failure getting groups, quitting
Sep 4 00:05:05.481190 google_oslogin_nss_cache[1682]: oslogin_cache_refresh[1682]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 00:05:05.480410 oslogin_cache_refresh[1682]: Failure getting groups, quitting
Sep 4 00:05:05.480419 oslogin_cache_refresh[1682]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 00:05:05.481861 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 4 00:05:05.486777 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 4 00:05:05.495936 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 00:05:05.497282 extend-filesystems[1681]: Old size kept for /dev/nvme0n1p9
Sep 4 00:05:05.499923 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 00:05:05.502941 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 00:05:05.503090 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 00:05:05.511537 update_engine[1695]: I20250904 00:05:05.509292 1695 main.cc:92] Flatcar Update Engine starting
Sep 4 00:05:05.511721 jq[1696]: true
Sep 4 00:05:05.512093 (chronyd)[1672]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Sep 4 00:05:05.521269 chronyd[1726]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Sep 4 00:05:05.525053 chronyd[1726]: Timezone right/UTC failed leap second check, ignoring
Sep 4 00:05:05.526054 systemd[1]: Started chronyd.service - NTP client/server.
Sep 4 00:05:05.525178 chronyd[1726]: Loaded seccomp filter (level 2)
Sep 4 00:05:05.531878 tar[1706]: linux-amd64/helm
Sep 4 00:05:05.539433 (ntainerd)[1717]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 00:05:05.546764 jq[1722]: true
Sep 4 00:05:05.580485 dbus-daemon[1675]: [system] SELinux support is enabled
Sep 4 00:05:05.580691 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 00:05:05.586151 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 00:05:05.586436 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 00:05:05.590368 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 00:05:05.590385 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 00:05:05.601935 update_engine[1695]: I20250904 00:05:05.601897 1695 update_check_scheduler.cc:74] Next update check in 6m40s
Sep 4 00:05:05.602055 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 00:05:05.605319 systemd-logind[1692]: New seat seat0.
Sep 4 00:05:05.608598 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 00:05:05.616413 systemd-logind[1692]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 00:05:05.616539 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 00:05:05.663279 bash[1755]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 00:05:05.667398 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 00:05:05.670765 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 00:05:05.700630 coreos-metadata[1674]: Sep 04 00:05:05.700 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 4 00:05:05.705920 coreos-metadata[1674]: Sep 04 00:05:05.705 INFO Fetch successful
Sep 4 00:05:05.705920 coreos-metadata[1674]: Sep 04 00:05:05.705 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 4 00:05:05.784154 coreos-metadata[1674]: Sep 04 00:05:05.784 INFO Fetch successful
Sep 4 00:05:05.785735 coreos-metadata[1674]: Sep 04 00:05:05.785 INFO Fetching http://168.63.129.16/machine/de5da391-b4d8-4fa6-9f21-1b9e9fdf3614/ca0f9f7e%2Df54f%2D44e7%2D9a36%2D91b2b8b6544a.%5Fci%2D4372.1.0%2Dn%2Df08c63113b?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 4 00:05:05.808662 locksmithd[1744]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 00:05:05.825260 coreos-metadata[1674]: Sep 04 00:05:05.824 INFO Fetch successful
Sep 4 00:05:05.825260 coreos-metadata[1674]: Sep 04 00:05:05.825 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 4 00:05:05.834315 coreos-metadata[1674]: Sep 04 00:05:05.834 INFO Fetch successful
Sep 4 00:05:05.852298 sshd_keygen[1708]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 00:05:05.878931 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 00:05:05.881490 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 00:05:05.888515 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 00:05:05.892388 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 00:05:05.913260 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 00:05:05.913432 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 00:05:05.917845 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 00:05:05.923520 containerd[1717]: time="2025-09-04T00:05:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 4 00:05:05.926582 containerd[1717]: time="2025-09-04T00:05:05.926554419Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 4 00:05:05.942734 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 00:05:05.945412 containerd[1717]: time="2025-09-04T00:05:05.945386833Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.169µs"
Sep 4 00:05:05.945412 containerd[1717]: time="2025-09-04T00:05:05.945411745Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 00:05:05.945488 containerd[1717]: time="2025-09-04T00:05:05.945426822Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 00:05:05.945539 containerd[1717]: time="2025-09-04T00:05:05.945529100Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 00:05:05.945556 containerd[1717]: time="2025-09-04T00:05:05.945542740Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 00:05:05.945576 containerd[1717]: time="2025-09-04T00:05:05.945561018Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 00:05:05.945625 containerd[1717]: time="2025-09-04T00:05:05.945596680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 00:05:05.945649 containerd[1717]: time="2025-09-04T00:05:05.945631688Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 00:05:05.945804 containerd[1717]: time="2025-09-04T00:05:05.945791056Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 00:05:05.945828 containerd[1717]: time="2025-09-04T00:05:05.945805204Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 00:05:05.945828 containerd[1717]: time="2025-09-04T00:05:05.945815261Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 00:05:05.945828 containerd[1717]: time="2025-09-04T00:05:05.945822698Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 00:05:05.945882 containerd[1717]: time="2025-09-04T00:05:05.945872696Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 00:05:05.946016 containerd[1717]: time="2025-09-04T00:05:05.946006261Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 00:05:05.946041 containerd[1717]: time="2025-09-04T00:05:05.946028325Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 00:05:05.946058 containerd[1717]: time="2025-09-04T00:05:05.946036581Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 00:05:05.946074 containerd[1717]: time="2025-09-04T00:05:05.946064887Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 4 00:05:05.946248 containerd[1717]: time="2025-09-04T00:05:05.946238309Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 00:05:05.946289 containerd[1717]: time="2025-09-04T00:05:05.946279090Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 00:05:05.949544 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 00:05:05.954911 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 00:05:05.957218 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 00:05:05.964450 containerd[1717]: time="2025-09-04T00:05:05.964423394Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 4 00:05:05.964504 containerd[1717]: time="2025-09-04T00:05:05.964468381Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 4 00:05:05.964504 containerd[1717]: time="2025-09-04T00:05:05.964481021Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 4 00:05:05.964504 containerd[1717]: time="2025-09-04T00:05:05.964490181Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 4 00:05:05.964504 containerd[1717]: time="2025-09-04T00:05:05.964502592Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 4 00:05:05.964584 containerd[1717]: time="2025-09-04T00:05:05.964511209Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 00:05:05.964584 containerd[1717]: time="2025-09-04T00:05:05.964521379Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 4 00:05:05.964584 containerd[1717]: time="2025-09-04T00:05:05.964535505Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 4 00:05:05.964584 containerd[1717]: time="2025-09-04T00:05:05.964546252Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 4 00:05:05.964584 containerd[1717]: time="2025-09-04T00:05:05.964555790Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 4 00:05:05.964584 containerd[1717]: time="2025-09-04T00:05:05.964563699Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 4 00:05:05.964584 containerd[1717]: time="2025-09-04T00:05:05.964574379Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 4 00:05:05.964706 containerd[1717]: time="2025-09-04T00:05:05.964676039Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 4 00:05:05.964706 containerd[1717]: time="2025-09-04T00:05:05.964691039Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 4 00:05:05.964706 containerd[1717]: time="2025-09-04T00:05:05.964702689Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 4 00:05:05.964759 containerd[1717]: time="2025-09-04T00:05:05.964712047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 4 00:05:05.964759 containerd[1717]: time="2025-09-04T00:05:05.964720791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 00:05:05.964759 containerd[1717]: time="2025-09-04T00:05:05.964729110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 4 00:05:05.964759 containerd[1717]: time="2025-09-04T00:05:05.964737974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 4 00:05:05.964759 containerd[1717]: time="2025-09-04T00:05:05.964746030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 4 00:05:05.964759 containerd[1717]: time="2025-09-04T00:05:05.964755015Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 4 00:05:05.964850 containerd[1717]: time="2025-09-04T00:05:05.964763291Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 4 00:05:05.964850 containerd[1717]: time="2025-09-04T00:05:05.964771923Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 00:05:05.965072 containerd[1717]: time="2025-09-04T00:05:05.964989561Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 00:05:05.965072 containerd[1717]: time="2025-09-04T00:05:05.965007187Z" level=info msg="Start snapshots syncer"
Sep 4 00:05:05.965072 containerd[1717]: time="2025-09-04T00:05:05.965031775Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 00:05:05.965811 containerd[1717]: time="2025-09-04T00:05:05.965400654Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 00:05:05.965811 containerd[1717]: time="2025-09-04T00:05:05.965461792Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965640973Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965735302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965760025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965773226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965788421Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965802205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965812864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965825641Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965849757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965864284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965876632Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965909917Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965925671Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:05:05.965946 containerd[1717]: time="2025-09-04T00:05:05.965937914Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:05:05.966171 containerd[1717]: time="2025-09-04T00:05:05.965951296Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:05:05.966171 containerd[1717]: time="2025-09-04T00:05:05.965959506Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 4 00:05:05.966171 containerd[1717]: time="2025-09-04T00:05:05.965971414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 4 00:05:05.966171 containerd[1717]: time="2025-09-04T00:05:05.965982978Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 4 00:05:05.966171 containerd[1717]: time="2025-09-04T00:05:05.965996727Z" level=info msg="runtime interface created" Sep 4 00:05:05.966171 containerd[1717]: time="2025-09-04T00:05:05.966004388Z" level=info msg="created NRI interface" Sep 4 00:05:05.966171 containerd[1717]: time="2025-09-04T00:05:05.966011859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 4 00:05:05.966171 containerd[1717]: time="2025-09-04T00:05:05.966026319Z" level=info msg="Connect containerd service" Sep 4 00:05:05.966171 containerd[1717]: time="2025-09-04T00:05:05.966053717Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 00:05:05.968636 containerd[1717]: 
time="2025-09-04T00:05:05.968019664Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 00:05:06.111653 tar[1706]: linux-amd64/LICENSE Sep 4 00:05:06.111653 tar[1706]: linux-amd64/README.md Sep 4 00:05:06.123182 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 00:05:06.188388 containerd[1717]: time="2025-09-04T00:05:06.188314017Z" level=info msg="Start subscribing containerd event" Sep 4 00:05:06.188388 containerd[1717]: time="2025-09-04T00:05:06.188347800Z" level=info msg="Start recovering state" Sep 4 00:05:06.188548 containerd[1717]: time="2025-09-04T00:05:06.188464610Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 00:05:06.188548 containerd[1717]: time="2025-09-04T00:05:06.188499387Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 00:05:06.188548 containerd[1717]: time="2025-09-04T00:05:06.188522623Z" level=info msg="Start event monitor" Sep 4 00:05:06.188548 containerd[1717]: time="2025-09-04T00:05:06.188533214Z" level=info msg="Start cni network conf syncer for default" Sep 4 00:05:06.188732 containerd[1717]: time="2025-09-04T00:05:06.188538993Z" level=info msg="Start streaming server" Sep 4 00:05:06.188732 containerd[1717]: time="2025-09-04T00:05:06.188672185Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 00:05:06.188732 containerd[1717]: time="2025-09-04T00:05:06.188677838Z" level=info msg="runtime interface starting up..." Sep 4 00:05:06.188732 containerd[1717]: time="2025-09-04T00:05:06.188683079Z" level=info msg="starting plugins..." 
Sep 4 00:05:06.188732 containerd[1717]: time="2025-09-04T00:05:06.188692490Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 4 00:05:06.191382 containerd[1717]: time="2025-09-04T00:05:06.188877328Z" level=info msg="containerd successfully booted in 0.265629s"
Sep 4 00:05:06.188944 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 00:05:07.129758 systemd-networkd[1514]: eth0: Gained IPv6LL
Sep 4 00:05:07.131402 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 00:05:07.133458 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 00:05:07.135958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 00:05:07.145330 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 00:05:07.148504 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 4 00:05:07.169703 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 4 00:05:07.180801 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 00:05:07.775632 waagent[1821]: 2025-09-04T00:05:07.774997Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Sep 4 00:05:07.777825 waagent[1821]: 2025-09-04T00:05:07.777673Z INFO Daemon Daemon OS: flatcar 4372.1.0
Sep 4 00:05:07.780696 waagent[1821]: 2025-09-04T00:05:07.780654Z INFO Daemon Daemon Python: 3.11.12
Sep 4 00:05:07.784269 waagent[1821]: 2025-09-04T00:05:07.783833Z INFO Daemon Daemon Run daemon
Sep 4 00:05:07.786766 waagent[1821]: 2025-09-04T00:05:07.786730Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.1.0'
Sep 4 00:05:07.790690 waagent[1821]: 2025-09-04T00:05:07.790649Z INFO Daemon Daemon Using waagent for provisioning
Sep 4 00:05:07.793865 waagent[1821]: 2025-09-04T00:05:07.793825Z INFO Daemon Daemon Activate resource disk
Sep 4 00:05:07.795091 waagent[1821]: 2025-09-04T00:05:07.795060Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 4 00:05:07.798432 waagent[1821]: 2025-09-04T00:05:07.798400Z INFO Daemon Daemon Found device: None
Sep 4 00:05:07.800994 waagent[1821]: 2025-09-04T00:05:07.800659Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 4 00:05:07.803690 waagent[1821]: 2025-09-04T00:05:07.803657Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 4 00:05:07.806598 waagent[1821]: 2025-09-04T00:05:07.806565Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 4 00:05:07.808160 waagent[1821]: 2025-09-04T00:05:07.808132Z INFO Daemon Daemon Running default provisioning handler
Sep 4 00:05:07.816043 waagent[1821]: 2025-09-04T00:05:07.815999Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Sep 4 00:05:07.819930 waagent[1821]: 2025-09-04T00:05:07.819893Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 4 00:05:07.824707 waagent[1821]: 2025-09-04T00:05:07.824662Z INFO Daemon Daemon cloud-init is enabled: False
Sep 4 00:05:07.828152 waagent[1821]: 2025-09-04T00:05:07.826433Z INFO Daemon Daemon Copying ovf-env.xml
Sep 4 00:05:07.862624 waagent[1821]: 2025-09-04T00:05:07.861888Z INFO Daemon Daemon Successfully mounted dvd
Sep 4 00:05:07.874702 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 4 00:05:07.876584 waagent[1821]: 2025-09-04T00:05:07.876545Z INFO Daemon Daemon Detect protocol endpoint
Sep 4 00:05:07.877860 waagent[1821]: 2025-09-04T00:05:07.877831Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 4 00:05:07.879511 waagent[1821]: 2025-09-04T00:05:07.879259Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 4 00:05:07.881964 waagent[1821]: 2025-09-04T00:05:07.881920Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 4 00:05:07.885090 waagent[1821]: 2025-09-04T00:05:07.883949Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 4 00:05:07.885660 waagent[1821]: 2025-09-04T00:05:07.885619Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 4 00:05:07.893305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 00:05:07.895272 waagent[1821]: 2025-09-04T00:05:07.893892Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 4 00:05:07.895601 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 00:05:07.899989 systemd[1]: Startup finished in 2.697s (kernel) + 11.179s (initrd) + 6.658s (userspace) = 20.535s.
Sep 4 00:05:07.900578 waagent[1821]: 2025-09-04T00:05:07.900553Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 4 00:05:07.903331 waagent[1821]: 2025-09-04T00:05:07.903290Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 4 00:05:07.907841 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 00:05:07.987257 login[1789]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 4 00:05:07.993120 login[1790]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 4 00:05:07.995310 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 00:05:07.996952 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 00:05:08.008099 systemd-logind[1692]: New session 1 of user core.
Sep 4 00:05:08.016421 systemd-logind[1692]: New session 2 of user core.
Sep 4 00:05:08.024581 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 00:05:08.026980 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 00:05:08.039598 (systemd)[1850]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 00:05:08.043018 systemd-logind[1692]: New session c1 of user core.
Sep 4 00:05:08.055622 waagent[1821]: 2025-09-04T00:05:08.052329Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 4 00:05:08.055622 waagent[1821]: 2025-09-04T00:05:08.052486Z INFO Daemon Daemon Forcing an update of the goal state.
Sep 4 00:05:08.059383 waagent[1821]: 2025-09-04T00:05:08.057916Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 4 00:05:08.073696 waagent[1821]: 2025-09-04T00:05:08.073673Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Sep 4 00:05:08.074197 waagent[1821]: 2025-09-04T00:05:08.074172Z INFO Daemon
Sep 4 00:05:08.074529 waagent[1821]: 2025-09-04T00:05:08.074510Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4e275b3b-c5bc-4e8a-8a40-57fae774d8b0 eTag: 7441546150430752559 source: Fabric]
Sep 4 00:05:08.075069 waagent[1821]: 2025-09-04T00:05:08.075050Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Sep 4 00:05:08.075349 waagent[1821]: 2025-09-04T00:05:08.075334Z INFO Daemon
Sep 4 00:05:08.075516 waagent[1821]: 2025-09-04T00:05:08.075504Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Sep 4 00:05:08.087632 waagent[1821]: 2025-09-04T00:05:08.085436Z INFO Daemon Daemon Downloading artifacts profile blob
Sep 4 00:05:08.203656 systemd[1850]: Queued start job for default target default.target.
Sep 4 00:05:08.208656 systemd[1850]: Created slice app.slice - User Application Slice.
Sep 4 00:05:08.208685 systemd[1850]: Reached target paths.target - Paths.
Sep 4 00:05:08.208760 systemd[1850]: Reached target timers.target - Timers.
Sep 4 00:05:08.209646 systemd[1850]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 00:05:08.218695 systemd[1850]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 00:05:08.219434 systemd[1850]: Reached target sockets.target - Sockets.
Sep 4 00:05:08.219517 systemd[1850]: Reached target basic.target - Basic System.
Sep 4 00:05:08.219565 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 00:05:08.220366 systemd[1850]: Reached target default.target - Main User Target.
Sep 4 00:05:08.220387 systemd[1850]: Startup finished in 166ms.
Sep 4 00:05:08.224932 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 00:05:08.225894 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 00:05:08.260036 waagent[1821]: 2025-09-04T00:05:08.260003Z INFO Daemon Downloaded certificate {'thumbprint': '983FC915F4C0979D72FDE357AF489DFA14E6556E', 'hasPrivateKey': True}
Sep 4 00:05:08.260489 waagent[1821]: 2025-09-04T00:05:08.260460Z INFO Daemon Fetch goal state completed
Sep 4 00:05:08.307929 waagent[1821]: 2025-09-04T00:05:08.307043Z INFO Daemon Daemon Starting provisioning
Sep 4 00:05:08.307929 waagent[1821]: 2025-09-04T00:05:08.307197Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 4 00:05:08.307929 waagent[1821]: 2025-09-04T00:05:08.307627Z INFO Daemon Daemon Set hostname [ci-4372.1.0-n-f08c63113b]
Sep 4 00:05:08.309177 waagent[1821]: 2025-09-04T00:05:08.309146Z INFO Daemon Daemon Publish hostname [ci-4372.1.0-n-f08c63113b]
Sep 4 00:05:08.309474 waagent[1821]: 2025-09-04T00:05:08.309452Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 4 00:05:08.309694 waagent[1821]: 2025-09-04T00:05:08.309675Z INFO Daemon Daemon Primary interface is [eth0]
Sep 4 00:05:08.317059 systemd-networkd[1514]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 00:05:08.317551 systemd-networkd[1514]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 00:05:08.317741 systemd-networkd[1514]: eth0: DHCP lease lost
Sep 4 00:05:08.317986 waagent[1821]: 2025-09-04T00:05:08.317952Z INFO Daemon Daemon Create user account if not exists
Sep 4 00:05:08.318184 waagent[1821]: 2025-09-04T00:05:08.318163Z INFO Daemon Daemon User core already exists, skip useradd
Sep 4 00:05:08.318418 waagent[1821]: 2025-09-04T00:05:08.318402Z INFO Daemon Daemon Configure sudoer
Sep 4 00:05:08.325745 waagent[1821]: 2025-09-04T00:05:08.325697Z INFO Daemon Daemon Configure sshd
Sep 4 00:05:08.331391 waagent[1821]: 2025-09-04T00:05:08.331345Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Sep 4 00:05:08.336928 waagent[1821]: 2025-09-04T00:05:08.331488Z INFO Daemon Daemon Deploy ssh public key.
Sep 4 00:05:08.346652 systemd-networkd[1514]: eth0: DHCPv4 address 10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 4 00:05:08.445029 kubelet[1842]: E0904 00:05:08.445000 1842 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 00:05:08.446492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 00:05:08.446626 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 00:05:08.446907 systemd[1]: kubelet.service: Consumed 795ms CPU time, 265M memory peak.
Sep 4 00:05:09.408309 waagent[1821]: 2025-09-04T00:05:09.408276Z INFO Daemon Daemon Provisioning complete
Sep 4 00:05:09.415929 waagent[1821]: 2025-09-04T00:05:09.415907Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 4 00:05:09.417513 waagent[1821]: 2025-09-04T00:05:09.417488Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 4 00:05:09.419655 waagent[1821]: 2025-09-04T00:05:09.419632Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Sep 4 00:05:09.506009 waagent[1898]: 2025-09-04T00:05:09.505962Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Sep 4 00:05:09.506179 waagent[1898]: 2025-09-04T00:05:09.506042Z INFO ExtHandler ExtHandler OS: flatcar 4372.1.0
Sep 4 00:05:09.506179 waagent[1898]: 2025-09-04T00:05:09.506080Z INFO ExtHandler ExtHandler Python: 3.11.12
Sep 4 00:05:09.506179 waagent[1898]: 2025-09-04T00:05:09.506116Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Sep 4 00:05:09.517131 waagent[1898]: 2025-09-04T00:05:09.517090Z INFO ExtHandler ExtHandler Distro: flatcar-4372.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Sep 4 00:05:09.517252 waagent[1898]: 2025-09-04T00:05:09.517230Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 4 00:05:09.517308 waagent[1898]: 2025-09-04T00:05:09.517275Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 4 00:05:09.525795 waagent[1898]: 2025-09-04T00:05:09.525745Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 4 00:05:09.540801 waagent[1898]: 2025-09-04T00:05:09.540775Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Sep 4 00:05:09.541096 waagent[1898]: 2025-09-04T00:05:09.541070Z INFO ExtHandler
Sep 4 00:05:09.541134 waagent[1898]: 2025-09-04T00:05:09.541114Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a43bbb9f-ccad-4bbf-a5e0-ab1b120a9cc6 eTag: 7441546150430752559 source: Fabric]
Sep 4 00:05:09.541310 waagent[1898]: 2025-09-04T00:05:09.541289Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 4 00:05:09.541592 waagent[1898]: 2025-09-04T00:05:09.541567Z INFO ExtHandler
Sep 4 00:05:09.541639 waagent[1898]: 2025-09-04T00:05:09.541624Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 4 00:05:09.545231 waagent[1898]: 2025-09-04T00:05:09.545202Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 4 00:05:09.623380 waagent[1898]: 2025-09-04T00:05:09.623337Z INFO ExtHandler Downloaded certificate {'thumbprint': '983FC915F4C0979D72FDE357AF489DFA14E6556E', 'hasPrivateKey': True}
Sep 4 00:05:09.623686 waagent[1898]: 2025-09-04T00:05:09.623660Z INFO ExtHandler Fetch goal state completed
Sep 4 00:05:09.631936 waagent[1898]: 2025-09-04T00:05:09.631896Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025)
Sep 4 00:05:09.635580 waagent[1898]: 2025-09-04T00:05:09.635533Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1898
Sep 4 00:05:09.635715 waagent[1898]: 2025-09-04T00:05:09.635691Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Sep 4 00:05:09.635925 waagent[1898]: 2025-09-04T00:05:09.635906Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Sep 4 00:05:09.636774 waagent[1898]: 2025-09-04T00:05:09.636747Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk']
Sep 4 00:05:09.637027 waagent[1898]: 2025-09-04T00:05:09.637006Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Sep 4 00:05:09.637117 waagent[1898]: 2025-09-04T00:05:09.637101Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Sep 4 00:05:09.637444 waagent[1898]: 2025-09-04T00:05:09.637423Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 4 00:05:09.643118 waagent[1898]: 2025-09-04T00:05:09.643096Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 4 00:05:09.643226 waagent[1898]: 2025-09-04T00:05:09.643208Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 4 00:05:09.647622 waagent[1898]: 2025-09-04T00:05:09.647316Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 4 00:05:09.651760 systemd[1]: Reload requested from client PID 1913 ('systemctl') (unit waagent.service)...
Sep 4 00:05:09.651771 systemd[1]: Reloading...
Sep 4 00:05:09.719701 zram_generator::config[1951]: No configuration found.
Sep 4 00:05:09.786746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 00:05:09.868112 systemd[1]: Reloading finished in 216 ms.
Sep 4 00:05:09.880812 waagent[1898]: 2025-09-04T00:05:09.878414Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Sep 4 00:05:09.880812 waagent[1898]: 2025-09-04T00:05:09.878514Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Sep 4 00:05:09.886621 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001
Sep 4 00:05:09.975454 waagent[1898]: 2025-09-04T00:05:09.975411Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Sep 4 00:05:09.975698 waagent[1898]: 2025-09-04T00:05:09.975675Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Sep 4 00:05:09.976398 waagent[1898]: 2025-09-04T00:05:09.976265Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 4 00:05:09.976398 waagent[1898]: 2025-09-04T00:05:09.976314Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 4 00:05:09.976544 waagent[1898]: 2025-09-04T00:05:09.976509Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 4 00:05:09.976845 waagent[1898]: 2025-09-04T00:05:09.976824Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 4 00:05:09.976881 waagent[1898]: 2025-09-04T00:05:09.976858Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 4 00:05:09.977073 waagent[1898]: 2025-09-04T00:05:09.977025Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 4 00:05:09.977233 waagent[1898]: 2025-09-04T00:05:09.977211Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 4 00:05:09.977233 waagent[1898]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 4 00:05:09.977233 waagent[1898]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Sep 4 00:05:09.977233 waagent[1898]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 4 00:05:09.977233 waagent[1898]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 4 00:05:09.977233 waagent[1898]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 4 00:05:09.977233 waagent[1898]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 4 00:05:09.977473 waagent[1898]: 2025-09-04T00:05:09.977441Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 4 00:05:09.977501 waagent[1898]: 2025-09-04T00:05:09.977481Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 4 00:05:09.977593 waagent[1898]: 2025-09-04T00:05:09.977574Z INFO EnvHandler ExtHandler Configure routes
Sep 4 00:05:09.977670 waagent[1898]: 2025-09-04T00:05:09.977641Z INFO EnvHandler ExtHandler Gateway:None
Sep 4 00:05:09.977700 waagent[1898]: 2025-09-04T00:05:09.977677Z INFO EnvHandler ExtHandler Routes:None
Sep 4 00:05:09.977841 waagent[1898]: 2025-09-04T00:05:09.977825Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 4 00:05:09.978100 waagent[1898]: 2025-09-04T00:05:09.978053Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 4 00:05:09.978129 waagent[1898]: 2025-09-04T00:05:09.978106Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 4 00:05:09.978376 waagent[1898]: 2025-09-04T00:05:09.978355Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 4 00:05:09.988150 waagent[1898]: 2025-09-04T00:05:09.988118Z INFO ExtHandler ExtHandler
Sep 4 00:05:09.988200 waagent[1898]: 2025-09-04T00:05:09.988178Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 14237a77-c539-4a01-bfd3-9b7f943eac66 correlation 7e512696-d61f-48e1-b38d-047b7f71a74c created: 2025-09-04T00:04:35.007210Z]
Sep 4 00:05:09.988441 waagent[1898]: 2025-09-04T00:05:09.988418Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 4 00:05:09.988884 waagent[1898]: 2025-09-04T00:05:09.988860Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Sep 4 00:05:09.993234 waagent[1898]: 2025-09-04T00:05:09.993167Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 4 00:05:09.993234 waagent[1898]: Executing ['ip', '-a', '-o', 'link']:
Sep 4 00:05:09.993234 waagent[1898]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 4 00:05:09.993234 waagent[1898]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:62:af brd ff:ff:ff:ff:ff:ff\ alias Network Device
Sep 4 00:05:09.993234 waagent[1898]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:62:af brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Sep 4 00:05:09.993234 waagent[1898]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 4 00:05:09.993234 waagent[1898]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 4 00:05:09.993234 waagent[1898]: 2: eth0 inet 10.200.8.18/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 4 00:05:09.993234 waagent[1898]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 4 00:05:09.993234 waagent[1898]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Sep 4 00:05:09.993234 waagent[1898]: 2: eth0 inet6 fe80::7e1e:52ff:fe34:62af/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Sep 4 00:05:10.018171 waagent[1898]: 2025-09-04T00:05:10.018131Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Sep 4 00:05:10.018171 waagent[1898]: Try `iptables -h' or 'iptables --help' for more information.)
Sep 4 00:05:10.018475 waagent[1898]: 2025-09-04T00:05:10.018444Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CCAC207A-9EB2-4215-A254-AB3CFD059E8B;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 4 00:05:10.026885 waagent[1898]: 2025-09-04T00:05:10.026842Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 4 00:05:10.026885 waagent[1898]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 00:05:10.026885 waagent[1898]: pkts bytes target prot opt in out source destination Sep 4 00:05:10.026885 waagent[1898]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 00:05:10.026885 waagent[1898]: pkts bytes target prot opt in out source destination Sep 4 00:05:10.026885 waagent[1898]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 00:05:10.026885 waagent[1898]: pkts bytes target prot opt in out source destination Sep 4 00:05:10.026885 waagent[1898]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 00:05:10.026885 waagent[1898]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 00:05:10.026885 waagent[1898]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 00:05:10.029570 waagent[1898]: 2025-09-04T00:05:10.029529Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 4 00:05:10.029570 waagent[1898]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 00:05:10.029570 waagent[1898]: pkts bytes target prot opt in out source destination Sep 4 00:05:10.029570 waagent[1898]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 00:05:10.029570 waagent[1898]: pkts bytes target prot opt in out source destination Sep 4 00:05:10.029570 waagent[1898]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 00:05:10.029570 waagent[1898]: pkts bytes target prot opt in out source destination Sep 4 00:05:10.029570 waagent[1898]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 
168.63.129.16 tcp dpt:53 Sep 4 00:05:10.029570 waagent[1898]: 4 348 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 00:05:10.029570 waagent[1898]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 00:05:18.697278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 00:05:18.698433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:21.713683 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 00:05:21.714633 systemd[1]: Started sshd@0-10.200.8.18:22-10.200.16.10:45400.service - OpenSSH per-connection server daemon (10.200.16.10:45400). Sep 4 00:05:22.691626 sshd[2045]: Accepted publickey for core from 10.200.16.10 port 45400 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:05:22.691817 sshd-session[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:05:22.696023 systemd-logind[1692]: New session 3 of user core. Sep 4 00:05:22.701739 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 00:05:22.946351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:22.951840 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:05:22.982676 kubelet[2053]: E0904 00:05:22.982628 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:05:22.985260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:05:22.985347 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 00:05:22.985665 systemd[1]: kubelet.service: Consumed 116ms CPU time, 108.5M memory peak. Sep 4 00:05:23.253526 systemd[1]: Started sshd@1-10.200.8.18:22-10.200.16.10:45410.service - OpenSSH per-connection server daemon (10.200.16.10:45410). Sep 4 00:05:23.889133 sshd[2062]: Accepted publickey for core from 10.200.16.10 port 45410 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:05:23.889883 sshd-session[2062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:05:23.893397 systemd-logind[1692]: New session 4 of user core. Sep 4 00:05:23.899724 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 00:05:24.336339 sshd[2064]: Connection closed by 10.200.16.10 port 45410 Sep 4 00:05:24.336837 sshd-session[2062]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:24.338592 systemd[1]: sshd@1-10.200.8.18:22-10.200.16.10:45410.service: Deactivated successfully. Sep 4 00:05:24.340119 systemd-logind[1692]: Session 4 logged out. Waiting for processes to exit. Sep 4 00:05:24.340337 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 00:05:24.341543 systemd-logind[1692]: Removed session 4. Sep 4 00:05:24.447446 systemd[1]: Started sshd@2-10.200.8.18:22-10.200.16.10:45424.service - OpenSSH per-connection server daemon (10.200.16.10:45424). Sep 4 00:05:25.082796 sshd[2070]: Accepted publickey for core from 10.200.16.10 port 45424 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:05:25.083521 sshd-session[2070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:05:25.086953 systemd-logind[1692]: New session 5 of user core. Sep 4 00:05:25.096754 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 4 00:05:25.527759 sshd[2072]: Connection closed by 10.200.16.10 port 45424 Sep 4 00:05:25.528176 sshd-session[2070]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:25.530384 systemd[1]: sshd@2-10.200.8.18:22-10.200.16.10:45424.service: Deactivated successfully. Sep 4 00:05:25.531457 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 00:05:25.532005 systemd-logind[1692]: Session 5 logged out. Waiting for processes to exit. Sep 4 00:05:25.532853 systemd-logind[1692]: Removed session 5. Sep 4 00:05:25.643432 systemd[1]: Started sshd@3-10.200.8.18:22-10.200.16.10:45432.service - OpenSSH per-connection server daemon (10.200.16.10:45432). Sep 4 00:05:26.279145 sshd[2078]: Accepted publickey for core from 10.200.16.10 port 45432 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:05:26.279889 sshd-session[2078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:05:26.283266 systemd-logind[1692]: New session 6 of user core. Sep 4 00:05:26.289731 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 00:05:26.726178 sshd[2080]: Connection closed by 10.200.16.10 port 45432 Sep 4 00:05:26.726504 sshd-session[2078]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:26.728209 systemd[1]: sshd@3-10.200.8.18:22-10.200.16.10:45432.service: Deactivated successfully. Sep 4 00:05:26.729308 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 00:05:26.730650 systemd-logind[1692]: Session 6 logged out. Waiting for processes to exit. Sep 4 00:05:26.731282 systemd-logind[1692]: Removed session 6. Sep 4 00:05:26.840407 systemd[1]: Started sshd@4-10.200.8.18:22-10.200.16.10:45438.service - OpenSSH per-connection server daemon (10.200.16.10:45438). 
Sep 4 00:05:27.476169 sshd[2086]: Accepted publickey for core from 10.200.16.10 port 45438 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:05:27.476879 sshd-session[2086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:05:27.480399 systemd-logind[1692]: New session 7 of user core. Sep 4 00:05:27.482737 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 00:05:27.847442 sudo[2089]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 00:05:27.847645 sudo[2089]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:05:27.862364 sudo[2089]: pam_unix(sudo:session): session closed for user root Sep 4 00:05:27.964914 sshd[2088]: Connection closed by 10.200.16.10 port 45438 Sep 4 00:05:27.965365 sshd-session[2086]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:27.967797 systemd[1]: sshd@4-10.200.8.18:22-10.200.16.10:45438.service: Deactivated successfully. Sep 4 00:05:27.968800 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 00:05:27.969403 systemd-logind[1692]: Session 7 logged out. Waiting for processes to exit. Sep 4 00:05:27.970220 systemd-logind[1692]: Removed session 7. Sep 4 00:05:28.079555 systemd[1]: Started sshd@5-10.200.8.18:22-10.200.16.10:45454.service - OpenSSH per-connection server daemon (10.200.16.10:45454). Sep 4 00:05:28.715474 sshd[2095]: Accepted publickey for core from 10.200.16.10 port 45454 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:05:28.716244 sshd-session[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:05:28.719798 systemd-logind[1692]: New session 8 of user core. Sep 4 00:05:28.725738 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 00:05:29.061582 sudo[2099]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 00:05:29.061775 sudo[2099]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:05:29.066814 sudo[2099]: pam_unix(sudo:session): session closed for user root Sep 4 00:05:29.069986 sudo[2098]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 00:05:29.070165 sudo[2098]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:05:29.076291 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 00:05:29.105933 augenrules[2121]: No rules Sep 4 00:05:29.106709 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 00:05:29.106891 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 00:05:29.107711 sudo[2098]: pam_unix(sudo:session): session closed for user root Sep 4 00:05:29.211206 sshd[2097]: Connection closed by 10.200.16.10 port 45454 Sep 4 00:05:29.211505 sshd-session[2095]: pam_unix(sshd:session): session closed for user core Sep 4 00:05:29.213285 systemd[1]: sshd@5-10.200.8.18:22-10.200.16.10:45454.service: Deactivated successfully. Sep 4 00:05:29.214978 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 00:05:29.215632 systemd-logind[1692]: Session 8 logged out. Waiting for processes to exit. Sep 4 00:05:29.216271 systemd-logind[1692]: Removed session 8. Sep 4 00:05:29.306677 chronyd[1726]: Selected source PHC0 Sep 4 00:05:29.321554 systemd[1]: Started sshd@6-10.200.8.18:22-10.200.16.10:45456.service - OpenSSH per-connection server daemon (10.200.16.10:45456). 
Sep 4 00:05:29.954503 sshd[2130]: Accepted publickey for core from 10.200.16.10 port 45456 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4 Sep 4 00:05:29.955233 sshd-session[2130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:05:29.958662 systemd-logind[1692]: New session 9 of user core. Sep 4 00:05:29.967756 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 00:05:30.299303 sudo[2133]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 00:05:30.299485 sudo[2133]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:05:31.600928 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 00:05:31.611874 (dockerd)[2151]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 00:05:31.815426 dockerd[2151]: time="2025-09-04T00:05:31.815391298Z" level=info msg="Starting up" Sep 4 00:05:31.816311 dockerd[2151]: time="2025-09-04T00:05:31.816276889Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 4 00:05:31.955886 dockerd[2151]: time="2025-09-04T00:05:31.955811418Z" level=info msg="Loading containers: start." Sep 4 00:05:31.966638 kernel: Initializing XFRM netlink socket Sep 4 00:05:32.129262 systemd-networkd[1514]: docker0: Link UP Sep 4 00:05:32.144417 dockerd[2151]: time="2025-09-04T00:05:32.144391976Z" level=info msg="Loading containers: done." 
Sep 4 00:05:32.162547 dockerd[2151]: time="2025-09-04T00:05:32.162520469Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 00:05:32.162653 dockerd[2151]: time="2025-09-04T00:05:32.162572427Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 4 00:05:32.162690 dockerd[2151]: time="2025-09-04T00:05:32.162660775Z" level=info msg="Initializing buildkit" Sep 4 00:05:32.209336 dockerd[2151]: time="2025-09-04T00:05:32.209176244Z" level=info msg="Completed buildkit initialization" Sep 4 00:05:32.214209 dockerd[2151]: time="2025-09-04T00:05:32.214172320Z" level=info msg="Daemon has completed initialization" Sep 4 00:05:32.214684 dockerd[2151]: time="2025-09-04T00:05:32.214592187Z" level=info msg="API listen on /run/docker.sock" Sep 4 00:05:32.214346 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 00:05:33.098819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 00:05:33.099778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:33.403151 containerd[1717]: time="2025-09-04T00:05:33.403002195Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 4 00:05:33.595423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 00:05:33.598213 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:05:33.631624 kubelet[2359]: E0904 00:05:33.630486 2359 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:05:33.634769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:05:33.634875 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:05:33.635095 systemd[1]: kubelet.service: Consumed 112ms CPU time, 110.5M memory peak. Sep 4 00:05:34.293177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697942120.mount: Deactivated successfully. Sep 4 00:05:35.344060 containerd[1717]: time="2025-09-04T00:05:35.344030390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:35.346301 containerd[1717]: time="2025-09-04T00:05:35.346272083Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079639" Sep 4 00:05:35.349052 containerd[1717]: time="2025-09-04T00:05:35.349017939Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:35.354622 containerd[1717]: time="2025-09-04T00:05:35.354377764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:35.354945 containerd[1717]: time="2025-09-04T00:05:35.354922644Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 1.951891029s" Sep 4 00:05:35.354981 containerd[1717]: time="2025-09-04T00:05:35.354953190Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 4 00:05:35.355549 containerd[1717]: time="2025-09-04T00:05:35.355533592Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 4 00:05:36.495309 containerd[1717]: time="2025-09-04T00:05:36.495281426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:36.497597 containerd[1717]: time="2025-09-04T00:05:36.497497503Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714689" Sep 4 00:05:36.500032 containerd[1717]: time="2025-09-04T00:05:36.500013059Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:36.503501 containerd[1717]: time="2025-09-04T00:05:36.503477004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:36.503975 containerd[1717]: time="2025-09-04T00:05:36.503953679Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id 
\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.148334382s" Sep 4 00:05:36.504036 containerd[1717]: time="2025-09-04T00:05:36.504026432Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 4 00:05:36.504461 containerd[1717]: time="2025-09-04T00:05:36.504438210Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 4 00:05:37.529062 containerd[1717]: time="2025-09-04T00:05:37.529034983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:37.532248 containerd[1717]: time="2025-09-04T00:05:37.532222895Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782435" Sep 4 00:05:37.536091 containerd[1717]: time="2025-09-04T00:05:37.536059042Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:37.539787 containerd[1717]: time="2025-09-04T00:05:37.539752144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:37.540306 containerd[1717]: time="2025-09-04T00:05:37.540224146Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.035765222s" Sep 4 00:05:37.540306 containerd[1717]: time="2025-09-04T00:05:37.540246868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 4 00:05:37.540617 containerd[1717]: time="2025-09-04T00:05:37.540546165Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 4 00:05:38.318663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142463755.mount: Deactivated successfully. Sep 4 00:05:38.634778 containerd[1717]: time="2025-09-04T00:05:38.634754422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:38.637813 containerd[1717]: time="2025-09-04T00:05:38.637778920Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384263" Sep 4 00:05:38.640894 containerd[1717]: time="2025-09-04T00:05:38.640469478Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:38.643407 containerd[1717]: time="2025-09-04T00:05:38.643386545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:38.643663 containerd[1717]: time="2025-09-04T00:05:38.643644422Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 1.103076821s" Sep 4 00:05:38.643717 containerd[1717]: time="2025-09-04T00:05:38.643707578Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 4 00:05:38.643995 containerd[1717]: time="2025-09-04T00:05:38.643975763Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 00:05:39.126890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4103447016.mount: Deactivated successfully. Sep 4 00:05:39.964684 containerd[1717]: time="2025-09-04T00:05:39.964657933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:39.968248 containerd[1717]: time="2025-09-04T00:05:39.968220904Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 4 00:05:39.971502 containerd[1717]: time="2025-09-04T00:05:39.971469730Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:39.975281 containerd[1717]: time="2025-09-04T00:05:39.975144990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:39.975676 containerd[1717]: time="2025-09-04T00:05:39.975656870Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.331602181s" Sep 4 00:05:39.975716 containerd[1717]: time="2025-09-04T00:05:39.975680491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 4 00:05:39.976251 containerd[1717]: time="2025-09-04T00:05:39.976226846Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 00:05:40.509897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3967393439.mount: Deactivated successfully. Sep 4 00:05:40.527016 containerd[1717]: time="2025-09-04T00:05:40.526992649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:05:40.529339 containerd[1717]: time="2025-09-04T00:05:40.529312806Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 4 00:05:40.534156 containerd[1717]: time="2025-09-04T00:05:40.534126538Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:05:40.537701 containerd[1717]: time="2025-09-04T00:05:40.537666757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:05:40.538050 containerd[1717]: time="2025-09-04T00:05:40.537970959Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 561.723611ms" Sep 4 00:05:40.538050 containerd[1717]: time="2025-09-04T00:05:40.537990928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 00:05:40.538346 containerd[1717]: time="2025-09-04T00:05:40.538283130Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 4 00:05:41.048317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638473965.mount: Deactivated successfully. Sep 4 00:05:42.571663 containerd[1717]: time="2025-09-04T00:05:42.571635294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:42.574359 containerd[1717]: time="2025-09-04T00:05:42.574331096Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910717" Sep 4 00:05:42.578024 containerd[1717]: time="2025-09-04T00:05:42.577990181Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:42.582095 containerd[1717]: time="2025-09-04T00:05:42.582061860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:05:42.582744 containerd[1717]: time="2025-09-04T00:05:42.582628729Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 2.044325723s" Sep 4 00:05:42.582744 containerd[1717]: time="2025-09-04T00:05:42.582652213Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 4 00:05:43.848995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 00:05:43.851760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:44.318972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:44.326955 (kubelet)[2574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:05:44.367225 kubelet[2574]: E0904 00:05:44.367200 2574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:05:44.369168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:05:44.369282 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:05:44.369530 systemd[1]: kubelet.service: Consumed 131ms CPU time, 111.2M memory peak. Sep 4 00:05:44.742013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:44.742274 systemd[1]: kubelet.service: Consumed 131ms CPU time, 111.2M memory peak. Sep 4 00:05:44.743835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:44.764722 systemd[1]: Reload requested from client PID 2588 ('systemctl') (unit session-9.scope)... Sep 4 00:05:44.764811 systemd[1]: Reloading... Sep 4 00:05:44.838622 zram_generator::config[2631]: No configuration found. 
Sep 4 00:05:44.966061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:05:45.048794 systemd[1]: Reloading finished in 283 ms. Sep 4 00:05:45.077264 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 00:05:45.077319 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 00:05:45.077508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:45.077537 systemd[1]: kubelet.service: Consumed 57ms CPU time, 70.1M memory peak. Sep 4 00:05:45.080059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:45.592465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:45.595593 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:05:45.627351 kubelet[2701]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:05:45.627351 kubelet[2701]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 00:05:45.627351 kubelet[2701]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 00:05:45.627572 kubelet[2701]: I0904 00:05:45.627399 2701 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 00:05:45.766121 kubelet[2701]: I0904 00:05:45.766087 2701 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 00:05:45.766121 kubelet[2701]: I0904 00:05:45.766107 2701 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 00:05:45.766611 kubelet[2701]: I0904 00:05:45.766556 2701 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 00:05:45.786076 kubelet[2701]: E0904 00:05:45.786053 2701 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.18:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:05:45.786744 kubelet[2701]: I0904 00:05:45.786729 2701 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:05:45.791123 kubelet[2701]: I0904 00:05:45.791114 2701 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 00:05:45.793689 kubelet[2701]: I0904 00:05:45.793659 2701 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 00:05:45.794198 kubelet[2701]: I0904 00:05:45.794183 2701 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 00:05:45.794333 kubelet[2701]: I0904 00:05:45.794298 2701 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 00:05:45.794452 kubelet[2701]: I0904 00:05:45.794331 2701 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-f08c63113b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Sep 4 00:05:45.794546 kubelet[2701]: I0904 00:05:45.794459 2701 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 00:05:45.794546 kubelet[2701]: I0904 00:05:45.794467 2701 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 00:05:45.794546 kubelet[2701]: I0904 00:05:45.794537 2701 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:05:45.796698 kubelet[2701]: I0904 00:05:45.796274 2701 kubelet.go:408] "Attempting to sync node with API server" Sep 4 00:05:45.796698 kubelet[2701]: I0904 00:05:45.796290 2701 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 00:05:45.796698 kubelet[2701]: I0904 00:05:45.796317 2701 kubelet.go:314] "Adding apiserver pod source" Sep 4 00:05:45.796698 kubelet[2701]: I0904 00:05:45.796330 2701 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 00:05:45.802773 kubelet[2701]: W0904 00:05:45.802739 2701 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-f08c63113b&limit=500&resourceVersion=0": dial tcp 10.200.8.18:6443: connect: connection refused Sep 4 00:05:45.802880 kubelet[2701]: E0904 00:05:45.802869 2701 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-f08c63113b&limit=500&resourceVersion=0\": dial tcp 10.200.8.18:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:05:45.803064 kubelet[2701]: I0904 00:05:45.803056 2701 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 00:05:45.803453 kubelet[2701]: I0904 00:05:45.803444 2701 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are 
in static kubelet mode" Sep 4 00:05:45.804000 kubelet[2701]: W0904 00:05:45.803980 2701 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 00:05:45.805588 kubelet[2701]: W0904 00:05:45.805446 2701 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.18:6443: connect: connection refused Sep 4 00:05:45.805588 kubelet[2701]: E0904 00:05:45.805492 2701 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.18:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:05:45.806921 kubelet[2701]: I0904 00:05:45.806902 2701 server.go:1274] "Started kubelet" Sep 4 00:05:45.808291 kubelet[2701]: I0904 00:05:45.808253 2701 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 00:05:45.814261 kubelet[2701]: I0904 00:05:45.813732 2701 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 00:05:45.815217 kubelet[2701]: I0904 00:05:45.814431 2701 server.go:449] "Adding debug handlers to kubelet server" Sep 4 00:05:45.815328 kubelet[2701]: E0904 00:05:45.814199 2701 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-n-f08c63113b.1861eb92e5f341b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-n-f08c63113b,UID:ci-4372.1.0-n-f08c63113b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-n-f08c63113b,},FirstTimestamp:2025-09-04 00:05:45.806881201 +0000 UTC m=+0.208428172,LastTimestamp:2025-09-04 00:05:45.806881201 +0000 UTC m=+0.208428172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-n-f08c63113b,}" Sep 4 00:05:45.816384 kubelet[2701]: I0904 00:05:45.816364 2701 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 00:05:45.816627 kubelet[2701]: I0904 00:05:45.816618 2701 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 00:05:45.816680 kubelet[2701]: I0904 00:05:45.816670 2701 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 00:05:45.816736 kubelet[2701]: I0904 00:05:45.816647 2701 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 00:05:45.816809 kubelet[2701]: E0904 00:05:45.816799 2701 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-f08c63113b\" not found" Sep 4 00:05:45.816952 kubelet[2701]: I0904 00:05:45.816945 2701 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 00:05:45.817041 kubelet[2701]: I0904 00:05:45.817036 2701 reconciler.go:26] "Reconciler: start to sync state" Sep 4 00:05:45.817741 kubelet[2701]: E0904 00:05:45.817719 2701 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-f08c63113b?timeout=10s\": dial tcp 10.200.8.18:6443: connect: connection refused" interval="200ms" Sep 4 00:05:45.817957 kubelet[2701]: I0904 00:05:45.817948 2701 factory.go:221] Registration of the systemd container factory successfully Sep 4 
00:05:45.818076 kubelet[2701]: I0904 00:05:45.818067 2701 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 00:05:45.820027 kubelet[2701]: W0904 00:05:45.819997 2701 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.18:6443: connect: connection refused Sep 4 00:05:45.820107 kubelet[2701]: E0904 00:05:45.820096 2701 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.18:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:05:45.820292 kubelet[2701]: I0904 00:05:45.820284 2701 factory.go:221] Registration of the containerd container factory successfully Sep 4 00:05:45.826633 kubelet[2701]: I0904 00:05:45.825372 2701 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 00:05:45.826633 kubelet[2701]: I0904 00:05:45.826231 2701 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 00:05:45.826633 kubelet[2701]: I0904 00:05:45.826246 2701 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 00:05:45.826633 kubelet[2701]: I0904 00:05:45.826261 2701 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 00:05:45.826633 kubelet[2701]: E0904 00:05:45.826287 2701 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:05:45.827793 kubelet[2701]: E0904 00:05:45.827778 2701 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 00:05:45.831625 kubelet[2701]: W0904 00:05:45.831578 2701 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.18:6443: connect: connection refused Sep 4 00:05:45.831695 kubelet[2701]: E0904 00:05:45.831633 2701 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.18:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:05:45.843820 kubelet[2701]: I0904 00:05:45.843219 2701 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 00:05:45.843820 kubelet[2701]: I0904 00:05:45.843230 2701 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 00:05:45.843820 kubelet[2701]: I0904 00:05:45.843243 2701 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:05:45.847713 kubelet[2701]: I0904 00:05:45.847699 2701 policy_none.go:49] "None policy: Start" Sep 4 00:05:45.848085 kubelet[2701]: I0904 00:05:45.848048 2701 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 00:05:45.848085 kubelet[2701]: I0904 00:05:45.848061 2701 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:05:45.858773 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 00:05:45.867299 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 00:05:45.869553 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 00:05:45.877051 kubelet[2701]: I0904 00:05:45.877038 2701 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:05:45.877443 kubelet[2701]: I0904 00:05:45.877434 2701 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:05:45.877688 kubelet[2701]: I0904 00:05:45.877660 2701 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:05:45.877916 kubelet[2701]: I0904 00:05:45.877907 2701 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:05:45.879210 kubelet[2701]: E0904 00:05:45.879199 2701 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-n-f08c63113b\" not found" Sep 4 00:05:45.907909 kubelet[2701]: E0904 00:05:45.907857 2701 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-n-f08c63113b.1861eb92e5f341b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-n-f08c63113b,UID:ci-4372.1.0-n-f08c63113b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-n-f08c63113b,},FirstTimestamp:2025-09-04 00:05:45.806881201 +0000 UTC m=+0.208428172,LastTimestamp:2025-09-04 00:05:45.806881201 +0000 UTC m=+0.208428172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-n-f08c63113b,}" Sep 4 00:05:45.936442 systemd[1]: Created slice kubepods-burstable-pod8e5611c4e6e2a180358d0d044b611c15.slice - libcontainer container kubepods-burstable-pod8e5611c4e6e2a180358d0d044b611c15.slice. 
Sep 4 00:05:45.953630 systemd[1]: Created slice kubepods-burstable-pod0b692af34f6273c7c898a1904b1dc7c5.slice - libcontainer container kubepods-burstable-pod0b692af34f6273c7c898a1904b1dc7c5.slice. Sep 4 00:05:45.956328 systemd[1]: Created slice kubepods-burstable-pod6fa26ab31163669cb5833e6d452bf6fb.slice - libcontainer container kubepods-burstable-pod6fa26ab31163669cb5833e6d452bf6fb.slice. Sep 4 00:05:45.979852 kubelet[2701]: I0904 00:05:45.979832 2701 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-n-f08c63113b" Sep 4 00:05:45.980119 kubelet[2701]: E0904 00:05:45.980093 2701 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.18:6443/api/v1/nodes\": dial tcp 10.200.8.18:6443: connect: connection refused" node="ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.018542 kubelet[2701]: I0904 00:05:46.018376 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e5611c4e6e2a180358d0d044b611c15-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-f08c63113b\" (UID: \"8e5611c4e6e2a180358d0d044b611c15\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.018542 kubelet[2701]: I0904 00:05:46.018398 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e5611c4e6e2a180358d0d044b611c15-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-f08c63113b\" (UID: \"8e5611c4e6e2a180358d0d044b611c15\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.018542 kubelet[2701]: I0904 00:05:46.018413 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b692af34f6273c7c898a1904b1dc7c5-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-f08c63113b\" (UID: \"0b692af34f6273c7c898a1904b1dc7c5\") " 
pod="kube-system/kube-controller-manager-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.018542 kubelet[2701]: I0904 00:05:46.018427 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b692af34f6273c7c898a1904b1dc7c5-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-f08c63113b\" (UID: \"0b692af34f6273c7c898a1904b1dc7c5\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.018542 kubelet[2701]: I0904 00:05:46.018439 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b692af34f6273c7c898a1904b1dc7c5-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-f08c63113b\" (UID: \"0b692af34f6273c7c898a1904b1dc7c5\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.018704 kubelet[2701]: I0904 00:05:46.018451 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b692af34f6273c7c898a1904b1dc7c5-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-f08c63113b\" (UID: \"0b692af34f6273c7c898a1904b1dc7c5\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.018704 kubelet[2701]: I0904 00:05:46.018463 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b692af34f6273c7c898a1904b1dc7c5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-f08c63113b\" (UID: \"0b692af34f6273c7c898a1904b1dc7c5\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.018704 kubelet[2701]: I0904 00:05:46.018476 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/6fa26ab31163669cb5833e6d452bf6fb-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-f08c63113b\" (UID: \"6fa26ab31163669cb5833e6d452bf6fb\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.018704 kubelet[2701]: I0904 00:05:46.018490 2701 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e5611c4e6e2a180358d0d044b611c15-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-f08c63113b\" (UID: \"8e5611c4e6e2a180358d0d044b611c15\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.018704 kubelet[2701]: E0904 00:05:46.018526 2701 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-f08c63113b?timeout=10s\": dial tcp 10.200.8.18:6443: connect: connection refused" interval="400ms" Sep 4 00:05:46.181865 kubelet[2701]: I0904 00:05:46.181711 2701 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.181940 kubelet[2701]: E0904 00:05:46.181901 2701 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.18:6443/api/v1/nodes\": dial tcp 10.200.8.18:6443: connect: connection refused" node="ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.252046 containerd[1717]: time="2025-09-04T00:05:46.252019598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-f08c63113b,Uid:8e5611c4e6e2a180358d0d044b611c15,Namespace:kube-system,Attempt:0,}" Sep 4 00:05:46.256457 containerd[1717]: time="2025-09-04T00:05:46.256413906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-f08c63113b,Uid:0b692af34f6273c7c898a1904b1dc7c5,Namespace:kube-system,Attempt:0,}" Sep 4 00:05:46.259071 containerd[1717]: 
time="2025-09-04T00:05:46.259050768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-f08c63113b,Uid:6fa26ab31163669cb5833e6d452bf6fb,Namespace:kube-system,Attempt:0,}" Sep 4 00:05:46.307566 containerd[1717]: time="2025-09-04T00:05:46.307533285Z" level=info msg="connecting to shim fd5d062ece6f9d7315bbdac58d0b79b8d4a58770197aa60c24ed3a01c0bc2bc1" address="unix:///run/containerd/s/5a70aed88e40bad51530a4b98f0ca12019656ee5d4e13c4d97ad6f0b664212fd" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:05:46.334850 systemd[1]: Started cri-containerd-fd5d062ece6f9d7315bbdac58d0b79b8d4a58770197aa60c24ed3a01c0bc2bc1.scope - libcontainer container fd5d062ece6f9d7315bbdac58d0b79b8d4a58770197aa60c24ed3a01c0bc2bc1. Sep 4 00:05:46.338422 containerd[1717]: time="2025-09-04T00:05:46.338401785Z" level=info msg="connecting to shim 986bb094c0b549ffc79a8a21fb6b7418019585839d62ab889d076634e72f97a7" address="unix:///run/containerd/s/927a915a9e5bfba2d9e6a4c3f6e69979fb4273a9fe29b0ef9600d069b7b814a4" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:05:46.354644 containerd[1717]: time="2025-09-04T00:05:46.353835282Z" level=info msg="connecting to shim 5f7e6c8d447242b231083fb564aa27ca7eeec54ce65ee68331b99bb8c6614f23" address="unix:///run/containerd/s/c34ff0a6e34bd5e01f36aec48020f8ed9e0568e5a69daa88039037f88dfe3031" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:05:46.378720 systemd[1]: Started cri-containerd-986bb094c0b549ffc79a8a21fb6b7418019585839d62ab889d076634e72f97a7.scope - libcontainer container 986bb094c0b549ffc79a8a21fb6b7418019585839d62ab889d076634e72f97a7. Sep 4 00:05:46.382574 systemd[1]: Started cri-containerd-5f7e6c8d447242b231083fb564aa27ca7eeec54ce65ee68331b99bb8c6614f23.scope - libcontainer container 5f7e6c8d447242b231083fb564aa27ca7eeec54ce65ee68331b99bb8c6614f23. 
Sep 4 00:05:46.406293 containerd[1717]: time="2025-09-04T00:05:46.406162364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-f08c63113b,Uid:8e5611c4e6e2a180358d0d044b611c15,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd5d062ece6f9d7315bbdac58d0b79b8d4a58770197aa60c24ed3a01c0bc2bc1\"" Sep 4 00:05:46.410569 containerd[1717]: time="2025-09-04T00:05:46.410547784Z" level=info msg="CreateContainer within sandbox \"fd5d062ece6f9d7315bbdac58d0b79b8d4a58770197aa60c24ed3a01c0bc2bc1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 00:05:46.419719 kubelet[2701]: E0904 00:05:46.419681 2701 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-f08c63113b?timeout=10s\": dial tcp 10.200.8.18:6443: connect: connection refused" interval="800ms" Sep 4 00:05:46.432496 containerd[1717]: time="2025-09-04T00:05:46.431985825Z" level=info msg="Container 59b8c14434fbf39f8cbe37ecb545f159308e6b6ab5cfff9fe70bbc7c85f101db: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:46.448628 containerd[1717]: time="2025-09-04T00:05:46.448573891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-f08c63113b,Uid:0b692af34f6273c7c898a1904b1dc7c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f7e6c8d447242b231083fb564aa27ca7eeec54ce65ee68331b99bb8c6614f23\"" Sep 4 00:05:46.450173 containerd[1717]: time="2025-09-04T00:05:46.450155335Z" level=info msg="CreateContainer within sandbox \"5f7e6c8d447242b231083fb564aa27ca7eeec54ce65ee68331b99bb8c6614f23\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 00:05:46.450991 containerd[1717]: time="2025-09-04T00:05:46.450964426Z" level=info msg="CreateContainer within sandbox \"fd5d062ece6f9d7315bbdac58d0b79b8d4a58770197aa60c24ed3a01c0bc2bc1\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"59b8c14434fbf39f8cbe37ecb545f159308e6b6ab5cfff9fe70bbc7c85f101db\"" Sep 4 00:05:46.451306 containerd[1717]: time="2025-09-04T00:05:46.451288193Z" level=info msg="StartContainer for \"59b8c14434fbf39f8cbe37ecb545f159308e6b6ab5cfff9fe70bbc7c85f101db\"" Sep 4 00:05:46.452251 containerd[1717]: time="2025-09-04T00:05:46.451907142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-f08c63113b,Uid:6fa26ab31163669cb5833e6d452bf6fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"986bb094c0b549ffc79a8a21fb6b7418019585839d62ab889d076634e72f97a7\"" Sep 4 00:05:46.452251 containerd[1717]: time="2025-09-04T00:05:46.452215056Z" level=info msg="connecting to shim 59b8c14434fbf39f8cbe37ecb545f159308e6b6ab5cfff9fe70bbc7c85f101db" address="unix:///run/containerd/s/5a70aed88e40bad51530a4b98f0ca12019656ee5d4e13c4d97ad6f0b664212fd" protocol=ttrpc version=3 Sep 4 00:05:46.454945 containerd[1717]: time="2025-09-04T00:05:46.454923260Z" level=info msg="CreateContainer within sandbox \"986bb094c0b549ffc79a8a21fb6b7418019585839d62ab889d076634e72f97a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 00:05:46.466857 systemd[1]: Started cri-containerd-59b8c14434fbf39f8cbe37ecb545f159308e6b6ab5cfff9fe70bbc7c85f101db.scope - libcontainer container 59b8c14434fbf39f8cbe37ecb545f159308e6b6ab5cfff9fe70bbc7c85f101db. 
Sep 4 00:05:46.473772 containerd[1717]: time="2025-09-04T00:05:46.473745694Z" level=info msg="Container 7962233c19a672f1926161616e8b959be61fab77cea4dbc7760ff6a4da80b60d: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:46.480824 containerd[1717]: time="2025-09-04T00:05:46.480804289Z" level=info msg="Container f5c0ff75288f1908dfbbf326d83812c34ff5032a80520a896bdc5dc221af8ebf: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:05:46.489781 containerd[1717]: time="2025-09-04T00:05:46.489759141Z" level=info msg="CreateContainer within sandbox \"986bb094c0b549ffc79a8a21fb6b7418019585839d62ab889d076634e72f97a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7962233c19a672f1926161616e8b959be61fab77cea4dbc7760ff6a4da80b60d\"" Sep 4 00:05:46.490210 containerd[1717]: time="2025-09-04T00:05:46.490178548Z" level=info msg="StartContainer for \"7962233c19a672f1926161616e8b959be61fab77cea4dbc7760ff6a4da80b60d\"" Sep 4 00:05:46.490993 containerd[1717]: time="2025-09-04T00:05:46.490968259Z" level=info msg="connecting to shim 7962233c19a672f1926161616e8b959be61fab77cea4dbc7760ff6a4da80b60d" address="unix:///run/containerd/s/927a915a9e5bfba2d9e6a4c3f6e69979fb4273a9fe29b0ef9600d069b7b814a4" protocol=ttrpc version=3 Sep 4 00:05:46.502111 containerd[1717]: time="2025-09-04T00:05:46.502057421Z" level=info msg="CreateContainer within sandbox \"5f7e6c8d447242b231083fb564aa27ca7eeec54ce65ee68331b99bb8c6614f23\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f5c0ff75288f1908dfbbf326d83812c34ff5032a80520a896bdc5dc221af8ebf\"" Sep 4 00:05:46.502911 containerd[1717]: time="2025-09-04T00:05:46.502887576Z" level=info msg="StartContainer for \"f5c0ff75288f1908dfbbf326d83812c34ff5032a80520a896bdc5dc221af8ebf\"" Sep 4 00:05:46.504318 containerd[1717]: time="2025-09-04T00:05:46.503728362Z" level=info msg="connecting to shim f5c0ff75288f1908dfbbf326d83812c34ff5032a80520a896bdc5dc221af8ebf" 
address="unix:///run/containerd/s/c34ff0a6e34bd5e01f36aec48020f8ed9e0568e5a69daa88039037f88dfe3031" protocol=ttrpc version=3 Sep 4 00:05:46.507919 systemd[1]: Started cri-containerd-7962233c19a672f1926161616e8b959be61fab77cea4dbc7760ff6a4da80b60d.scope - libcontainer container 7962233c19a672f1926161616e8b959be61fab77cea4dbc7760ff6a4da80b60d. Sep 4 00:05:46.519491 containerd[1717]: time="2025-09-04T00:05:46.519304770Z" level=info msg="StartContainer for \"59b8c14434fbf39f8cbe37ecb545f159308e6b6ab5cfff9fe70bbc7c85f101db\" returns successfully" Sep 4 00:05:46.534369 systemd[1]: Started cri-containerd-f5c0ff75288f1908dfbbf326d83812c34ff5032a80520a896bdc5dc221af8ebf.scope - libcontainer container f5c0ff75288f1908dfbbf326d83812c34ff5032a80520a896bdc5dc221af8ebf. Sep 4 00:05:46.563394 containerd[1717]: time="2025-09-04T00:05:46.563371361Z" level=info msg="StartContainer for \"7962233c19a672f1926161616e8b959be61fab77cea4dbc7760ff6a4da80b60d\" returns successfully" Sep 4 00:05:46.583798 kubelet[2701]: I0904 00:05:46.583708 2701 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.584405 kubelet[2701]: E0904 00:05:46.584302 2701 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.18:6443/api/v1/nodes\": dial tcp 10.200.8.18:6443: connect: connection refused" node="ci-4372.1.0-n-f08c63113b" Sep 4 00:05:46.592998 containerd[1717]: time="2025-09-04T00:05:46.592902868Z" level=info msg="StartContainer for \"f5c0ff75288f1908dfbbf326d83812c34ff5032a80520a896bdc5dc221af8ebf\" returns successfully" Sep 4 00:05:47.387612 kubelet[2701]: I0904 00:05:47.387533 2701 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-n-f08c63113b" Sep 4 00:05:47.958118 kubelet[2701]: E0904 00:05:47.958093 2701 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.1.0-n-f08c63113b\" not found" node="ci-4372.1.0-n-f08c63113b" Sep 4 
00:05:48.013177 kubelet[2701]: I0904 00:05:48.013155 2701 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.1.0-n-f08c63113b" Sep 4 00:05:48.805728 kubelet[2701]: I0904 00:05:48.805696 2701 apiserver.go:52] "Watching apiserver" Sep 4 00:05:48.817430 kubelet[2701]: I0904 00:05:48.817414 2701 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 00:05:49.294195 kubelet[2701]: W0904 00:05:49.294139 2701 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 00:05:49.611918 kubelet[2701]: W0904 00:05:49.611846 2701 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 00:05:50.033887 systemd[1]: Reload requested from client PID 2973 ('systemctl') (unit session-9.scope)... Sep 4 00:05:50.033899 systemd[1]: Reloading... Sep 4 00:05:50.116623 zram_generator::config[3019]: No configuration found. Sep 4 00:05:50.187505 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:05:50.278555 systemd[1]: Reloading finished in 244 ms. Sep 4 00:05:50.298164 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:50.317277 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 00:05:50.317487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:50.317529 systemd[1]: kubelet.service: Consumed 454ms CPU time, 128.8M memory peak. Sep 4 00:05:50.318896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:05:50.364982 update_engine[1695]: I20250904 00:05:50.364691 1695 update_attempter.cc:509] Updating boot flags... 
Sep 4 00:05:50.837308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:05:50.842851 (kubelet)[3131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:05:50.875720 kubelet[3131]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:05:50.876623 kubelet[3131]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 00:05:50.876623 kubelet[3131]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:05:50.876623 kubelet[3131]: I0904 00:05:50.875962 3131 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 00:05:50.883175 kubelet[3131]: I0904 00:05:50.883161 3131 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 00:05:50.883240 kubelet[3131]: I0904 00:05:50.883235 3131 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 00:05:50.883391 kubelet[3131]: I0904 00:05:50.883387 3131 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 00:05:50.884169 kubelet[3131]: I0904 00:05:50.884159 3131 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 4 00:05:50.887641 kubelet[3131]: I0904 00:05:50.886257 3131 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:05:50.890468 kubelet[3131]: I0904 00:05:50.890458 3131 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 00:05:50.892728 kubelet[3131]: I0904 00:05:50.892712 3131 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 00:05:50.892869 kubelet[3131]: I0904 00:05:50.892861 3131 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 00:05:50.892987 kubelet[3131]: I0904 00:05:50.892964 3131 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 00:05:50.893245 kubelet[3131]: I0904 00:05:50.893030 3131 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-f08c63113b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signa
l":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 00:05:50.893353 kubelet[3131]: I0904 00:05:50.893348 3131 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 00:05:50.893386 kubelet[3131]: I0904 00:05:50.893382 3131 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 00:05:50.893433 kubelet[3131]: I0904 00:05:50.893429 3131 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:05:50.893531 kubelet[3131]: I0904 00:05:50.893527 3131 kubelet.go:408] "Attempting to sync node with API server" Sep 4 00:05:50.893569 kubelet[3131]: I0904 00:05:50.893565 3131 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 00:05:50.893646 kubelet[3131]: I0904 00:05:50.893641 3131 kubelet.go:314] "Adding apiserver pod source" Sep 4 00:05:50.893702 kubelet[3131]: I0904 00:05:50.893696 3131 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 00:05:50.898621 kubelet[3131]: I0904 00:05:50.897992 3131 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 00:05:50.898621 kubelet[3131]: I0904 00:05:50.898295 3131 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 00:05:50.898721 kubelet[3131]: I0904 00:05:50.898600 3131 server.go:1274] "Started kubelet" Sep 4 00:05:50.904124 kubelet[3131]: I0904 00:05:50.904109 3131 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 
00:05:50.911234 kubelet[3131]: I0904 00:05:50.911212 3131 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 00:05:50.912359 kubelet[3131]: I0904 00:05:50.912350 3131 server.go:449] "Adding debug handlers to kubelet server" Sep 4 00:05:50.912780 kubelet[3131]: I0904 00:05:50.912720 3131 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 00:05:50.912964 kubelet[3131]: I0904 00:05:50.912952 3131 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 00:05:50.913127 kubelet[3131]: I0904 00:05:50.913116 3131 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 00:05:50.913214 kubelet[3131]: I0904 00:05:50.913207 3131 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 00:05:50.913393 kubelet[3131]: E0904 00:05:50.913384 3131 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-f08c63113b\" not found" Sep 4 00:05:50.914922 kubelet[3131]: I0904 00:05:50.914907 3131 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 00:05:50.915076 kubelet[3131]: I0904 00:05:50.915069 3131 reconciler.go:26] "Reconciler: start to sync state" Sep 4 00:05:50.919624 kubelet[3131]: I0904 00:05:50.919268 3131 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 00:05:50.920515 kubelet[3131]: I0904 00:05:50.920500 3131 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 00:05:50.920592 kubelet[3131]: I0904 00:05:50.920587 3131 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 00:05:50.920656 kubelet[3131]: I0904 00:05:50.920651 3131 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 00:05:50.920723 kubelet[3131]: E0904 00:05:50.920714 3131 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:05:50.924148 kubelet[3131]: I0904 00:05:50.924128 3131 factory.go:221] Registration of the systemd container factory successfully Sep 4 00:05:50.924210 kubelet[3131]: I0904 00:05:50.924197 3131 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 00:05:50.925045 kubelet[3131]: E0904 00:05:50.924976 3131 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 00:05:50.929484 kubelet[3131]: I0904 00:05:50.929467 3131 factory.go:221] Registration of the containerd container factory successfully Sep 4 00:05:50.966250 kubelet[3131]: I0904 00:05:50.966240 3131 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 00:05:50.966471 kubelet[3131]: I0904 00:05:50.966316 3131 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 00:05:50.966471 kubelet[3131]: I0904 00:05:50.966327 3131 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:05:50.966471 kubelet[3131]: I0904 00:05:50.966413 3131 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 00:05:50.966471 kubelet[3131]: I0904 00:05:50.966418 3131 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 00:05:50.966471 kubelet[3131]: I0904 00:05:50.966429 3131 policy_none.go:49] "None policy: Start" Sep 4 00:05:50.966949 kubelet[3131]: I0904 00:05:50.966938 3131 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 00:05:50.966999 kubelet[3131]: I0904 00:05:50.966953 3131 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:05:50.967069 kubelet[3131]: I0904 00:05:50.967061 3131 state_mem.go:75] "Updated machine memory state" Sep 4 00:05:50.969784 kubelet[3131]: I0904 00:05:50.969770 3131 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:05:50.969880 kubelet[3131]: I0904 00:05:50.969870 3131 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:05:50.969907 kubelet[3131]: I0904 00:05:50.969880 3131 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:05:50.970942 kubelet[3131]: I0904 00:05:50.970895 3131 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:05:51.031443 kubelet[3131]: W0904 00:05:51.031380 3131 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 00:05:51.031579 kubelet[3131]: E0904 00:05:51.031564 3131 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.1.0-n-f08c63113b\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.031579 kubelet[3131]: W0904 00:05:51.031390 3131 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 00:05:51.031830 kubelet[3131]: W0904 00:05:51.031404 3131 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 00:05:51.031921 kubelet[3131]: E0904 00:05:51.031895 3131 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4372.1.0-n-f08c63113b\" already exists" pod="kube-system/kube-scheduler-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.074378 kubelet[3131]: I0904 00:05:51.073784 3131 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.090993 kubelet[3131]: I0904 00:05:51.090808 3131 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.090993 kubelet[3131]: I0904 00:05:51.090993 3131 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.116649 kubelet[3131]: I0904 00:05:51.116617 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b692af34f6273c7c898a1904b1dc7c5-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-f08c63113b\" (UID: \"0b692af34f6273c7c898a1904b1dc7c5\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.116649 
kubelet[3131]: I0904 00:05:51.116645 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b692af34f6273c7c898a1904b1dc7c5-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-f08c63113b\" (UID: \"0b692af34f6273c7c898a1904b1dc7c5\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.116740 kubelet[3131]: I0904 00:05:51.116663 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b692af34f6273c7c898a1904b1dc7c5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-f08c63113b\" (UID: \"0b692af34f6273c7c898a1904b1dc7c5\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.116740 kubelet[3131]: I0904 00:05:51.116675 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e5611c4e6e2a180358d0d044b611c15-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-f08c63113b\" (UID: \"8e5611c4e6e2a180358d0d044b611c15\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.116740 kubelet[3131]: I0904 00:05:51.116686 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e5611c4e6e2a180358d0d044b611c15-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-f08c63113b\" (UID: \"8e5611c4e6e2a180358d0d044b611c15\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.116740 kubelet[3131]: I0904 00:05:51.116700 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b692af34f6273c7c898a1904b1dc7c5-kubeconfig\") pod 
\"kube-controller-manager-ci-4372.1.0-n-f08c63113b\" (UID: \"0b692af34f6273c7c898a1904b1dc7c5\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.116740 kubelet[3131]: I0904 00:05:51.116713 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fa26ab31163669cb5833e6d452bf6fb-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-f08c63113b\" (UID: \"6fa26ab31163669cb5833e6d452bf6fb\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.116851 kubelet[3131]: I0904 00:05:51.116727 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e5611c4e6e2a180358d0d044b611c15-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-f08c63113b\" (UID: \"8e5611c4e6e2a180358d0d044b611c15\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.116851 kubelet[3131]: I0904 00:05:51.116741 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b692af34f6273c7c898a1904b1dc7c5-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-f08c63113b\" (UID: \"0b692af34f6273c7c898a1904b1dc7c5\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-f08c63113b" Sep 4 00:05:51.896451 kubelet[3131]: I0904 00:05:51.896431 3131 apiserver.go:52] "Watching apiserver" Sep 4 00:05:51.915219 kubelet[3131]: I0904 00:05:51.915178 3131 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 00:05:51.937020 kubelet[3131]: I0904 00:05:51.936969 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-n-f08c63113b" podStartSLOduration=2.9369462459999998 podStartE2EDuration="2.936946246s" 
podCreationTimestamp="2025-09-04 00:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:05:51.936900835 +0000 UTC m=+1.090748237" watchObservedRunningTime="2025-09-04 00:05:51.936946246 +0000 UTC m=+1.090793653" Sep 4 00:05:51.954304 kubelet[3131]: I0904 00:05:51.954054 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-f08c63113b" podStartSLOduration=0.954043239 podStartE2EDuration="954.043239ms" podCreationTimestamp="2025-09-04 00:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:05:51.945652577 +0000 UTC m=+1.099499977" watchObservedRunningTime="2025-09-04 00:05:51.954043239 +0000 UTC m=+1.107890643" Sep 4 00:05:51.964447 kubelet[3131]: I0904 00:05:51.964416 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.1.0-n-f08c63113b" podStartSLOduration=2.964404714 podStartE2EDuration="2.964404714s" podCreationTimestamp="2025-09-04 00:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:05:51.954818959 +0000 UTC m=+1.108666364" watchObservedRunningTime="2025-09-04 00:05:51.964404714 +0000 UTC m=+1.118252118" Sep 4 00:05:52.560247 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 4 00:05:55.680042 systemd[1]: Created slice kubepods-besteffort-pod12e8a813_b10d_4a0c_a43e_7c89ae05f8eb.slice - libcontainer container kubepods-besteffort-pod12e8a813_b10d_4a0c_a43e_7c89ae05f8eb.slice. 
Sep 4 00:05:55.698708 kubelet[3131]: I0904 00:05:55.698639 3131 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 00:05:55.700629 containerd[1717]: time="2025-09-04T00:05:55.698966916Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 00:05:55.700862 kubelet[3131]: I0904 00:05:55.699150 3131 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 00:05:55.747052 kubelet[3131]: I0904 00:05:55.747015 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljfvw\" (UniqueName: \"kubernetes.io/projected/12e8a813-b10d-4a0c-a43e-7c89ae05f8eb-kube-api-access-ljfvw\") pod \"kube-proxy-pwcsb\" (UID: \"12e8a813-b10d-4a0c-a43e-7c89ae05f8eb\") " pod="kube-system/kube-proxy-pwcsb" Sep 4 00:05:55.747181 kubelet[3131]: I0904 00:05:55.747056 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/12e8a813-b10d-4a0c-a43e-7c89ae05f8eb-kube-proxy\") pod \"kube-proxy-pwcsb\" (UID: \"12e8a813-b10d-4a0c-a43e-7c89ae05f8eb\") " pod="kube-system/kube-proxy-pwcsb" Sep 4 00:05:55.747181 kubelet[3131]: I0904 00:05:55.747072 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12e8a813-b10d-4a0c-a43e-7c89ae05f8eb-xtables-lock\") pod \"kube-proxy-pwcsb\" (UID: \"12e8a813-b10d-4a0c-a43e-7c89ae05f8eb\") " pod="kube-system/kube-proxy-pwcsb" Sep 4 00:05:55.747181 kubelet[3131]: I0904 00:05:55.747094 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12e8a813-b10d-4a0c-a43e-7c89ae05f8eb-lib-modules\") pod \"kube-proxy-pwcsb\" (UID: \"12e8a813-b10d-4a0c-a43e-7c89ae05f8eb\") " 
pod="kube-system/kube-proxy-pwcsb" Sep 4 00:05:55.851030 kubelet[3131]: E0904 00:05:55.851012 3131 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 00:05:55.851030 kubelet[3131]: E0904 00:05:55.851031 3131 projected.go:194] Error preparing data for projected volume kube-api-access-ljfvw for pod kube-system/kube-proxy-pwcsb: configmap "kube-root-ca.crt" not found Sep 4 00:05:55.851122 kubelet[3131]: E0904 00:05:55.851074 3131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e8a813-b10d-4a0c-a43e-7c89ae05f8eb-kube-api-access-ljfvw podName:12e8a813-b10d-4a0c-a43e-7c89ae05f8eb nodeName:}" failed. No retries permitted until 2025-09-04 00:05:56.351057605 +0000 UTC m=+5.504904999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ljfvw" (UniqueName: "kubernetes.io/projected/12e8a813-b10d-4a0c-a43e-7c89ae05f8eb-kube-api-access-ljfvw") pod "kube-proxy-pwcsb" (UID: "12e8a813-b10d-4a0c-a43e-7c89ae05f8eb") : configmap "kube-root-ca.crt" not found Sep 4 00:05:56.352165 kubelet[3131]: E0904 00:05:56.351826 3131 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 00:05:56.352165 kubelet[3131]: E0904 00:05:56.351845 3131 projected.go:194] Error preparing data for projected volume kube-api-access-ljfvw for pod kube-system/kube-proxy-pwcsb: configmap "kube-root-ca.crt" not found Sep 4 00:05:56.352165 kubelet[3131]: E0904 00:05:56.351875 3131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e8a813-b10d-4a0c-a43e-7c89ae05f8eb-kube-api-access-ljfvw podName:12e8a813-b10d-4a0c-a43e-7c89ae05f8eb nodeName:}" failed. No retries permitted until 2025-09-04 00:05:57.351862529 +0000 UTC m=+6.505709921 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ljfvw" (UniqueName: "kubernetes.io/projected/12e8a813-b10d-4a0c-a43e-7c89ae05f8eb-kube-api-access-ljfvw") pod "kube-proxy-pwcsb" (UID: "12e8a813-b10d-4a0c-a43e-7c89ae05f8eb") : configmap "kube-root-ca.crt" not found Sep 4 00:05:56.796045 systemd[1]: Created slice kubepods-besteffort-pod970e7a2c_647c_465f_a770_553beef45e17.slice - libcontainer container kubepods-besteffort-pod970e7a2c_647c_465f_a770_553beef45e17.slice. Sep 4 00:05:57.416226 kubelet[3131]: I0904 00:05:56.855127 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/970e7a2c-647c-465f-a770-553beef45e17-var-lib-calico\") pod \"tigera-operator-58fc44c59b-tnh2p\" (UID: \"970e7a2c-647c-465f-a770-553beef45e17\") " pod="tigera-operator/tigera-operator-58fc44c59b-tnh2p" Sep 4 00:05:57.416226 kubelet[3131]: I0904 00:05:56.855155 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwfc\" (UniqueName: \"kubernetes.io/projected/970e7a2c-647c-465f-a770-553beef45e17-kube-api-access-bwwfc\") pod \"tigera-operator-58fc44c59b-tnh2p\" (UID: \"970e7a2c-647c-465f-a770-553beef45e17\") " pod="tigera-operator/tigera-operator-58fc44c59b-tnh2p" Sep 4 00:05:57.488528 containerd[1717]: time="2025-09-04T00:05:57.488470994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pwcsb,Uid:12e8a813-b10d-4a0c-a43e-7c89ae05f8eb,Namespace:kube-system,Attempt:0,}" Sep 4 00:05:57.717892 containerd[1717]: time="2025-09-04T00:05:57.717815607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-tnh2p,Uid:970e7a2c-647c-465f-a770-553beef45e17,Namespace:tigera-operator,Attempt:0,}" Sep 4 00:06:03.890245 containerd[1717]: time="2025-09-04T00:06:03.890161786Z" level=info msg="connecting to shim 1840af9234fb3a93bcbcf616bde42fc3f25bd950ce26c7c8937239016386ed27" 
address="unix:///run/containerd/s/ab60e8c1e00019147fa2bd27b847a2f2a39f737906045b93fb116f5432ad7c24" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:03.890890 containerd[1717]: time="2025-09-04T00:06:03.890871304Z" level=info msg="connecting to shim f341fd69647d0a9444456c483869288bfc504825dcbbd423ba0b1b7bf18509ee" address="unix:///run/containerd/s/a7f93800622d2fc9046e5f7d522f8baaf29009beeab2f4d7cf317886cf390fb1" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:03.921772 systemd[1]: Started cri-containerd-1840af9234fb3a93bcbcf616bde42fc3f25bd950ce26c7c8937239016386ed27.scope - libcontainer container 1840af9234fb3a93bcbcf616bde42fc3f25bd950ce26c7c8937239016386ed27. Sep 4 00:06:03.923267 systemd[1]: Started cri-containerd-f341fd69647d0a9444456c483869288bfc504825dcbbd423ba0b1b7bf18509ee.scope - libcontainer container f341fd69647d0a9444456c483869288bfc504825dcbbd423ba0b1b7bf18509ee. Sep 4 00:06:03.950261 containerd[1717]: time="2025-09-04T00:06:03.950239998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pwcsb,Uid:12e8a813-b10d-4a0c-a43e-7c89ae05f8eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1840af9234fb3a93bcbcf616bde42fc3f25bd950ce26c7c8937239016386ed27\"" Sep 4 00:06:03.954046 containerd[1717]: time="2025-09-04T00:06:03.953822018Z" level=info msg="CreateContainer within sandbox \"1840af9234fb3a93bcbcf616bde42fc3f25bd950ce26c7c8937239016386ed27\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 00:06:04.122099 containerd[1717]: time="2025-09-04T00:06:04.122075921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-tnh2p,Uid:970e7a2c-647c-465f-a770-553beef45e17,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f341fd69647d0a9444456c483869288bfc504825dcbbd423ba0b1b7bf18509ee\"" Sep 4 00:06:04.123409 containerd[1717]: time="2025-09-04T00:06:04.123394661Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 4 00:06:04.223167 containerd[1717]: 
time="2025-09-04T00:06:04.223113450Z" level=info msg="Container e5bc56801087d65e0be7db2b9ef868f4d0964dffc10c26867026753460aa079c: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:04.423747 containerd[1717]: time="2025-09-04T00:06:04.423711020Z" level=info msg="CreateContainer within sandbox \"1840af9234fb3a93bcbcf616bde42fc3f25bd950ce26c7c8937239016386ed27\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e5bc56801087d65e0be7db2b9ef868f4d0964dffc10c26867026753460aa079c\"" Sep 4 00:06:04.424147 containerd[1717]: time="2025-09-04T00:06:04.424131494Z" level=info msg="StartContainer for \"e5bc56801087d65e0be7db2b9ef868f4d0964dffc10c26867026753460aa079c\"" Sep 4 00:06:04.425491 containerd[1717]: time="2025-09-04T00:06:04.425469616Z" level=info msg="connecting to shim e5bc56801087d65e0be7db2b9ef868f4d0964dffc10c26867026753460aa079c" address="unix:///run/containerd/s/ab60e8c1e00019147fa2bd27b847a2f2a39f737906045b93fb116f5432ad7c24" protocol=ttrpc version=3 Sep 4 00:06:04.441740 systemd[1]: Started cri-containerd-e5bc56801087d65e0be7db2b9ef868f4d0964dffc10c26867026753460aa079c.scope - libcontainer container e5bc56801087d65e0be7db2b9ef868f4d0964dffc10c26867026753460aa079c. 
Sep 4 00:06:04.468621 containerd[1717]: time="2025-09-04T00:06:04.468587710Z" level=info msg="StartContainer for \"e5bc56801087d65e0be7db2b9ef868f4d0964dffc10c26867026753460aa079c\" returns successfully" Sep 4 00:06:05.462334 kubelet[3131]: I0904 00:06:05.462281 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pwcsb" podStartSLOduration=10.46226926 podStartE2EDuration="10.46226926s" podCreationTimestamp="2025-09-04 00:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:04.983154633 +0000 UTC m=+14.137002055" watchObservedRunningTime="2025-09-04 00:06:05.46226926 +0000 UTC m=+14.616116658" Sep 4 00:06:06.544263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622048373.mount: Deactivated successfully. Sep 4 00:06:07.457275 containerd[1717]: time="2025-09-04T00:06:07.457249392Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:07.459779 containerd[1717]: time="2025-09-04T00:06:07.459753681Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 4 00:06:07.462495 containerd[1717]: time="2025-09-04T00:06:07.462464919Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:07.466458 containerd[1717]: time="2025-09-04T00:06:07.466422354Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:07.466831 containerd[1717]: time="2025-09-04T00:06:07.466749463Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id 
\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.343263235s" Sep 4 00:06:07.466831 containerd[1717]: time="2025-09-04T00:06:07.466773350Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 4 00:06:07.468684 containerd[1717]: time="2025-09-04T00:06:07.468655716Z" level=info msg="CreateContainer within sandbox \"f341fd69647d0a9444456c483869288bfc504825dcbbd423ba0b1b7bf18509ee\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 00:06:07.673003 containerd[1717]: time="2025-09-04T00:06:07.672060409Z" level=info msg="Container 3b7ce0356c6d4c7ab8c436f8d862e2eb270c01a3ff9621e16747befbbede276a: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:07.672326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680678398.mount: Deactivated successfully. 
Sep 4 00:06:07.769883 containerd[1717]: time="2025-09-04T00:06:07.769629228Z" level=info msg="CreateContainer within sandbox \"f341fd69647d0a9444456c483869288bfc504825dcbbd423ba0b1b7bf18509ee\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3b7ce0356c6d4c7ab8c436f8d862e2eb270c01a3ff9621e16747befbbede276a\"" Sep 4 00:06:07.770314 containerd[1717]: time="2025-09-04T00:06:07.770292293Z" level=info msg="StartContainer for \"3b7ce0356c6d4c7ab8c436f8d862e2eb270c01a3ff9621e16747befbbede276a\"" Sep 4 00:06:07.771089 containerd[1717]: time="2025-09-04T00:06:07.771046819Z" level=info msg="connecting to shim 3b7ce0356c6d4c7ab8c436f8d862e2eb270c01a3ff9621e16747befbbede276a" address="unix:///run/containerd/s/a7f93800622d2fc9046e5f7d522f8baaf29009beeab2f4d7cf317886cf390fb1" protocol=ttrpc version=3 Sep 4 00:06:07.790754 systemd[1]: Started cri-containerd-3b7ce0356c6d4c7ab8c436f8d862e2eb270c01a3ff9621e16747befbbede276a.scope - libcontainer container 3b7ce0356c6d4c7ab8c436f8d862e2eb270c01a3ff9621e16747befbbede276a. 
Sep 4 00:06:07.815017 containerd[1717]: time="2025-09-04T00:06:07.814990412Z" level=info msg="StartContainer for \"3b7ce0356c6d4c7ab8c436f8d862e2eb270c01a3ff9621e16747befbbede276a\" returns successfully" Sep 4 00:06:07.986983 kubelet[3131]: I0904 00:06:07.986940 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-tnh2p" podStartSLOduration=8.642369904 podStartE2EDuration="11.986928032s" podCreationTimestamp="2025-09-04 00:05:56 +0000 UTC" firstStartedPulling="2025-09-04 00:06:04.122862143 +0000 UTC m=+13.276709534" lastFinishedPulling="2025-09-04 00:06:07.467420258 +0000 UTC m=+16.621267662" observedRunningTime="2025-09-04 00:06:07.986758559 +0000 UTC m=+17.140605978" watchObservedRunningTime="2025-09-04 00:06:07.986928032 +0000 UTC m=+17.140775434" Sep 4 00:06:13.084939 sudo[2133]: pam_unix(sudo:session): session closed for user root Sep 4 00:06:13.189427 sshd[2132]: Connection closed by 10.200.16.10 port 45456 Sep 4 00:06:13.188549 sshd-session[2130]: pam_unix(sshd:session): session closed for user core Sep 4 00:06:13.192298 systemd[1]: sshd@6-10.200.8.18:22-10.200.16.10:45456.service: Deactivated successfully. Sep 4 00:06:13.196031 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 00:06:13.196366 systemd[1]: session-9.scope: Consumed 3.076s CPU time, 222.6M memory peak. Sep 4 00:06:13.199275 systemd-logind[1692]: Session 9 logged out. Waiting for processes to exit. Sep 4 00:06:13.201628 systemd-logind[1692]: Removed session 9. Sep 4 00:06:16.189273 systemd[1]: Created slice kubepods-besteffort-pod9f9278b5_ace2_49aa_9984_9e0b10870e71.slice - libcontainer container kubepods-besteffort-pod9f9278b5_ace2_49aa_9984_9e0b10870e71.slice. 
Sep 4 00:06:16.284433 kubelet[3131]: I0904 00:06:16.284403 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f9278b5-ace2-49aa-9984-9e0b10870e71-tigera-ca-bundle\") pod \"calico-typha-cdd7d9fdf-bp8jj\" (UID: \"9f9278b5-ace2-49aa-9984-9e0b10870e71\") " pod="calico-system/calico-typha-cdd7d9fdf-bp8jj" Sep 4 00:06:16.285182 kubelet[3131]: I0904 00:06:16.285137 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9f9278b5-ace2-49aa-9984-9e0b10870e71-typha-certs\") pod \"calico-typha-cdd7d9fdf-bp8jj\" (UID: \"9f9278b5-ace2-49aa-9984-9e0b10870e71\") " pod="calico-system/calico-typha-cdd7d9fdf-bp8jj" Sep 4 00:06:16.285182 kubelet[3131]: I0904 00:06:16.285166 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpxzl\" (UniqueName: \"kubernetes.io/projected/9f9278b5-ace2-49aa-9984-9e0b10870e71-kube-api-access-qpxzl\") pod \"calico-typha-cdd7d9fdf-bp8jj\" (UID: \"9f9278b5-ace2-49aa-9984-9e0b10870e71\") " pod="calico-system/calico-typha-cdd7d9fdf-bp8jj" Sep 4 00:06:16.425542 systemd[1]: Created slice kubepods-besteffort-poda13a5ba2_2d4a_4484_be50_9c9c2f49a9b3.slice - libcontainer container kubepods-besteffort-poda13a5ba2_2d4a_4484_be50_9c9c2f49a9b3.slice. 
Sep 4 00:06:16.486663 kubelet[3131]: I0904 00:06:16.486353 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-cni-log-dir\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.486663 kubelet[3131]: I0904 00:06:16.486418 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-var-lib-calico\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.486663 kubelet[3131]: I0904 00:06:16.486447 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx6jp\" (UniqueName: \"kubernetes.io/projected/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-kube-api-access-kx6jp\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.486663 kubelet[3131]: I0904 00:06:16.486465 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-cni-net-dir\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.486663 kubelet[3131]: I0904 00:06:16.486478 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-var-run-calico\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.486785 kubelet[3131]: I0904 00:06:16.486493 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-lib-modules\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.486785 kubelet[3131]: I0904 00:06:16.486507 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-policysync\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.486785 kubelet[3131]: I0904 00:06:16.486531 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-tigera-ca-bundle\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.486785 kubelet[3131]: I0904 00:06:16.486545 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-cni-bin-dir\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.486785 kubelet[3131]: I0904 00:06:16.486560 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-flexvol-driver-host\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.486855 kubelet[3131]: I0904 00:06:16.486574 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-xtables-lock\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.486855 kubelet[3131]: I0904 00:06:16.486595 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3-node-certs\") pod \"calico-node-vtnsf\" (UID: \"a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3\") " pod="calico-system/calico-node-vtnsf"
Sep 4 00:06:16.493851 containerd[1717]: time="2025-09-04T00:06:16.493819763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cdd7d9fdf-bp8jj,Uid:9f9278b5-ace2-49aa-9984-9e0b10870e71,Namespace:calico-system,Attempt:0,}"
Sep 4 00:06:16.532077 containerd[1717]: time="2025-09-04T00:06:16.532044802Z" level=info msg="connecting to shim 6ba23e661a4e78b95031d73a5817abc23887a139e00e0649845ed5752fdef5fe" address="unix:///run/containerd/s/ae1ed883aac851cea4279d046644d59f31a35442d54165319d7cbeb13d8ac283" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:06:16.553738 systemd[1]: Started cri-containerd-6ba23e661a4e78b95031d73a5817abc23887a139e00e0649845ed5752fdef5fe.scope - libcontainer container 6ba23e661a4e78b95031d73a5817abc23887a139e00e0649845ed5752fdef5fe.
Sep 4 00:06:16.591122 containerd[1717]: time="2025-09-04T00:06:16.591005512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cdd7d9fdf-bp8jj,Uid:9f9278b5-ace2-49aa-9984-9e0b10870e71,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ba23e661a4e78b95031d73a5817abc23887a139e00e0649845ed5752fdef5fe\"" Sep 4 00:06:16.594595 kubelet[3131]: E0904 00:06:16.594582 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.594719 kubelet[3131]: W0904 00:06:16.594709 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.594817 kubelet[3131]: E0904 00:06:16.594807 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.595105 containerd[1717]: time="2025-09-04T00:06:16.595079441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 4 00:06:16.597975 kubelet[3131]: E0904 00:06:16.597957 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.597975 kubelet[3131]: W0904 00:06:16.597975 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.598365 kubelet[3131]: E0904 00:06:16.598350 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.703155 kubelet[3131]: E0904 00:06:16.703128 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4rms" podUID="dad91c5e-f1ce-4b93-bc07-61538ab43fa7" Sep 4 00:06:16.732706 containerd[1717]: time="2025-09-04T00:06:16.732683277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vtnsf,Uid:a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:16.769590 containerd[1717]: time="2025-09-04T00:06:16.769346191Z" level=info msg="connecting to shim e35a6348724be6c328e0af4d74e98e29172f72b298f180e4f230985bd31ed4fb" address="unix:///run/containerd/s/6ba3bbf8f083203a168f606d56abbe724afbaced08a4c7ae547f940d2b779690" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:16.772484 kubelet[3131]: E0904 00:06:16.772467 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.772484 kubelet[3131]: W0904 00:06:16.772484 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.772583 kubelet[3131]: E0904 00:06:16.772499 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.774103 kubelet[3131]: E0904 00:06:16.774078 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.774103 kubelet[3131]: W0904 00:06:16.774098 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.774189 kubelet[3131]: E0904 00:06:16.774112 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.774517 kubelet[3131]: E0904 00:06:16.774475 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.774517 kubelet[3131]: W0904 00:06:16.774485 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.774517 kubelet[3131]: E0904 00:06:16.774498 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.774794 kubelet[3131]: E0904 00:06:16.774780 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.774917 kubelet[3131]: W0904 00:06:16.774795 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.774917 kubelet[3131]: E0904 00:06:16.774805 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.775132 kubelet[3131]: E0904 00:06:16.775079 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.775132 kubelet[3131]: W0904 00:06:16.775089 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.775132 kubelet[3131]: E0904 00:06:16.775099 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.775402 kubelet[3131]: E0904 00:06:16.775392 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.775432 kubelet[3131]: W0904 00:06:16.775404 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.775432 kubelet[3131]: E0904 00:06:16.775414 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.775556 kubelet[3131]: E0904 00:06:16.775518 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.775556 kubelet[3131]: W0904 00:06:16.775523 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.775556 kubelet[3131]: E0904 00:06:16.775529 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.775636 kubelet[3131]: E0904 00:06:16.775620 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.775636 kubelet[3131]: W0904 00:06:16.775624 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.775636 kubelet[3131]: E0904 00:06:16.775630 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.775851 kubelet[3131]: E0904 00:06:16.775711 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.775851 kubelet[3131]: W0904 00:06:16.775715 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.775851 kubelet[3131]: E0904 00:06:16.775720 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.775851 kubelet[3131]: E0904 00:06:16.775790 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.775851 kubelet[3131]: W0904 00:06:16.775793 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.775851 kubelet[3131]: E0904 00:06:16.775798 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.775985 kubelet[3131]: E0904 00:06:16.775863 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.775985 kubelet[3131]: W0904 00:06:16.775867 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.775985 kubelet[3131]: E0904 00:06:16.775872 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.775985 kubelet[3131]: E0904 00:06:16.775938 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.775985 kubelet[3131]: W0904 00:06:16.775942 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.775985 kubelet[3131]: E0904 00:06:16.775947 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.776099 kubelet[3131]: E0904 00:06:16.776019 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.776099 kubelet[3131]: W0904 00:06:16.776023 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.776099 kubelet[3131]: E0904 00:06:16.776028 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.776099 kubelet[3131]: E0904 00:06:16.776098 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.776184 kubelet[3131]: W0904 00:06:16.776102 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.776184 kubelet[3131]: E0904 00:06:16.776107 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.776184 kubelet[3131]: E0904 00:06:16.776172 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.776184 kubelet[3131]: W0904 00:06:16.776176 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.776184 kubelet[3131]: E0904 00:06:16.776181 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.776277 kubelet[3131]: E0904 00:06:16.776245 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.776277 kubelet[3131]: W0904 00:06:16.776249 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.776277 kubelet[3131]: E0904 00:06:16.776253 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.776333 kubelet[3131]: E0904 00:06:16.776327 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.776333 kubelet[3131]: W0904 00:06:16.776331 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.776374 kubelet[3131]: E0904 00:06:16.776336 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.776852 kubelet[3131]: E0904 00:06:16.776405 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.776852 kubelet[3131]: W0904 00:06:16.776410 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.776852 kubelet[3131]: E0904 00:06:16.776415 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.776852 kubelet[3131]: E0904 00:06:16.776495 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.776852 kubelet[3131]: W0904 00:06:16.776499 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.776852 kubelet[3131]: E0904 00:06:16.776504 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.776852 kubelet[3131]: E0904 00:06:16.776596 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.776852 kubelet[3131]: W0904 00:06:16.776614 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.776852 kubelet[3131]: E0904 00:06:16.776620 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.787965 kubelet[3131]: E0904 00:06:16.787937 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.787965 kubelet[3131]: W0904 00:06:16.787961 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.788058 kubelet[3131]: E0904 00:06:16.787973 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.788080 kubelet[3131]: I0904 00:06:16.788055 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dad91c5e-f1ce-4b93-bc07-61538ab43fa7-varrun\") pod \"csi-node-driver-t4rms\" (UID: \"dad91c5e-f1ce-4b93-bc07-61538ab43fa7\") " pod="calico-system/csi-node-driver-t4rms" Sep 4 00:06:16.788447 kubelet[3131]: E0904 00:06:16.788434 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.788447 kubelet[3131]: W0904 00:06:16.788444 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.788509 kubelet[3131]: E0904 00:06:16.788455 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.788509 kubelet[3131]: I0904 00:06:16.788472 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dad91c5e-f1ce-4b93-bc07-61538ab43fa7-kubelet-dir\") pod \"csi-node-driver-t4rms\" (UID: \"dad91c5e-f1ce-4b93-bc07-61538ab43fa7\") " pod="calico-system/csi-node-driver-t4rms" Sep 4 00:06:16.788678 kubelet[3131]: E0904 00:06:16.788664 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.788678 kubelet[3131]: W0904 00:06:16.788674 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.788731 kubelet[3131]: E0904 00:06:16.788688 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.788731 kubelet[3131]: I0904 00:06:16.788701 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dad91c5e-f1ce-4b93-bc07-61538ab43fa7-registration-dir\") pod \"csi-node-driver-t4rms\" (UID: \"dad91c5e-f1ce-4b93-bc07-61538ab43fa7\") " pod="calico-system/csi-node-driver-t4rms" Sep 4 00:06:16.788850 kubelet[3131]: E0904 00:06:16.788838 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.788850 kubelet[3131]: W0904 00:06:16.788845 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.788906 kubelet[3131]: E0904 00:06:16.788858 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.788906 kubelet[3131]: I0904 00:06:16.788870 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6czwz\" (UniqueName: \"kubernetes.io/projected/dad91c5e-f1ce-4b93-bc07-61538ab43fa7-kube-api-access-6czwz\") pod \"csi-node-driver-t4rms\" (UID: \"dad91c5e-f1ce-4b93-bc07-61538ab43fa7\") " pod="calico-system/csi-node-driver-t4rms" Sep 4 00:06:16.789509 kubelet[3131]: E0904 00:06:16.789478 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.789509 kubelet[3131]: W0904 00:06:16.789504 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.789589 kubelet[3131]: E0904 00:06:16.789523 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.789589 kubelet[3131]: I0904 00:06:16.789540 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dad91c5e-f1ce-4b93-bc07-61538ab43fa7-socket-dir\") pod \"csi-node-driver-t4rms\" (UID: \"dad91c5e-f1ce-4b93-bc07-61538ab43fa7\") " pod="calico-system/csi-node-driver-t4rms" Sep 4 00:06:16.790276 kubelet[3131]: E0904 00:06:16.790261 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.790276 kubelet[3131]: W0904 00:06:16.790275 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.790465 kubelet[3131]: E0904 00:06:16.790438 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.791095 kubelet[3131]: E0904 00:06:16.791000 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.791095 kubelet[3131]: W0904 00:06:16.791013 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.791439 kubelet[3131]: E0904 00:06:16.791416 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.791562 kubelet[3131]: E0904 00:06:16.791542 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.791562 kubelet[3131]: W0904 00:06:16.791563 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.791777 kubelet[3131]: E0904 00:06:16.791748 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.791971 kubelet[3131]: E0904 00:06:16.791953 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.791971 kubelet[3131]: W0904 00:06:16.791967 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.792460 kubelet[3131]: E0904 00:06:16.792442 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.792577 kubelet[3131]: E0904 00:06:16.792564 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.792577 kubelet[3131]: W0904 00:06:16.792574 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.792925 kubelet[3131]: E0904 00:06:16.792902 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.794013 kubelet[3131]: E0904 00:06:16.793991 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.794013 kubelet[3131]: W0904 00:06:16.794006 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.794107 kubelet[3131]: E0904 00:06:16.794018 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.796808 kubelet[3131]: E0904 00:06:16.796791 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.796808 kubelet[3131]: W0904 00:06:16.796804 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.796892 kubelet[3131]: E0904 00:06:16.796816 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.797010 kubelet[3131]: E0904 00:06:16.796996 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.797010 kubelet[3131]: W0904 00:06:16.797005 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.797064 kubelet[3131]: E0904 00:06:16.797013 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:16.797188 kubelet[3131]: E0904 00:06:16.797177 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.797188 kubelet[3131]: W0904 00:06:16.797185 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.797234 kubelet[3131]: E0904 00:06:16.797192 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.797318 kubelet[3131]: E0904 00:06:16.797307 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.797318 kubelet[3131]: W0904 00:06:16.797314 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.797363 kubelet[3131]: E0904 00:06:16.797321 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.798775 systemd[1]: Started cri-containerd-e35a6348724be6c328e0af4d74e98e29172f72b298f180e4f230985bd31ed4fb.scope - libcontainer container e35a6348724be6c328e0af4d74e98e29172f72b298f180e4f230985bd31ed4fb. 
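The repeated kubelet entries above all report one underlying fault: the FlexVolume prober finds the driver directory `nodeagent~uds` under `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/`, but its `uds` executable cannot be run ("executable file not found in $PATH"), so the driver's `init` call produces empty output and kubelet's JSON unmarshal fails with "unexpected end of JSON input". A minimal sketch of that failure mode (illustrative only; `parse_driver_output` is a hypothetical stand-in, not kubelet's actual Go code):

```python
import json

def parse_driver_output(output: str) -> dict:
    """Mimic kubelet's FlexVolume driver-call handling: the driver's
    stdout must be a JSON status object, so the empty string left by a
    driver that never ran fails to unmarshal."""
    try:
        return json.loads(output)
    except json.JSONDecodeError as exc:
        # Analogue of the Go-side "unexpected end of JSON input" error.
        raise RuntimeError(f"failed to unmarshal driver output: {exc}") from exc
```

A driver that exits without writing anything reproduces the logged error, while a well-formed reply such as `{"status": "Success"}` parses cleanly.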
Sep 4 00:06:16.834749 containerd[1717]: time="2025-09-04T00:06:16.834726948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vtnsf,Uid:a13a5ba2-2d4a-4484-be50-9c9c2f49a9b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"e35a6348724be6c328e0af4d74e98e29172f72b298f180e4f230985bd31ed4fb\"" Sep 4 00:06:16.890399 kubelet[3131]: E0904 00:06:16.890383 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.890399 kubelet[3131]: W0904 00:06:16.890396 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.890585 kubelet[3131]: E0904 00:06:16.890408 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:16.890700 kubelet[3131]: E0904 00:06:16.890690 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:16.890731 kubelet[3131]: W0904 00:06:16.890699 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:16.890731 kubelet[3131]: E0904 00:06:16.890717 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 00:06:16.898520 kubelet[3131]: E0904 00:06:16.898506 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 00:06:16.898520 kubelet[3131]: W0904 00:06:16.898517 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 00:06:16.898611 kubelet[3131]: E0904 00:06:16.898528 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 00:06:17.872069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421589111.mount: Deactivated successfully.
Sep 4 00:06:18.407402 containerd[1717]: time="2025-09-04T00:06:18.407379873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:06:18.410628 containerd[1717]: time="2025-09-04T00:06:18.410560245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 4 00:06:18.414170 containerd[1717]: time="2025-09-04T00:06:18.414140436Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:06:18.417583 containerd[1717]: time="2025-09-04T00:06:18.417540624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:06:18.418150 containerd[1717]: time="2025-09-04T00:06:18.418071619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.822902828s"
Sep 4 00:06:18.418150 containerd[1717]: time="2025-09-04T00:06:18.418096511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 4 00:06:18.418826 containerd[1717]: time="2025-09-04T00:06:18.418782107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 4 00:06:18.428432 containerd[1717]: time="2025-09-04T00:06:18.428410532Z" level=info msg="CreateContainer within sandbox \"6ba23e661a4e78b95031d73a5817abc23887a139e00e0649845ed5752fdef5fe\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 4 00:06:18.449557 containerd[1717]: time="2025-09-04T00:06:18.448758808Z" level=info msg="Container b660e6471d107b90880e227675871d4df160354f767076ae888ccfc8fd6c9dfa: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:06:18.467287 containerd[1717]: time="2025-09-04T00:06:18.467239840Z" level=info msg="CreateContainer within sandbox \"6ba23e661a4e78b95031d73a5817abc23887a139e00e0649845ed5752fdef5fe\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b660e6471d107b90880e227675871d4df160354f767076ae888ccfc8fd6c9dfa\""
Sep 4 00:06:18.468506 containerd[1717]: time="2025-09-04T00:06:18.468481516Z" level=info msg="StartContainer for \"b660e6471d107b90880e227675871d4df160354f767076ae888ccfc8fd6c9dfa\""
Sep 4 00:06:18.469657 containerd[1717]: time="2025-09-04T00:06:18.469597810Z" level=info msg="connecting to shim b660e6471d107b90880e227675871d4df160354f767076ae888ccfc8fd6c9dfa" address="unix:///run/containerd/s/ae1ed883aac851cea4279d046644d59f31a35442d54165319d7cbeb13d8ac283" protocol=ttrpc version=3
Sep 4 00:06:18.490739 systemd[1]: Started cri-containerd-b660e6471d107b90880e227675871d4df160354f767076ae888ccfc8fd6c9dfa.scope - libcontainer container b660e6471d107b90880e227675871d4df160354f767076ae888ccfc8fd6c9dfa.
Sep 4 00:06:18.534980 containerd[1717]: time="2025-09-04T00:06:18.534958261Z" level=info msg="StartContainer for \"b660e6471d107b90880e227675871d4df160354f767076ae888ccfc8fd6c9dfa\" returns successfully"
Sep 4 00:06:18.922031 kubelet[3131]: E0904 00:06:18.920971 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4rms" podUID="dad91c5e-f1ce-4b93-bc07-61538ab43fa7"
Sep 4 00:06:19.093143 kubelet[3131]: E0904 00:06:19.093125 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 00:06:19.093143 kubelet[3131]: W0904 00:06:19.093140 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.093277 kubelet[3131]: E0904 00:06:19.093154 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:19.102998 kubelet[3131]: E0904 00:06:19.102969 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.102998 kubelet[3131]: W0904 00:06:19.102995 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.103120 kubelet[3131]: E0904 00:06:19.103007 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:19.103120 kubelet[3131]: E0904 00:06:19.103117 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.103165 kubelet[3131]: W0904 00:06:19.103122 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.103165 kubelet[3131]: E0904 00:06:19.103129 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:19.103338 kubelet[3131]: E0904 00:06:19.103316 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.103338 kubelet[3131]: W0904 00:06:19.103337 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.103386 kubelet[3131]: E0904 00:06:19.103346 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:19.103484 kubelet[3131]: E0904 00:06:19.103461 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.103505 kubelet[3131]: W0904 00:06:19.103487 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.103505 kubelet[3131]: E0904 00:06:19.103495 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:19.103591 kubelet[3131]: E0904 00:06:19.103583 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.103591 kubelet[3131]: W0904 00:06:19.103589 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.103664 kubelet[3131]: E0904 00:06:19.103597 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:19.103694 kubelet[3131]: E0904 00:06:19.103678 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.103694 kubelet[3131]: W0904 00:06:19.103683 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.103694 kubelet[3131]: E0904 00:06:19.103691 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:19.103803 kubelet[3131]: E0904 00:06:19.103781 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.103803 kubelet[3131]: W0904 00:06:19.103801 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.103845 kubelet[3131]: E0904 00:06:19.103808 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:19.104078 kubelet[3131]: E0904 00:06:19.104052 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.104078 kubelet[3131]: W0904 00:06:19.104076 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.104138 kubelet[3131]: E0904 00:06:19.104088 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:19.104198 kubelet[3131]: E0904 00:06:19.104175 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.104198 kubelet[3131]: W0904 00:06:19.104196 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.104262 kubelet[3131]: E0904 00:06:19.104202 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:19.104294 kubelet[3131]: E0904 00:06:19.104276 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.104294 kubelet[3131]: W0904 00:06:19.104280 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.104365 kubelet[3131]: E0904 00:06:19.104360 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.104420 kubelet[3131]: W0904 00:06:19.104367 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.104420 kubelet[3131]: E0904 00:06:19.104358 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:19.104420 kubelet[3131]: E0904 00:06:19.104403 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:19.104486 kubelet[3131]: E0904 00:06:19.104433 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.104486 kubelet[3131]: W0904 00:06:19.104437 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.104486 kubelet[3131]: E0904 00:06:19.104444 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:19.104550 kubelet[3131]: E0904 00:06:19.104535 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.104550 kubelet[3131]: W0904 00:06:19.104539 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.104550 kubelet[3131]: E0904 00:06:19.104544 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:19.104851 kubelet[3131]: E0904 00:06:19.104697 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.104851 kubelet[3131]: W0904 00:06:19.104704 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.104851 kubelet[3131]: E0904 00:06:19.104724 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:19.104960 kubelet[3131]: E0904 00:06:19.104937 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.104960 kubelet[3131]: W0904 00:06:19.104958 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.105006 kubelet[3131]: E0904 00:06:19.104976 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:19.105169 kubelet[3131]: E0904 00:06:19.105147 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.105169 kubelet[3131]: W0904 00:06:19.105168 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.105212 kubelet[3131]: E0904 00:06:19.105177 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:19.105438 kubelet[3131]: E0904 00:06:19.105415 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.105438 kubelet[3131]: W0904 00:06:19.105437 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.105507 kubelet[3131]: E0904 00:06:19.105445 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 00:06:19.105541 kubelet[3131]: E0904 00:06:19.105537 3131 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 00:06:19.105560 kubelet[3131]: W0904 00:06:19.105542 3131 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 00:06:19.105560 kubelet[3131]: E0904 00:06:19.105548 3131 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 00:06:19.576359 containerd[1717]: time="2025-09-04T00:06:19.576332285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:19.578542 containerd[1717]: time="2025-09-04T00:06:19.578413082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 4 00:06:19.581017 containerd[1717]: time="2025-09-04T00:06:19.580996293Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:19.584669 containerd[1717]: time="2025-09-04T00:06:19.584646406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:19.585101 containerd[1717]: time="2025-09-04T00:06:19.585081532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.165924466s" Sep 4 00:06:19.585163 containerd[1717]: time="2025-09-04T00:06:19.585152728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 4 00:06:19.587495 containerd[1717]: time="2025-09-04T00:06:19.587467826Z" level=info msg="CreateContainer within sandbox \"e35a6348724be6c328e0af4d74e98e29172f72b298f180e4f230985bd31ed4fb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 00:06:19.605402 containerd[1717]: time="2025-09-04T00:06:19.602591135Z" level=info msg="Container 6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:19.618806 containerd[1717]: time="2025-09-04T00:06:19.618785143Z" level=info msg="CreateContainer within sandbox \"e35a6348724be6c328e0af4d74e98e29172f72b298f180e4f230985bd31ed4fb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233\"" Sep 4 00:06:19.619275 containerd[1717]: time="2025-09-04T00:06:19.619096122Z" level=info msg="StartContainer for \"6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233\"" Sep 4 00:06:19.620489 containerd[1717]: time="2025-09-04T00:06:19.620453937Z" level=info msg="connecting to shim 6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233" address="unix:///run/containerd/s/6ba3bbf8f083203a168f606d56abbe724afbaced08a4c7ae547f940d2b779690" protocol=ttrpc version=3 Sep 4 00:06:19.637768 systemd[1]: Started cri-containerd-6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233.scope - libcontainer container 6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233. 
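The repeated kubelet FlexVolume errors earlier in this log come from driver-call.go attempting to JSON-unmarshal the stdout of a driver binary (`/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds`) that is not present: the output is empty, and Go's `encoding/json` returns exactly the logged message for empty input. A minimal sketch reproducing the error string (the helper name `driverCallError` is mine, not kubelet's):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverCallError mimics the failure mode in kubelet's driver-call.go: the
// driver binary is missing, so its "output" is empty, and unmarshalling the
// empty string fails. (Hypothetical helper, not actual kubelet code.)
func driverCallError(output string) string {
	var status map[string]interface{}
	if err := json.Unmarshal([]byte(output), &status); err != nil {
		return err.Error()
	}
	return ""
}

func main() {
	// Empty driver output reproduces the message seen throughout the log.
	fmt.Println(driverCallError(""))
	// A well-formed init response, by contrast, parses cleanly.
	fmt.Println(driverCallError(`{"status":"Success","capabilities":{"attach":false}}`))
}
```

This is why every probe of the `nodeagent~uds` directory logs both "executable file not found in $PATH" and "unexpected end of JSON input": the first is the exec failure, the second is the parse of the resulting empty output.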
Sep 4 00:06:19.666273 containerd[1717]: time="2025-09-04T00:06:19.666249491Z" level=info msg="StartContainer for \"6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233\" returns successfully" Sep 4 00:06:19.674165 systemd[1]: cri-containerd-6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233.scope: Deactivated successfully. Sep 4 00:06:19.678375 containerd[1717]: time="2025-09-04T00:06:19.678176616Z" level=info msg="received exit event container_id:\"6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233\" id:\"6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233\" pid:3793 exited_at:{seconds:1756944379 nanos:677866402}" Sep 4 00:06:19.678375 containerd[1717]: time="2025-09-04T00:06:19.678354646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233\" id:\"6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233\" pid:3793 exited_at:{seconds:1756944379 nanos:677866402}" Sep 4 00:06:19.694452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6834b28f41125ede1750d0d8679b55d88a0338879d7dc975230e379798a2c233-rootfs.mount: Deactivated successfully. 
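The containerd TaskExit events above carry timestamps as `exited_at:{seconds:... nanos:...}` pairs rather than wall-clock strings. A small sketch (helper name `exitTime` is mine) converting the flexvol-driver container's exit event back to UTC, which lines up with the surrounding journal timestamps around 00:06:19.67:

```go
package main

import (
	"fmt"
	"time"
)

// exitTime converts a containerd TaskExit timestamp (seconds plus
// nanoseconds since the Unix epoch) into a UTC wall-clock string.
func exitTime(seconds, nanos int64) string {
	return time.Unix(seconds, nanos).UTC().Format("2006-01-02 15:04:05.000000000")
}

func main() {
	// exited_at from the 6834b28f... container above: seconds:1756944379 nanos:677866402
	fmt.Println(exitTime(1756944379, 677866402))
}
```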
Sep 4 00:06:20.003742 kubelet[3131]: I0904 00:06:20.003728 3131 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:06:20.017875 kubelet[3131]: I0904 00:06:20.017808 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cdd7d9fdf-bp8jj" podStartSLOduration=2.193531887 podStartE2EDuration="4.017795963s" podCreationTimestamp="2025-09-04 00:06:16 +0000 UTC" firstStartedPulling="2025-09-04 00:06:16.594534398 +0000 UTC m=+25.748381790" lastFinishedPulling="2025-09-04 00:06:18.418798461 +0000 UTC m=+27.572645866" observedRunningTime="2025-09-04 00:06:19.011778118 +0000 UTC m=+28.165625521" watchObservedRunningTime="2025-09-04 00:06:20.017795963 +0000 UTC m=+29.171643367" Sep 4 00:06:20.923035 kubelet[3131]: E0904 00:06:20.922935 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4rms" podUID="dad91c5e-f1ce-4b93-bc07-61538ab43fa7" Sep 4 00:06:22.010701 containerd[1717]: time="2025-09-04T00:06:22.010668813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 4 00:06:22.922061 kubelet[3131]: E0904 00:06:22.921449 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4rms" podUID="dad91c5e-f1ce-4b93-bc07-61538ab43fa7" Sep 4 00:06:24.456404 containerd[1717]: time="2025-09-04T00:06:24.456378089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:24.460167 containerd[1717]: time="2025-09-04T00:06:24.460139056Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 4 00:06:24.462988 containerd[1717]: time="2025-09-04T00:06:24.462951548Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:24.466122 containerd[1717]: time="2025-09-04T00:06:24.466086766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:24.466613 containerd[1717]: time="2025-09-04T00:06:24.466409396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.455699276s" Sep 4 00:06:24.466613 containerd[1717]: time="2025-09-04T00:06:24.466432574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 4 00:06:24.468317 containerd[1717]: time="2025-09-04T00:06:24.468292415Z" level=info msg="CreateContainer within sandbox \"e35a6348724be6c328e0af4d74e98e29172f72b298f180e4f230985bd31ed4fb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 00:06:24.488272 containerd[1717]: time="2025-09-04T00:06:24.488191452Z" level=info msg="Container e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:24.502511 containerd[1717]: time="2025-09-04T00:06:24.502490263Z" level=info msg="CreateContainer within sandbox \"e35a6348724be6c328e0af4d74e98e29172f72b298f180e4f230985bd31ed4fb\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0\"" Sep 4 00:06:24.503040 containerd[1717]: time="2025-09-04T00:06:24.502830986Z" level=info msg="StartContainer for \"e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0\"" Sep 4 00:06:24.504257 containerd[1717]: time="2025-09-04T00:06:24.504235441Z" level=info msg="connecting to shim e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0" address="unix:///run/containerd/s/6ba3bbf8f083203a168f606d56abbe724afbaced08a4c7ae547f940d2b779690" protocol=ttrpc version=3 Sep 4 00:06:24.524755 systemd[1]: Started cri-containerd-e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0.scope - libcontainer container e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0. Sep 4 00:06:24.557527 containerd[1717]: time="2025-09-04T00:06:24.557506838Z" level=info msg="StartContainer for \"e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0\" returns successfully" Sep 4 00:06:24.921629 kubelet[3131]: E0904 00:06:24.920977 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4rms" podUID="dad91c5e-f1ce-4b93-bc07-61538ab43fa7" Sep 4 00:06:25.639263 containerd[1717]: time="2025-09-04T00:06:25.639231399Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 00:06:25.641416 containerd[1717]: time="2025-09-04T00:06:25.641368771Z" level=info msg="received exit event container_id:\"e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0\" 
id:\"e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0\" pid:3852 exited_at:{seconds:1756944385 nanos:641226944}" Sep 4 00:06:25.641416 containerd[1717]: time="2025-09-04T00:06:25.641399634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0\" id:\"e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0\" pid:3852 exited_at:{seconds:1756944385 nanos:641226944}" Sep 4 00:06:25.641713 systemd[1]: cri-containerd-e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0.scope: Deactivated successfully. Sep 4 00:06:25.642476 systemd[1]: cri-containerd-e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0.scope: Consumed 348ms CPU time, 194.2M memory peak, 171.3M written to disk. Sep 4 00:06:25.660081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8459c9253ad118e73bd15a9c5056bff176a1199f15a12c0eae0fe49059571a0-rootfs.mount: Deactivated successfully. Sep 4 00:06:25.721594 kubelet[3131]: I0904 00:06:25.721577 3131 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 4 00:06:25.761084 systemd[1]: Created slice kubepods-burstable-pod6ab12f71_4124_4df3_9aa6_7b3ea03b921f.slice - libcontainer container kubepods-burstable-pod6ab12f71_4124_4df3_9aa6_7b3ea03b921f.slice. Sep 4 00:06:25.769597 systemd[1]: Created slice kubepods-besteffort-pod9d8bf113_ad2c_477d_b7a0_ea1ea0bd862c.slice - libcontainer container kubepods-besteffort-pod9d8bf113_ad2c_477d_b7a0_ea1ea0bd862c.slice. 
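The pod_startup_latency_tracker line for calico-typha earlier in this log reports both podStartE2EDuration=4.017795963s and podStartSLOduration=2.193531887s; the SLO figure is the end-to-end startup duration minus time spent pulling images (lastFinishedPulling minus firstStartedPulling, using the monotonic m=+ offsets). A sketch of that arithmetic (function name mine, values taken from the log entry):

```go
package main

import (
	"fmt"
	"time"
)

// sloDuration reproduces the arithmetic behind kubelet's
// pod_startup_latency_tracker output: SLO duration = end-to-end startup
// duration minus the image-pull window.
func sloDuration(e2e, firstPull, lastPull time.Duration) time.Duration {
	return e2e - (lastPull - firstPull)
}

func main() {
	e2e := 4017795963 * time.Nanosecond    // podStartE2EDuration=4.017795963s
	first := 25748381790 * time.Nanosecond // firstStartedPulling m=+25.748381790
	last := 27572645866 * time.Nanosecond  // lastFinishedPulling m=+27.572645866
	fmt.Println(sloDuration(e2e, first, last))
}
```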
Sep 4 00:06:25.776480 kubelet[3131]: W0904 00:06:25.776454 3131 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4372.1.0-n-f08c63113b" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-n-f08c63113b' and this object Sep 4 00:06:25.776550 kubelet[3131]: E0904 00:06:25.776488 3131 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4372.1.0-n-f08c63113b\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-n-f08c63113b' and this object" logger="UnhandledError" Sep 4 00:06:25.776550 kubelet[3131]: W0904 00:06:25.776524 3131 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4372.1.0-n-f08c63113b" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-n-f08c63113b' and this object Sep 4 00:06:25.776550 kubelet[3131]: E0904 00:06:25.776533 3131 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4372.1.0-n-f08c63113b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-n-f08c63113b' and this object" logger="UnhandledError" Sep 4 00:06:25.779040 systemd[1]: Created slice kubepods-burstable-pod7305ad0a_ab78_4e36_bd2e_b494a556ee8c.slice - libcontainer container 
kubepods-burstable-pod7305ad0a_ab78_4e36_bd2e_b494a556ee8c.slice. Sep 4 00:06:25.780630 kubelet[3131]: W0904 00:06:25.780593 3131 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4372.1.0-n-f08c63113b" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-n-f08c63113b' and this object Sep 4 00:06:25.782046 kubelet[3131]: E0904 00:06:25.781305 3131 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4372.1.0-n-f08c63113b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-n-f08c63113b' and this object" logger="UnhandledError" Sep 4 00:06:25.782046 kubelet[3131]: W0904 00:06:25.781367 3131 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4372.1.0-n-f08c63113b" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-n-f08c63113b' and this object Sep 4 00:06:25.782046 kubelet[3131]: E0904 00:06:25.781378 3131 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4372.1.0-n-f08c63113b\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-n-f08c63113b' and this object" logger="UnhandledError" Sep 4 00:06:25.782046 kubelet[3131]: W0904 00:06:25.781412 3131 reflector.go:561] object-"calico-system"/"goldmane": failed to list 
*v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4372.1.0-n-f08c63113b" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-n-f08c63113b' and this object Sep 4 00:06:25.782183 kubelet[3131]: E0904 00:06:25.781421 3131 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4372.1.0-n-f08c63113b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-n-f08c63113b' and this object" logger="UnhandledError" Sep 4 00:06:25.786713 systemd[1]: Created slice kubepods-besteffort-pod1b827302_779d_4aaf_bb2d_0b145bc49c1e.slice - libcontainer container kubepods-besteffort-pod1b827302_779d_4aaf_bb2d_0b145bc49c1e.slice. Sep 4 00:06:25.793423 systemd[1]: Created slice kubepods-besteffort-pod6da523f2_f37a_41ae_8a42_7d012bf2a528.slice - libcontainer container kubepods-besteffort-pod6da523f2_f37a_41ae_8a42_7d012bf2a528.slice. Sep 4 00:06:25.800284 systemd[1]: Created slice kubepods-besteffort-podcfcef207_3e31_425d_a942_e646548681b1.slice - libcontainer container kubepods-besteffort-podcfcef207_3e31_425d_a942_e646548681b1.slice. Sep 4 00:06:25.804395 systemd[1]: Created slice kubepods-besteffort-pod879a3434_5495_4c03_83be_8761399ce6fc.slice - libcontainer container kubepods-besteffort-pod879a3434_5495_4c03_83be_8761399ce6fc.slice. 
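The kubepods slice names systemd creates above encode each pod's UID with underscores in place of hyphens, because systemd reserves `-` as the slice hierarchy separator. A sketch (regex and helper name are mine; assumes the standard `kubepods-<qos>-pod<uid>.slice` naming, with no QoS segment for Guaranteed pods) recovering the UID from a slice name:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// sliceRe matches kubepods cgroup slice names; the optional group covers the
// burstable/besteffort QoS tiers (Guaranteed pods omit the QoS segment).
var sliceRe = regexp.MustCompile(`^kubepods-(?:burstable-|besteffort-)?pod(.+)\.slice$`)

// podUIDFromSlice recovers a pod UID from a kubepods slice name by undoing
// kubelet's '-' -> '_' escaping of the UID.
func podUIDFromSlice(slice string) string {
	m := sliceRe.FindStringSubmatch(slice)
	if m == nil {
		return ""
	}
	return strings.ReplaceAll(m[1], "_", "-")
}

func main() {
	// The coredns pod slice created above maps back to its pod UID.
	fmt.Println(podUIDFromSlice("kubepods-burstable-pod6ab12f71_4124_4df3_9aa6_7b3ea03b921f.slice"))
}
```

Decoding the besteffort slices the same way yields the UIDs that reappear in the volume reconciler entries that follow (e.g. 9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c for calico-kube-controllers).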
Sep 4 00:06:25.848113 kubelet[3131]: I0904 00:06:25.848080 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnc28\" (UniqueName: \"kubernetes.io/projected/879a3434-5495-4c03-83be-8761399ce6fc-kube-api-access-fnc28\") pod \"calico-apiserver-745d5857c4-7v6zj\" (UID: \"879a3434-5495-4c03-83be-8761399ce6fc\") " pod="calico-apiserver/calico-apiserver-745d5857c4-7v6zj" Sep 4 00:06:25.848182 kubelet[3131]: I0904 00:06:25.848123 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ch8c\" (UniqueName: \"kubernetes.io/projected/6ab12f71-4124-4df3-9aa6-7b3ea03b921f-kube-api-access-6ch8c\") pod \"coredns-7c65d6cfc9-5t4zk\" (UID: \"6ab12f71-4124-4df3-9aa6-7b3ea03b921f\") " pod="kube-system/coredns-7c65d6cfc9-5t4zk" Sep 4 00:06:25.848182 kubelet[3131]: I0904 00:06:25.848139 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2h8d\" (UniqueName: \"kubernetes.io/projected/1b827302-779d-4aaf-bb2d-0b145bc49c1e-kube-api-access-g2h8d\") pod \"calico-apiserver-745d5857c4-4llsq\" (UID: \"1b827302-779d-4aaf-bb2d-0b145bc49c1e\") " pod="calico-apiserver/calico-apiserver-745d5857c4-4llsq" Sep 4 00:06:25.848182 kubelet[3131]: I0904 00:06:25.848154 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9l6p\" (UniqueName: \"kubernetes.io/projected/6da523f2-f37a-41ae-8a42-7d012bf2a528-kube-api-access-l9l6p\") pod \"goldmane-7988f88666-t4mp9\" (UID: \"6da523f2-f37a-41ae-8a42-7d012bf2a528\") " pod="calico-system/goldmane-7988f88666-t4mp9" Sep 4 00:06:25.848182 kubelet[3131]: I0904 00:06:25.848169 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfcef207-3e31-425d-a942-e646548681b1-whisker-ca-bundle\") pod 
\"whisker-576fcff7d4-rstgr\" (UID: \"cfcef207-3e31-425d-a942-e646548681b1\") " pod="calico-system/whisker-576fcff7d4-rstgr" Sep 4 00:06:25.848273 kubelet[3131]: I0904 00:06:25.848187 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txc74\" (UniqueName: \"kubernetes.io/projected/7305ad0a-ab78-4e36-bd2e-b494a556ee8c-kube-api-access-txc74\") pod \"coredns-7c65d6cfc9-fwqf2\" (UID: \"7305ad0a-ab78-4e36-bd2e-b494a556ee8c\") " pod="kube-system/coredns-7c65d6cfc9-fwqf2" Sep 4 00:06:25.848273 kubelet[3131]: I0904 00:06:25.848203 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6da523f2-f37a-41ae-8a42-7d012bf2a528-goldmane-ca-bundle\") pod \"goldmane-7988f88666-t4mp9\" (UID: \"6da523f2-f37a-41ae-8a42-7d012bf2a528\") " pod="calico-system/goldmane-7988f88666-t4mp9" Sep 4 00:06:25.848273 kubelet[3131]: I0904 00:06:25.848219 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6da523f2-f37a-41ae-8a42-7d012bf2a528-config\") pod \"goldmane-7988f88666-t4mp9\" (UID: \"6da523f2-f37a-41ae-8a42-7d012bf2a528\") " pod="calico-system/goldmane-7988f88666-t4mp9" Sep 4 00:06:25.848273 kubelet[3131]: I0904 00:06:25.848236 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cfcef207-3e31-425d-a942-e646548681b1-whisker-backend-key-pair\") pod \"whisker-576fcff7d4-rstgr\" (UID: \"cfcef207-3e31-425d-a942-e646548681b1\") " pod="calico-system/whisker-576fcff7d4-rstgr" Sep 4 00:06:25.848273 kubelet[3131]: I0904 00:06:25.848252 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c-tigera-ca-bundle\") pod \"calico-kube-controllers-778fcdd7dc-bg992\" (UID: \"9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c\") " pod="calico-system/calico-kube-controllers-778fcdd7dc-bg992" Sep 4 00:06:25.848375 kubelet[3131]: I0904 00:06:25.848272 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6da523f2-f37a-41ae-8a42-7d012bf2a528-goldmane-key-pair\") pod \"goldmane-7988f88666-t4mp9\" (UID: \"6da523f2-f37a-41ae-8a42-7d012bf2a528\") " pod="calico-system/goldmane-7988f88666-t4mp9" Sep 4 00:06:25.848375 kubelet[3131]: I0904 00:06:25.848288 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b827302-779d-4aaf-bb2d-0b145bc49c1e-calico-apiserver-certs\") pod \"calico-apiserver-745d5857c4-4llsq\" (UID: \"1b827302-779d-4aaf-bb2d-0b145bc49c1e\") " pod="calico-apiserver/calico-apiserver-745d5857c4-4llsq" Sep 4 00:06:25.848375 kubelet[3131]: I0904 00:06:25.848305 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/879a3434-5495-4c03-83be-8761399ce6fc-calico-apiserver-certs\") pod \"calico-apiserver-745d5857c4-7v6zj\" (UID: \"879a3434-5495-4c03-83be-8761399ce6fc\") " pod="calico-apiserver/calico-apiserver-745d5857c4-7v6zj" Sep 4 00:06:25.848375 kubelet[3131]: I0904 00:06:25.848320 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7305ad0a-ab78-4e36-bd2e-b494a556ee8c-config-volume\") pod \"coredns-7c65d6cfc9-fwqf2\" (UID: \"7305ad0a-ab78-4e36-bd2e-b494a556ee8c\") " pod="kube-system/coredns-7c65d6cfc9-fwqf2" Sep 4 00:06:25.848375 kubelet[3131]: I0904 00:06:25.848336 3131 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ab12f71-4124-4df3-9aa6-7b3ea03b921f-config-volume\") pod \"coredns-7c65d6cfc9-5t4zk\" (UID: \"6ab12f71-4124-4df3-9aa6-7b3ea03b921f\") " pod="kube-system/coredns-7c65d6cfc9-5t4zk" Sep 4 00:06:25.848472 kubelet[3131]: I0904 00:06:25.848351 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrvsn\" (UniqueName: \"kubernetes.io/projected/9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c-kube-api-access-wrvsn\") pod \"calico-kube-controllers-778fcdd7dc-bg992\" (UID: \"9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c\") " pod="calico-system/calico-kube-controllers-778fcdd7dc-bg992" Sep 4 00:06:25.848472 kubelet[3131]: I0904 00:06:25.848366 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vv29\" (UniqueName: \"kubernetes.io/projected/cfcef207-3e31-425d-a942-e646548681b1-kube-api-access-4vv29\") pod \"whisker-576fcff7d4-rstgr\" (UID: \"cfcef207-3e31-425d-a942-e646548681b1\") " pod="calico-system/whisker-576fcff7d4-rstgr" Sep 4 00:06:26.065230 containerd[1717]: time="2025-09-04T00:06:26.065102624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5t4zk,Uid:6ab12f71-4124-4df3-9aa6-7b3ea03b921f,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:26.076948 containerd[1717]: time="2025-09-04T00:06:26.076923945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-778fcdd7dc-bg992,Uid:9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:26.083863 containerd[1717]: time="2025-09-04T00:06:26.083842609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fwqf2,Uid:7305ad0a-ab78-4e36-bd2e-b494a556ee8c,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:26.089344 containerd[1717]: 
time="2025-09-04T00:06:26.089324898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745d5857c4-4llsq,Uid:1b827302-779d-4aaf-bb2d-0b145bc49c1e,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:06:26.106579 containerd[1717]: time="2025-09-04T00:06:26.106551515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745d5857c4-7v6zj,Uid:879a3434-5495-4c03-83be-8761399ce6fc,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:06:26.701854 containerd[1717]: time="2025-09-04T00:06:26.701814109Z" level=error msg="Failed to destroy network for sandbox \"f77e515315b82a71607c4c33835ad44e737b1604c08b154ed82dba2a835b7b1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.704534 systemd[1]: run-netns-cni\x2da24817ac\x2d112b\x2d3444\x2d40c5\x2dc1dcad654e61.mount: Deactivated successfully. Sep 4 00:06:26.707165 containerd[1717]: time="2025-09-04T00:06:26.706972304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5t4zk,Uid:6ab12f71-4124-4df3-9aa6-7b3ea03b921f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77e515315b82a71607c4c33835ad44e737b1604c08b154ed82dba2a835b7b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.707480 kubelet[3131]: E0904 00:06:26.707439 3131 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77e515315b82a71607c4c33835ad44e737b1604c08b154ed82dba2a835b7b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 4 00:06:26.707715 kubelet[3131]: E0904 00:06:26.707502 3131 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77e515315b82a71607c4c33835ad44e737b1604c08b154ed82dba2a835b7b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5t4zk" Sep 4 00:06:26.707715 kubelet[3131]: E0904 00:06:26.707521 3131 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77e515315b82a71607c4c33835ad44e737b1604c08b154ed82dba2a835b7b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5t4zk" Sep 4 00:06:26.707715 kubelet[3131]: E0904 00:06:26.707634 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-5t4zk_kube-system(6ab12f71-4124-4df3-9aa6-7b3ea03b921f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-5t4zk_kube-system(6ab12f71-4124-4df3-9aa6-7b3ea03b921f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f77e515315b82a71607c4c33835ad44e737b1604c08b154ed82dba2a835b7b1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5t4zk" podUID="6ab12f71-4124-4df3-9aa6-7b3ea03b921f" Sep 4 00:06:26.715494 containerd[1717]: time="2025-09-04T00:06:26.715466324Z" level=error msg="Failed to destroy network for sandbox \"bb14655fa4ca5e27aa9db90d005c76951490986ac5aa452a270078b150a8578d\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.718251 systemd[1]: run-netns-cni\x2da1326eed\x2d834a\x2d1533\x2de6ef\x2d2c0b4e4a2b9d.mount: Deactivated successfully. Sep 4 00:06:26.720228 containerd[1717]: time="2025-09-04T00:06:26.719518783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-778fcdd7dc-bg992,Uid:9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb14655fa4ca5e27aa9db90d005c76951490986ac5aa452a270078b150a8578d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.720326 kubelet[3131]: E0904 00:06:26.719672 3131 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb14655fa4ca5e27aa9db90d005c76951490986ac5aa452a270078b150a8578d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.720326 kubelet[3131]: E0904 00:06:26.719715 3131 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb14655fa4ca5e27aa9db90d005c76951490986ac5aa452a270078b150a8578d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-778fcdd7dc-bg992" Sep 4 00:06:26.720326 kubelet[3131]: E0904 00:06:26.719732 3131 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"bb14655fa4ca5e27aa9db90d005c76951490986ac5aa452a270078b150a8578d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-778fcdd7dc-bg992" Sep 4 00:06:26.720412 kubelet[3131]: E0904 00:06:26.719771 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-778fcdd7dc-bg992_calico-system(9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-778fcdd7dc-bg992_calico-system(9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb14655fa4ca5e27aa9db90d005c76951490986ac5aa452a270078b150a8578d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-778fcdd7dc-bg992" podUID="9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c" Sep 4 00:06:26.722417 containerd[1717]: time="2025-09-04T00:06:26.722391931Z" level=error msg="Failed to destroy network for sandbox \"94eafcafbbc0291324d7d7dcefe65948f6db41a7d426bf4b5ad1b33d3d61a3c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.724309 systemd[1]: run-netns-cni\x2d82bc09ad\x2d4873\x2ded43\x2d1d77\x2d64654a0602a2.mount: Deactivated successfully. 
Sep 4 00:06:26.726203 containerd[1717]: time="2025-09-04T00:06:26.726175027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745d5857c4-4llsq,Uid:1b827302-779d-4aaf-bb2d-0b145bc49c1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94eafcafbbc0291324d7d7dcefe65948f6db41a7d426bf4b5ad1b33d3d61a3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.726506 kubelet[3131]: E0904 00:06:26.726416 3131 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94eafcafbbc0291324d7d7dcefe65948f6db41a7d426bf4b5ad1b33d3d61a3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.726506 kubelet[3131]: E0904 00:06:26.726467 3131 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94eafcafbbc0291324d7d7dcefe65948f6db41a7d426bf4b5ad1b33d3d61a3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-745d5857c4-4llsq" Sep 4 00:06:26.726506 kubelet[3131]: E0904 00:06:26.726484 3131 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94eafcafbbc0291324d7d7dcefe65948f6db41a7d426bf4b5ad1b33d3d61a3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-745d5857c4-4llsq" Sep 4 00:06:26.726792 containerd[1717]: time="2025-09-04T00:06:26.726300724Z" level=error msg="Failed to destroy network for sandbox \"a105fc1e5f97b68aea84a0eca45f9abe58cecd5b931a87cde2305277e535ca13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.727167 kubelet[3131]: E0904 00:06:26.726666 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-745d5857c4-4llsq_calico-apiserver(1b827302-779d-4aaf-bb2d-0b145bc49c1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-745d5857c4-4llsq_calico-apiserver(1b827302-779d-4aaf-bb2d-0b145bc49c1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94eafcafbbc0291324d7d7dcefe65948f6db41a7d426bf4b5ad1b33d3d61a3c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-745d5857c4-4llsq" podUID="1b827302-779d-4aaf-bb2d-0b145bc49c1e" Sep 4 00:06:26.730784 containerd[1717]: time="2025-09-04T00:06:26.730695275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fwqf2,Uid:7305ad0a-ab78-4e36-bd2e-b494a556ee8c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a105fc1e5f97b68aea84a0eca45f9abe58cecd5b931a87cde2305277e535ca13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.731021 kubelet[3131]: E0904 00:06:26.730946 3131 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"a105fc1e5f97b68aea84a0eca45f9abe58cecd5b931a87cde2305277e535ca13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.731021 kubelet[3131]: E0904 00:06:26.730987 3131 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a105fc1e5f97b68aea84a0eca45f9abe58cecd5b931a87cde2305277e535ca13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fwqf2" Sep 4 00:06:26.731021 kubelet[3131]: E0904 00:06:26.731004 3131 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a105fc1e5f97b68aea84a0eca45f9abe58cecd5b931a87cde2305277e535ca13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fwqf2" Sep 4 00:06:26.731346 kubelet[3131]: E0904 00:06:26.731124 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-fwqf2_kube-system(7305ad0a-ab78-4e36-bd2e-b494a556ee8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-fwqf2_kube-system(7305ad0a-ab78-4e36-bd2e-b494a556ee8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a105fc1e5f97b68aea84a0eca45f9abe58cecd5b931a87cde2305277e535ca13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7c65d6cfc9-fwqf2" podUID="7305ad0a-ab78-4e36-bd2e-b494a556ee8c" Sep 4 00:06:26.736897 containerd[1717]: time="2025-09-04T00:06:26.736865432Z" level=error msg="Failed to destroy network for sandbox \"16a0c3f0591b4960a675fdfd458a6063311552dc558ced72ae00f8d348fc0e1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.739876 containerd[1717]: time="2025-09-04T00:06:26.739848842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745d5857c4-7v6zj,Uid:879a3434-5495-4c03-83be-8761399ce6fc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16a0c3f0591b4960a675fdfd458a6063311552dc558ced72ae00f8d348fc0e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.739999 kubelet[3131]: E0904 00:06:26.739978 3131 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16a0c3f0591b4960a675fdfd458a6063311552dc558ced72ae00f8d348fc0e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.740036 kubelet[3131]: E0904 00:06:26.740012 3131 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16a0c3f0591b4960a675fdfd458a6063311552dc558ced72ae00f8d348fc0e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-745d5857c4-7v6zj" Sep 4 00:06:26.740059 kubelet[3131]: E0904 00:06:26.740029 3131 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16a0c3f0591b4960a675fdfd458a6063311552dc558ced72ae00f8d348fc0e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-745d5857c4-7v6zj" Sep 4 00:06:26.740086 kubelet[3131]: E0904 00:06:26.740058 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-745d5857c4-7v6zj_calico-apiserver(879a3434-5495-4c03-83be-8761399ce6fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-745d5857c4-7v6zj_calico-apiserver(879a3434-5495-4c03-83be-8761399ce6fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16a0c3f0591b4960a675fdfd458a6063311552dc558ced72ae00f8d348fc0e1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-745d5857c4-7v6zj" podUID="879a3434-5495-4c03-83be-8761399ce6fc" Sep 4 00:06:26.926816 systemd[1]: Created slice kubepods-besteffort-poddad91c5e_f1ce_4b93_bc07_61538ab43fa7.slice - libcontainer container kubepods-besteffort-poddad91c5e_f1ce_4b93_bc07_61538ab43fa7.slice. 
Sep 4 00:06:26.928489 containerd[1717]: time="2025-09-04T00:06:26.928469376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4rms,Uid:dad91c5e-f1ce-4b93-bc07-61538ab43fa7,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:26.950250 kubelet[3131]: E0904 00:06:26.950058 3131 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Sep 4 00:06:26.950250 kubelet[3131]: E0904 00:06:26.950115 3131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6da523f2-f37a-41ae-8a42-7d012bf2a528-goldmane-key-pair podName:6da523f2-f37a-41ae-8a42-7d012bf2a528 nodeName:}" failed. No retries permitted until 2025-09-04 00:06:27.450098278 +0000 UTC m=+36.603945673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/6da523f2-f37a-41ae-8a42-7d012bf2a528-goldmane-key-pair") pod "goldmane-7988f88666-t4mp9" (UID: "6da523f2-f37a-41ae-8a42-7d012bf2a528") : failed to sync secret cache: timed out waiting for the condition Sep 4 00:06:26.950829 kubelet[3131]: E0904 00:06:26.950805 3131 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 4 00:06:26.950896 kubelet[3131]: E0904 00:06:26.950855 3131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6da523f2-f37a-41ae-8a42-7d012bf2a528-goldmane-ca-bundle podName:6da523f2-f37a-41ae-8a42-7d012bf2a528 nodeName:}" failed. No retries permitted until 2025-09-04 00:06:27.450842026 +0000 UTC m=+36.604689417 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/6da523f2-f37a-41ae-8a42-7d012bf2a528-goldmane-ca-bundle") pod "goldmane-7988f88666-t4mp9" (UID: "6da523f2-f37a-41ae-8a42-7d012bf2a528") : failed to sync configmap cache: timed out waiting for the condition Sep 4 00:06:26.953131 kubelet[3131]: E0904 00:06:26.952996 3131 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 4 00:06:26.953131 kubelet[3131]: E0904 00:06:26.953053 3131 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfcef207-3e31-425d-a942-e646548681b1-whisker-ca-bundle podName:cfcef207-3e31-425d-a942-e646548681b1 nodeName:}" failed. No retries permitted until 2025-09-04 00:06:27.453030499 +0000 UTC m=+36.606877902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/cfcef207-3e31-425d-a942-e646548681b1-whisker-ca-bundle") pod "whisker-576fcff7d4-rstgr" (UID: "cfcef207-3e31-425d-a942-e646548681b1") : failed to sync configmap cache: timed out waiting for the condition Sep 4 00:06:26.969245 containerd[1717]: time="2025-09-04T00:06:26.969215701Z" level=error msg="Failed to destroy network for sandbox \"798c1062f7fd90131b72e63a9ab3dd3fc90ac7513494b597aae21dd4538592fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.972021 containerd[1717]: time="2025-09-04T00:06:26.971994280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4rms,Uid:dad91c5e-f1ce-4b93-bc07-61538ab43fa7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"798c1062f7fd90131b72e63a9ab3dd3fc90ac7513494b597aae21dd4538592fc\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.972163 kubelet[3131]: E0904 00:06:26.972131 3131 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"798c1062f7fd90131b72e63a9ab3dd3fc90ac7513494b597aae21dd4538592fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:26.972205 kubelet[3131]: E0904 00:06:26.972190 3131 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"798c1062f7fd90131b72e63a9ab3dd3fc90ac7513494b597aae21dd4538592fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t4rms" Sep 4 00:06:26.972230 kubelet[3131]: E0904 00:06:26.972211 3131 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"798c1062f7fd90131b72e63a9ab3dd3fc90ac7513494b597aae21dd4538592fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t4rms" Sep 4 00:06:26.972630 kubelet[3131]: E0904 00:06:26.972366 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t4rms_calico-system(dad91c5e-f1ce-4b93-bc07-61538ab43fa7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t4rms_calico-system(dad91c5e-f1ce-4b93-bc07-61538ab43fa7)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"798c1062f7fd90131b72e63a9ab3dd3fc90ac7513494b597aae21dd4538592fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t4rms" podUID="dad91c5e-f1ce-4b93-bc07-61538ab43fa7" Sep 4 00:06:27.022742 containerd[1717]: time="2025-09-04T00:06:27.022707386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 4 00:06:27.597038 containerd[1717]: time="2025-09-04T00:06:27.597017735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-t4mp9,Uid:6da523f2-f37a-41ae-8a42-7d012bf2a528,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:27.602443 containerd[1717]: time="2025-09-04T00:06:27.602421153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-576fcff7d4-rstgr,Uid:cfcef207-3e31-425d-a942-e646548681b1,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:27.644534 containerd[1717]: time="2025-09-04T00:06:27.644501933Z" level=error msg="Failed to destroy network for sandbox \"e5f0d8fa958f1300f8288cbeb654805fdef1ebf2cb04b25ddf56710a0eff2630\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:27.648143 containerd[1717]: time="2025-09-04T00:06:27.648080590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-t4mp9,Uid:6da523f2-f37a-41ae-8a42-7d012bf2a528,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f0d8fa958f1300f8288cbeb654805fdef1ebf2cb04b25ddf56710a0eff2630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:27.648509 
kubelet[3131]: E0904 00:06:27.648479 3131 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f0d8fa958f1300f8288cbeb654805fdef1ebf2cb04b25ddf56710a0eff2630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:27.648562 kubelet[3131]: E0904 00:06:27.648519 3131 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f0d8fa958f1300f8288cbeb654805fdef1ebf2cb04b25ddf56710a0eff2630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-t4mp9" Sep 4 00:06:27.648562 kubelet[3131]: E0904 00:06:27.648535 3131 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f0d8fa958f1300f8288cbeb654805fdef1ebf2cb04b25ddf56710a0eff2630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-t4mp9" Sep 4 00:06:27.648682 kubelet[3131]: E0904 00:06:27.648562 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-t4mp9_calico-system(6da523f2-f37a-41ae-8a42-7d012bf2a528)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-t4mp9_calico-system(6da523f2-f37a-41ae-8a42-7d012bf2a528)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5f0d8fa958f1300f8288cbeb654805fdef1ebf2cb04b25ddf56710a0eff2630\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-t4mp9" podUID="6da523f2-f37a-41ae-8a42-7d012bf2a528" Sep 4 00:06:27.651140 containerd[1717]: time="2025-09-04T00:06:27.651115538Z" level=error msg="Failed to destroy network for sandbox \"b38042fd2f9c5a499ee6b34240f511b18ae3d97d6b363d63da94336bce3faddb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:27.653909 containerd[1717]: time="2025-09-04T00:06:27.653881745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-576fcff7d4-rstgr,Uid:cfcef207-3e31-425d-a942-e646548681b1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38042fd2f9c5a499ee6b34240f511b18ae3d97d6b363d63da94336bce3faddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:27.654035 kubelet[3131]: E0904 00:06:27.654004 3131 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38042fd2f9c5a499ee6b34240f511b18ae3d97d6b363d63da94336bce3faddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 00:06:27.654090 kubelet[3131]: E0904 00:06:27.654032 3131 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38042fd2f9c5a499ee6b34240f511b18ae3d97d6b363d63da94336bce3faddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-576fcff7d4-rstgr" Sep 4 00:06:27.654090 kubelet[3131]: E0904 00:06:27.654047 3131 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38042fd2f9c5a499ee6b34240f511b18ae3d97d6b363d63da94336bce3faddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-576fcff7d4-rstgr" Sep 4 00:06:27.654090 kubelet[3131]: E0904 00:06:27.654075 3131 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-576fcff7d4-rstgr_calico-system(cfcef207-3e31-425d-a942-e646548681b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-576fcff7d4-rstgr_calico-system(cfcef207-3e31-425d-a942-e646548681b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b38042fd2f9c5a499ee6b34240f511b18ae3d97d6b363d63da94336bce3faddb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-576fcff7d4-rstgr" podUID="cfcef207-3e31-425d-a942-e646548681b1" Sep 4 00:06:27.660193 systemd[1]: run-netns-cni\x2d75566628\x2d5c42\x2d34ae\x2d1d0c\x2d358f245efd0a.mount: Deactivated successfully. Sep 4 00:06:27.660258 systemd[1]: run-netns-cni\x2de9a2f880\x2dbe65\x2d609f\x2d52f0\x2d6dc8a7bfa526.mount: Deactivated successfully. Sep 4 00:06:31.265431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3068280420.mount: Deactivated successfully. 
Sep 4 00:06:31.294394 containerd[1717]: time="2025-09-04T00:06:31.294358572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:31.297304 containerd[1717]: time="2025-09-04T00:06:31.297274662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 4 00:06:31.300152 containerd[1717]: time="2025-09-04T00:06:31.300111693Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:31.303597 containerd[1717]: time="2025-09-04T00:06:31.303561762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:31.304042 containerd[1717]: time="2025-09-04T00:06:31.303799076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 4.28105113s" Sep 4 00:06:31.304042 containerd[1717]: time="2025-09-04T00:06:31.303824193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 4 00:06:31.309640 containerd[1717]: time="2025-09-04T00:06:31.308985152Z" level=info msg="CreateContainer within sandbox \"e35a6348724be6c328e0af4d74e98e29172f72b298f180e4f230985bd31ed4fb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 00:06:31.328150 containerd[1717]: time="2025-09-04T00:06:31.328127263Z" level=info msg="Container 
c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:31.343644 containerd[1717]: time="2025-09-04T00:06:31.343598558Z" level=info msg="CreateContainer within sandbox \"e35a6348724be6c328e0af4d74e98e29172f72b298f180e4f230985bd31ed4fb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb\"" Sep 4 00:06:31.344050 containerd[1717]: time="2025-09-04T00:06:31.344030222Z" level=info msg="StartContainer for \"c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb\"" Sep 4 00:06:31.345229 containerd[1717]: time="2025-09-04T00:06:31.345205264Z" level=info msg="connecting to shim c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb" address="unix:///run/containerd/s/6ba3bbf8f083203a168f606d56abbe724afbaced08a4c7ae547f940d2b779690" protocol=ttrpc version=3 Sep 4 00:06:31.364739 systemd[1]: Started cri-containerd-c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb.scope - libcontainer container c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb. Sep 4 00:06:31.397634 containerd[1717]: time="2025-09-04T00:06:31.397558714Z" level=info msg="StartContainer for \"c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb\" returns successfully" Sep 4 00:06:31.596293 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 00:06:31.596358 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 00:06:31.777461 kubelet[3131]: I0904 00:06:31.777198 3131 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfcef207-3e31-425d-a942-e646548681b1-whisker-ca-bundle\") pod \"cfcef207-3e31-425d-a942-e646548681b1\" (UID: \"cfcef207-3e31-425d-a942-e646548681b1\") " Sep 4 00:06:31.777461 kubelet[3131]: I0904 00:06:31.777235 3131 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vv29\" (UniqueName: \"kubernetes.io/projected/cfcef207-3e31-425d-a942-e646548681b1-kube-api-access-4vv29\") pod \"cfcef207-3e31-425d-a942-e646548681b1\" (UID: \"cfcef207-3e31-425d-a942-e646548681b1\") " Sep 4 00:06:31.777461 kubelet[3131]: I0904 00:06:31.777253 3131 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cfcef207-3e31-425d-a942-e646548681b1-whisker-backend-key-pair\") pod \"cfcef207-3e31-425d-a942-e646548681b1\" (UID: \"cfcef207-3e31-425d-a942-e646548681b1\") " Sep 4 00:06:31.777988 kubelet[3131]: I0904 00:06:31.777969 3131 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfcef207-3e31-425d-a942-e646548681b1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cfcef207-3e31-425d-a942-e646548681b1" (UID: "cfcef207-3e31-425d-a942-e646548681b1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 00:06:31.781389 kubelet[3131]: I0904 00:06:31.781363 3131 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfcef207-3e31-425d-a942-e646548681b1-kube-api-access-4vv29" (OuterVolumeSpecName: "kube-api-access-4vv29") pod "cfcef207-3e31-425d-a942-e646548681b1" (UID: "cfcef207-3e31-425d-a942-e646548681b1"). InnerVolumeSpecName "kube-api-access-4vv29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 00:06:31.781760 kubelet[3131]: I0904 00:06:31.781713 3131 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfcef207-3e31-425d-a942-e646548681b1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cfcef207-3e31-425d-a942-e646548681b1" (UID: "cfcef207-3e31-425d-a942-e646548681b1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 00:06:31.878113 kubelet[3131]: I0904 00:06:31.878093 3131 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cfcef207-3e31-425d-a942-e646548681b1-whisker-backend-key-pair\") on node \"ci-4372.1.0-n-f08c63113b\" DevicePath \"\"" Sep 4 00:06:31.878113 kubelet[3131]: I0904 00:06:31.878112 3131 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vv29\" (UniqueName: \"kubernetes.io/projected/cfcef207-3e31-425d-a942-e646548681b1-kube-api-access-4vv29\") on node \"ci-4372.1.0-n-f08c63113b\" DevicePath \"\"" Sep 4 00:06:31.878197 kubelet[3131]: I0904 00:06:31.878121 3131 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfcef207-3e31-425d-a942-e646548681b1-whisker-ca-bundle\") on node \"ci-4372.1.0-n-f08c63113b\" DevicePath \"\"" Sep 4 00:06:32.043931 systemd[1]: Removed slice kubepods-besteffort-podcfcef207_3e31_425d_a942_e646548681b1.slice - libcontainer container kubepods-besteffort-podcfcef207_3e31_425d_a942_e646548681b1.slice. 
Sep 4 00:06:32.076313 kubelet[3131]: I0904 00:06:32.075946 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vtnsf" podStartSLOduration=1.607289879 podStartE2EDuration="16.075931943s" podCreationTimestamp="2025-09-04 00:06:16 +0000 UTC" firstStartedPulling="2025-09-04 00:06:16.83557172 +0000 UTC m=+25.989419117" lastFinishedPulling="2025-09-04 00:06:31.304213774 +0000 UTC m=+40.458061181" observedRunningTime="2025-09-04 00:06:32.057990396 +0000 UTC m=+41.211837802" watchObservedRunningTime="2025-09-04 00:06:32.075931943 +0000 UTC m=+41.229779391" Sep 4 00:06:32.120172 containerd[1717]: time="2025-09-04T00:06:32.120143461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb\" id:\"0e60a437b4ee02e5d5b4d9338eeb35a4c0b7e2e094b94439f3ce89a026cdeed8\" pid:4184 exit_status:1 exited_at:{seconds:1756944392 nanos:118183751}" Sep 4 00:06:32.125815 systemd[1]: Created slice kubepods-besteffort-pod6a86091a_5fbe_4cf5_b180_8ddfe445f64c.slice - libcontainer container kubepods-besteffort-pod6a86091a_5fbe_4cf5_b180_8ddfe445f64c.slice. 
Sep 4 00:06:32.184923 kubelet[3131]: I0904 00:06:32.184887 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a86091a-5fbe-4cf5-b180-8ddfe445f64c-whisker-ca-bundle\") pod \"whisker-78cd5b9d79-4tbhd\" (UID: \"6a86091a-5fbe-4cf5-b180-8ddfe445f64c\") " pod="calico-system/whisker-78cd5b9d79-4tbhd" Sep 4 00:06:32.184995 kubelet[3131]: I0904 00:06:32.184937 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4kd\" (UniqueName: \"kubernetes.io/projected/6a86091a-5fbe-4cf5-b180-8ddfe445f64c-kube-api-access-jt4kd\") pod \"whisker-78cd5b9d79-4tbhd\" (UID: \"6a86091a-5fbe-4cf5-b180-8ddfe445f64c\") " pod="calico-system/whisker-78cd5b9d79-4tbhd" Sep 4 00:06:32.184995 kubelet[3131]: I0904 00:06:32.184959 3131 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a86091a-5fbe-4cf5-b180-8ddfe445f64c-whisker-backend-key-pair\") pod \"whisker-78cd5b9d79-4tbhd\" (UID: \"6a86091a-5fbe-4cf5-b180-8ddfe445f64c\") " pod="calico-system/whisker-78cd5b9d79-4tbhd" Sep 4 00:06:32.264115 systemd[1]: var-lib-kubelet-pods-cfcef207\x2d3e31\x2d425d\x2da942\x2de646548681b1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 4 00:06:32.264192 systemd[1]: var-lib-kubelet-pods-cfcef207\x2d3e31\x2d425d\x2da942\x2de646548681b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4vv29.mount: Deactivated successfully. 
Sep 4 00:06:32.430183 containerd[1717]: time="2025-09-04T00:06:32.430135317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cd5b9d79-4tbhd,Uid:6a86091a-5fbe-4cf5-b180-8ddfe445f64c,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:32.518146 systemd-networkd[1514]: calia7ad143706b: Link UP Sep 4 00:06:32.518270 systemd-networkd[1514]: calia7ad143706b: Gained carrier Sep 4 00:06:32.531495 containerd[1717]: 2025-09-04 00:06:32.453 [INFO][4198] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:06:32.531495 containerd[1717]: 2025-09-04 00:06:32.460 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0 whisker-78cd5b9d79- calico-system 6a86091a-5fbe-4cf5-b180-8ddfe445f64c 869 0 2025-09-04 00:06:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78cd5b9d79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.1.0-n-f08c63113b whisker-78cd5b9d79-4tbhd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia7ad143706b [] [] }} ContainerID="feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" Namespace="calico-system" Pod="whisker-78cd5b9d79-4tbhd" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-" Sep 4 00:06:32.531495 containerd[1717]: 2025-09-04 00:06:32.460 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" Namespace="calico-system" Pod="whisker-78cd5b9d79-4tbhd" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0" Sep 4 00:06:32.531495 containerd[1717]: 2025-09-04 00:06:32.479 [INFO][4211] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" HandleID="k8s-pod-network.feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" Workload="ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0" Sep 4 00:06:32.531711 containerd[1717]: 2025-09-04 00:06:32.479 [INFO][4211] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" HandleID="k8s-pod-network.feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" Workload="ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd260), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-f08c63113b", "pod":"whisker-78cd5b9d79-4tbhd", "timestamp":"2025-09-04 00:06:32.479475998 +0000 UTC"}, Hostname:"ci-4372.1.0-n-f08c63113b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:32.531711 containerd[1717]: 2025-09-04 00:06:32.479 [INFO][4211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:32.531711 containerd[1717]: 2025-09-04 00:06:32.479 [INFO][4211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:32.531711 containerd[1717]: 2025-09-04 00:06:32.479 [INFO][4211] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-f08c63113b' Sep 4 00:06:32.531711 containerd[1717]: 2025-09-04 00:06:32.484 [INFO][4211] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:32.531711 containerd[1717]: 2025-09-04 00:06:32.487 [INFO][4211] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:32.531711 containerd[1717]: 2025-09-04 00:06:32.490 [INFO][4211] ipam/ipam.go 511: Trying affinity for 192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:32.531711 containerd[1717]: 2025-09-04 00:06:32.491 [INFO][4211] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:32.531711 containerd[1717]: 2025-09-04 00:06:32.493 [INFO][4211] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:32.531925 containerd[1717]: 2025-09-04 00:06:32.493 [INFO][4211] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:32.531925 containerd[1717]: 2025-09-04 00:06:32.494 [INFO][4211] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d Sep 4 00:06:32.531925 containerd[1717]: 2025-09-04 00:06:32.502 [INFO][4211] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:32.531925 containerd[1717]: 2025-09-04 00:06:32.506 [INFO][4211] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.26.1/26] block=192.168.26.0/26 handle="k8s-pod-network.feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:32.531925 containerd[1717]: 2025-09-04 00:06:32.506 [INFO][4211] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.1/26] handle="k8s-pod-network.feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:32.531925 containerd[1717]: 2025-09-04 00:06:32.506 [INFO][4211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:32.531925 containerd[1717]: 2025-09-04 00:06:32.506 [INFO][4211] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.1/26] IPv6=[] ContainerID="feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" HandleID="k8s-pod-network.feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" Workload="ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0" Sep 4 00:06:32.532028 containerd[1717]: 2025-09-04 00:06:32.509 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" Namespace="calico-system" Pod="whisker-78cd5b9d79-4tbhd" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0", GenerateName:"whisker-78cd5b9d79-", Namespace:"calico-system", SelfLink:"", UID:"6a86091a-5fbe-4cf5-b180-8ddfe445f64c", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78cd5b9d79", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"", Pod:"whisker-78cd5b9d79-4tbhd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia7ad143706b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:32.532028 containerd[1717]: 2025-09-04 00:06:32.509 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.1/32] ContainerID="feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" Namespace="calico-system" Pod="whisker-78cd5b9d79-4tbhd" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0" Sep 4 00:06:32.532102 containerd[1717]: 2025-09-04 00:06:32.509 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7ad143706b ContainerID="feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" Namespace="calico-system" Pod="whisker-78cd5b9d79-4tbhd" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0" Sep 4 00:06:32.532102 containerd[1717]: 2025-09-04 00:06:32.517 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" Namespace="calico-system" Pod="whisker-78cd5b9d79-4tbhd" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0" Sep 4 00:06:32.532127 containerd[1717]: 2025-09-04 00:06:32.517 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" Namespace="calico-system" Pod="whisker-78cd5b9d79-4tbhd" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0", GenerateName:"whisker-78cd5b9d79-", Namespace:"calico-system", SelfLink:"", UID:"6a86091a-5fbe-4cf5-b180-8ddfe445f64c", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78cd5b9d79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d", Pod:"whisker-78cd5b9d79-4tbhd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia7ad143706b", MAC:"9a:c6:94:c6:db:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:32.532177 containerd[1717]: 2025-09-04 00:06:32.529 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" Namespace="calico-system" Pod="whisker-78cd5b9d79-4tbhd" 
WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-whisker--78cd5b9d79--4tbhd-eth0" Sep 4 00:06:32.568099 containerd[1717]: time="2025-09-04T00:06:32.568049182Z" level=info msg="connecting to shim feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d" address="unix:///run/containerd/s/10965a3c5b856a9930cfff112ecc2b89d9bdd6e7f92b2e887d8a42e0dd7d94f9" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:32.585774 systemd[1]: Started cri-containerd-feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d.scope - libcontainer container feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d. Sep 4 00:06:32.620307 containerd[1717]: time="2025-09-04T00:06:32.620272576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cd5b9d79-4tbhd,Uid:6a86091a-5fbe-4cf5-b180-8ddfe445f64c,Namespace:calico-system,Attempt:0,} returns sandbox id \"feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d\"" Sep 4 00:06:32.621748 containerd[1717]: time="2025-09-04T00:06:32.621723960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 00:06:32.926031 kubelet[3131]: I0904 00:06:32.926000 3131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfcef207-3e31-425d-a942-e646548681b1" path="/var/lib/kubelet/pods/cfcef207-3e31-425d-a942-e646548681b1/volumes" Sep 4 00:06:33.154008 containerd[1717]: time="2025-09-04T00:06:33.153941737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb\" id:\"55c14e1b270643094ed467a22fe158cf4c8150e07558990474d274a8db056f81\" pid:4371 exit_status:1 exited_at:{seconds:1756944393 nanos:153769395}" Sep 4 00:06:33.785749 systemd-networkd[1514]: calia7ad143706b: Gained IPv6LL Sep 4 00:06:33.870443 containerd[1717]: time="2025-09-04T00:06:33.870415455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Sep 4 00:06:33.872925 containerd[1717]: time="2025-09-04T00:06:33.872898167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 4 00:06:33.875493 containerd[1717]: time="2025-09-04T00:06:33.875455232Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:33.879349 containerd[1717]: time="2025-09-04T00:06:33.879311571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:33.879662 containerd[1717]: time="2025-09-04T00:06:33.879640962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.257533047s" Sep 4 00:06:33.879705 containerd[1717]: time="2025-09-04T00:06:33.879669739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 00:06:33.881832 containerd[1717]: time="2025-09-04T00:06:33.881706419Z" level=info msg="CreateContainer within sandbox \"feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 00:06:33.898834 containerd[1717]: time="2025-09-04T00:06:33.898699216Z" level=info msg="Container 58502e17fb12c13a39ca223a5de3e3d146bb4f2da80f3e1c5b9e97ebd9112f52: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:33.901694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4208221324.mount: 
Deactivated successfully. Sep 4 00:06:33.921122 containerd[1717]: time="2025-09-04T00:06:33.921102503Z" level=info msg="CreateContainer within sandbox \"feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"58502e17fb12c13a39ca223a5de3e3d146bb4f2da80f3e1c5b9e97ebd9112f52\"" Sep 4 00:06:33.922798 containerd[1717]: time="2025-09-04T00:06:33.922781566Z" level=info msg="StartContainer for \"58502e17fb12c13a39ca223a5de3e3d146bb4f2da80f3e1c5b9e97ebd9112f52\"" Sep 4 00:06:33.924473 containerd[1717]: time="2025-09-04T00:06:33.924432172Z" level=info msg="connecting to shim 58502e17fb12c13a39ca223a5de3e3d146bb4f2da80f3e1c5b9e97ebd9112f52" address="unix:///run/containerd/s/10965a3c5b856a9930cfff112ecc2b89d9bdd6e7f92b2e887d8a42e0dd7d94f9" protocol=ttrpc version=3 Sep 4 00:06:33.951762 systemd[1]: Started cri-containerd-58502e17fb12c13a39ca223a5de3e3d146bb4f2da80f3e1c5b9e97ebd9112f52.scope - libcontainer container 58502e17fb12c13a39ca223a5de3e3d146bb4f2da80f3e1c5b9e97ebd9112f52. Sep 4 00:06:33.992194 containerd[1717]: time="2025-09-04T00:06:33.991741183Z" level=info msg="StartContainer for \"58502e17fb12c13a39ca223a5de3e3d146bb4f2da80f3e1c5b9e97ebd9112f52\" returns successfully" Sep 4 00:06:33.992803 containerd[1717]: time="2025-09-04T00:06:33.992785236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 4 00:06:35.942545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122945349.mount: Deactivated successfully. 
Sep 4 00:06:35.994448 containerd[1717]: time="2025-09-04T00:06:35.994421701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:35.997438 containerd[1717]: time="2025-09-04T00:06:35.997405027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 4 00:06:36.000686 containerd[1717]: time="2025-09-04T00:06:36.000648008Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:36.005345 containerd[1717]: time="2025-09-04T00:06:36.005315553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:36.005749 containerd[1717]: time="2025-09-04T00:06:36.005731431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.012920269s" Sep 4 00:06:36.005811 containerd[1717]: time="2025-09-04T00:06:36.005801031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 4 00:06:36.007977 containerd[1717]: time="2025-09-04T00:06:36.007955525Z" level=info msg="CreateContainer within sandbox \"feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 4 00:06:36.025019 
containerd[1717]: time="2025-09-04T00:06:36.024223929Z" level=info msg="Container 237a02fd2f6e0fc39d4c998b0ae171b47d115ac89c991c7528c454ae93b69571: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:36.040469 containerd[1717]: time="2025-09-04T00:06:36.040440434Z" level=info msg="CreateContainer within sandbox \"feaf3fd6590e94d0c4834354c9ae0f5c4425416db6e922f8a12715e355264b7d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"237a02fd2f6e0fc39d4c998b0ae171b47d115ac89c991c7528c454ae93b69571\"" Sep 4 00:06:36.041099 containerd[1717]: time="2025-09-04T00:06:36.041082307Z" level=info msg="StartContainer for \"237a02fd2f6e0fc39d4c998b0ae171b47d115ac89c991c7528c454ae93b69571\"" Sep 4 00:06:36.042264 containerd[1717]: time="2025-09-04T00:06:36.042243799Z" level=info msg="connecting to shim 237a02fd2f6e0fc39d4c998b0ae171b47d115ac89c991c7528c454ae93b69571" address="unix:///run/containerd/s/10965a3c5b856a9930cfff112ecc2b89d9bdd6e7f92b2e887d8a42e0dd7d94f9" protocol=ttrpc version=3 Sep 4 00:06:36.065773 systemd[1]: Started cri-containerd-237a02fd2f6e0fc39d4c998b0ae171b47d115ac89c991c7528c454ae93b69571.scope - libcontainer container 237a02fd2f6e0fc39d4c998b0ae171b47d115ac89c991c7528c454ae93b69571. 
Sep 4 00:06:36.107294 containerd[1717]: time="2025-09-04T00:06:36.107273051Z" level=info msg="StartContainer for \"237a02fd2f6e0fc39d4c998b0ae171b47d115ac89c991c7528c454ae93b69571\" returns successfully" Sep 4 00:06:37.065342 kubelet[3131]: I0904 00:06:37.065303 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-78cd5b9d79-4tbhd" podStartSLOduration=1.6802324990000002 podStartE2EDuration="5.065289897s" podCreationTimestamp="2025-09-04 00:06:32 +0000 UTC" firstStartedPulling="2025-09-04 00:06:32.621247148 +0000 UTC m=+41.775094538" lastFinishedPulling="2025-09-04 00:06:36.006304546 +0000 UTC m=+45.160151936" observedRunningTime="2025-09-04 00:06:37.064595434 +0000 UTC m=+46.218442839" watchObservedRunningTime="2025-09-04 00:06:37.065289897 +0000 UTC m=+46.219137299" Sep 4 00:06:37.922170 containerd[1717]: time="2025-09-04T00:06:37.922147100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745d5857c4-7v6zj,Uid:879a3434-5495-4c03-83be-8761399ce6fc,Namespace:calico-apiserver,Attempt:0,}" Sep 4 00:06:37.922405 containerd[1717]: time="2025-09-04T00:06:37.922192638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5t4zk,Uid:6ab12f71-4124-4df3-9aa6-7b3ea03b921f,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:38.022893 systemd-networkd[1514]: cali76c97a1a9e4: Link UP Sep 4 00:06:38.023285 systemd-networkd[1514]: cali76c97a1a9e4: Gained carrier Sep 4 00:06:38.035709 containerd[1717]: 2025-09-04 00:06:37.957 [INFO][4549] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:06:38.035709 containerd[1717]: 2025-09-04 00:06:37.965 [INFO][4549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0 calico-apiserver-745d5857c4- calico-apiserver 879a3434-5495-4c03-83be-8761399ce6fc 799 0 2025-09-04 00:06:13 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:745d5857c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-n-f08c63113b calico-apiserver-745d5857c4-7v6zj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali76c97a1a9e4 [] [] }} ContainerID="416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-7v6zj" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-" Sep 4 00:06:38.035709 containerd[1717]: 2025-09-04 00:06:37.966 [INFO][4549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-7v6zj" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0" Sep 4 00:06:38.035709 containerd[1717]: 2025-09-04 00:06:37.991 [INFO][4578] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" HandleID="k8s-pod-network.416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" Workload="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0" Sep 4 00:06:38.035840 containerd[1717]: 2025-09-04 00:06:37.991 [INFO][4578] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" HandleID="k8s-pod-network.416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" Workload="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-n-f08c63113b", 
"pod":"calico-apiserver-745d5857c4-7v6zj", "timestamp":"2025-09-04 00:06:37.991274074 +0000 UTC"}, Hostname:"ci-4372.1.0-n-f08c63113b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:38.035840 containerd[1717]: 2025-09-04 00:06:37.991 [INFO][4578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:38.035840 containerd[1717]: 2025-09-04 00:06:37.991 [INFO][4578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:06:38.035840 containerd[1717]: 2025-09-04 00:06:37.991 [INFO][4578] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-f08c63113b' Sep 4 00:06:38.035840 containerd[1717]: 2025-09-04 00:06:37.996 [INFO][4578] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.035840 containerd[1717]: 2025-09-04 00:06:37.999 [INFO][4578] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.035840 containerd[1717]: 2025-09-04 00:06:38.002 [INFO][4578] ipam/ipam.go 511: Trying affinity for 192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.035840 containerd[1717]: 2025-09-04 00:06:38.003 [INFO][4578] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.035840 containerd[1717]: 2025-09-04 00:06:38.004 [INFO][4578] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.035974 containerd[1717]: 2025-09-04 00:06:38.004 [INFO][4578] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" 
host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.035974 containerd[1717]: 2025-09-04 00:06:38.005 [INFO][4578] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84 Sep 4 00:06:38.035974 containerd[1717]: 2025-09-04 00:06:38.008 [INFO][4578] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.035974 containerd[1717]: 2025-09-04 00:06:38.016 [INFO][4578] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.2/26] block=192.168.26.0/26 handle="k8s-pod-network.416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.035974 containerd[1717]: 2025-09-04 00:06:38.016 [INFO][4578] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.2/26] handle="k8s-pod-network.416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.035974 containerd[1717]: 2025-09-04 00:06:38.016 [INFO][4578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:06:38.035974 containerd[1717]: 2025-09-04 00:06:38.016 [INFO][4578] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.2/26] IPv6=[] ContainerID="416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" HandleID="k8s-pod-network.416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" Workload="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0" Sep 4 00:06:38.036077 containerd[1717]: 2025-09-04 00:06:38.017 [INFO][4549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-7v6zj" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0", GenerateName:"calico-apiserver-745d5857c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"879a3434-5495-4c03-83be-8761399ce6fc", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745d5857c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"", Pod:"calico-apiserver-745d5857c4-7v6zj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76c97a1a9e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:38.036112 containerd[1717]: 2025-09-04 00:06:38.018 [INFO][4549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.2/32] ContainerID="416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-7v6zj" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0" Sep 4 00:06:38.036112 containerd[1717]: 2025-09-04 00:06:38.018 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76c97a1a9e4 ContainerID="416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-7v6zj" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0" Sep 4 00:06:38.036112 containerd[1717]: 2025-09-04 00:06:38.022 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-7v6zj" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0" Sep 4 00:06:38.036150 containerd[1717]: 2025-09-04 00:06:38.023 [INFO][4549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-7v6zj" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0", GenerateName:"calico-apiserver-745d5857c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"879a3434-5495-4c03-83be-8761399ce6fc", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745d5857c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84", Pod:"calico-apiserver-745d5857c4-7v6zj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76c97a1a9e4", MAC:"0a:98:7d:5c:9d:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:38.036186 containerd[1717]: 2025-09-04 00:06:38.034 [INFO][4549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-7v6zj" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--7v6zj-eth0" Sep 4 00:06:38.075186 containerd[1717]: time="2025-09-04T00:06:38.075106918Z" level=info 
msg="connecting to shim 416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84" address="unix:///run/containerd/s/5b9f478db6f62610b545ff29094a29240bc6a00dc70fba2f7a748dfef059c0f6" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:38.092753 systemd[1]: Started cri-containerd-416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84.scope - libcontainer container 416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84. Sep 4 00:06:38.132309 systemd-networkd[1514]: caliae4c81c1875: Link UP Sep 4 00:06:38.132665 systemd-networkd[1514]: caliae4c81c1875: Gained carrier Sep 4 00:06:38.143412 containerd[1717]: time="2025-09-04T00:06:38.143288935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745d5857c4-7v6zj,Uid:879a3434-5495-4c03-83be-8761399ce6fc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84\"" Sep 4 00:06:38.145708 containerd[1717]: time="2025-09-04T00:06:38.145432935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 00:06:38.154278 containerd[1717]: 2025-09-04 00:06:37.963 [INFO][4553] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 00:06:38.154278 containerd[1717]: 2025-09-04 00:06:37.972 [INFO][4553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0 coredns-7c65d6cfc9- kube-system 6ab12f71-4124-4df3-9aa6-7b3ea03b921f 795 0 2025-09-04 00:05:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-n-f08c63113b coredns-7c65d6cfc9-5t4zk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliae4c81c1875 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5t4zk" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-" Sep 4 00:06:38.154278 containerd[1717]: 2025-09-04 00:06:37.972 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5t4zk" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0" Sep 4 00:06:38.154278 containerd[1717]: 2025-09-04 00:06:37.998 [INFO][4584] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" HandleID="k8s-pod-network.a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" Workload="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0" Sep 4 00:06:38.154418 containerd[1717]: 2025-09-04 00:06:37.998 [INFO][4584] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" HandleID="k8s-pod-network.a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" Workload="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000259100), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-n-f08c63113b", "pod":"coredns-7c65d6cfc9-5t4zk", "timestamp":"2025-09-04 00:06:37.9986831 +0000 UTC"}, Hostname:"ci-4372.1.0-n-f08c63113b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:38.154418 containerd[1717]: 2025-09-04 00:06:37.999 [INFO][4584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 4 00:06:38.154418 containerd[1717]: 2025-09-04 00:06:38.016 [INFO][4584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:06:38.154418 containerd[1717]: 2025-09-04 00:06:38.016 [INFO][4584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-f08c63113b' Sep 4 00:06:38.154418 containerd[1717]: 2025-09-04 00:06:38.099 [INFO][4584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.154418 containerd[1717]: 2025-09-04 00:06:38.105 [INFO][4584] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.154418 containerd[1717]: 2025-09-04 00:06:38.110 [INFO][4584] ipam/ipam.go 511: Trying affinity for 192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.154418 containerd[1717]: 2025-09-04 00:06:38.112 [INFO][4584] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.154418 containerd[1717]: 2025-09-04 00:06:38.114 [INFO][4584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.154555 containerd[1717]: 2025-09-04 00:06:38.114 [INFO][4584] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.154555 containerd[1717]: 2025-09-04 00:06:38.115 [INFO][4584] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7 Sep 4 00:06:38.154555 containerd[1717]: 2025-09-04 00:06:38.119 [INFO][4584] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" 
host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.154555 containerd[1717]: 2025-09-04 00:06:38.128 [INFO][4584] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.3/26] block=192.168.26.0/26 handle="k8s-pod-network.a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.154555 containerd[1717]: 2025-09-04 00:06:38.128 [INFO][4584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.3/26] handle="k8s-pod-network.a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:38.154555 containerd[1717]: 2025-09-04 00:06:38.128 [INFO][4584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:38.154555 containerd[1717]: 2025-09-04 00:06:38.128 [INFO][4584] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.3/26] IPv6=[] ContainerID="a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" HandleID="k8s-pod-network.a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" Workload="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0" Sep 4 00:06:38.154667 containerd[1717]: 2025-09-04 00:06:38.130 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5t4zk" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ab12f71-4124-4df3-9aa6-7b3ea03b921f", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"", Pod:"coredns-7c65d6cfc9-5t4zk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae4c81c1875", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:38.154667 containerd[1717]: 2025-09-04 00:06:38.130 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.3/32] ContainerID="a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5t4zk" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0" Sep 4 00:06:38.154667 containerd[1717]: 2025-09-04 00:06:38.130 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae4c81c1875 ContainerID="a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5t4zk" 
WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0" Sep 4 00:06:38.154667 containerd[1717]: 2025-09-04 00:06:38.132 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5t4zk" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0" Sep 4 00:06:38.154667 containerd[1717]: 2025-09-04 00:06:38.133 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5t4zk" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ab12f71-4124-4df3-9aa6-7b3ea03b921f", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7", Pod:"coredns-7c65d6cfc9-5t4zk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae4c81c1875", MAC:"02:4f:70:f0:58:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:38.154667 containerd[1717]: 2025-09-04 00:06:38.153 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5t4zk" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--5t4zk-eth0" Sep 4 00:06:38.188290 containerd[1717]: time="2025-09-04T00:06:38.188209216Z" level=info msg="connecting to shim a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7" address="unix:///run/containerd/s/428b3bb8705a2ebf594d32c732b4ccafd5db7a91b7d72f77fe52b03a33219b3a" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:38.203724 systemd[1]: Started cri-containerd-a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7.scope - libcontainer container a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7. 
Sep 4 00:06:38.235207 containerd[1717]: time="2025-09-04T00:06:38.235155566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5t4zk,Uid:6ab12f71-4124-4df3-9aa6-7b3ea03b921f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7\"" Sep 4 00:06:38.237630 containerd[1717]: time="2025-09-04T00:06:38.237059116Z" level=info msg="CreateContainer within sandbox \"a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:06:38.253094 containerd[1717]: time="2025-09-04T00:06:38.253065819Z" level=info msg="Container f8a11c38a1e6468eb7832937fd619845a30a72f476a4bcc7a9402aeacb165a4c: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:38.268494 containerd[1717]: time="2025-09-04T00:06:38.268465721Z" level=info msg="CreateContainer within sandbox \"a6c6bb09206556ea5b4399f4acb465785a623438beb333ca3c1156d854fdb3f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8a11c38a1e6468eb7832937fd619845a30a72f476a4bcc7a9402aeacb165a4c\"" Sep 4 00:06:38.268829 containerd[1717]: time="2025-09-04T00:06:38.268814099Z" level=info msg="StartContainer for \"f8a11c38a1e6468eb7832937fd619845a30a72f476a4bcc7a9402aeacb165a4c\"" Sep 4 00:06:38.269621 containerd[1717]: time="2025-09-04T00:06:38.269577767Z" level=info msg="connecting to shim f8a11c38a1e6468eb7832937fd619845a30a72f476a4bcc7a9402aeacb165a4c" address="unix:///run/containerd/s/428b3bb8705a2ebf594d32c732b4ccafd5db7a91b7d72f77fe52b03a33219b3a" protocol=ttrpc version=3 Sep 4 00:06:38.288899 systemd[1]: Started cri-containerd-f8a11c38a1e6468eb7832937fd619845a30a72f476a4bcc7a9402aeacb165a4c.scope - libcontainer container f8a11c38a1e6468eb7832937fd619845a30a72f476a4bcc7a9402aeacb165a4c. 
Sep 4 00:06:38.316088 containerd[1717]: time="2025-09-04T00:06:38.316070979Z" level=info msg="StartContainer for \"f8a11c38a1e6468eb7832937fd619845a30a72f476a4bcc7a9402aeacb165a4c\" returns successfully"
Sep 4 00:06:38.866815 kubelet[3131]: I0904 00:06:38.866709 3131 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 00:06:39.070580 kubelet[3131]: I0904 00:06:39.070527 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5t4zk" podStartSLOduration=43.070510975 podStartE2EDuration="43.070510975s" podCreationTimestamp="2025-09-04 00:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:39.070314043 +0000 UTC m=+48.224161447" watchObservedRunningTime="2025-09-04 00:06:39.070510975 +0000 UTC m=+48.224358427"
Sep 4 00:06:39.098669 systemd-networkd[1514]: cali76c97a1a9e4: Gained IPv6LL
Sep 4 00:06:39.354688 systemd-networkd[1514]: caliae4c81c1875: Gained IPv6LL
Sep 4 00:06:39.665016 systemd-networkd[1514]: vxlan.calico: Link UP
Sep 4 00:06:39.665022 systemd-networkd[1514]: vxlan.calico: Gained carrier
Sep 4 00:06:40.274912 containerd[1717]: time="2025-09-04T00:06:40.274882782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:06:40.277300 containerd[1717]: time="2025-09-04T00:06:40.277267530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 4 00:06:40.279845 containerd[1717]: time="2025-09-04T00:06:40.279809599Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:06:40.283312 containerd[1717]: time="2025-09-04T00:06:40.283136803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:06:40.283622 containerd[1717]: time="2025-09-04T00:06:40.283582695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.138119383s"
Sep 4 00:06:40.283666 containerd[1717]: time="2025-09-04T00:06:40.283635939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 4 00:06:40.285585 containerd[1717]: time="2025-09-04T00:06:40.285561907Z" level=info msg="CreateContainer within sandbox \"416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 00:06:40.304616 containerd[1717]: time="2025-09-04T00:06:40.304384375Z" level=info msg="Container f60fbd981b078fb61b511e222abe6002a6f8f2b62b4f071b3b1e8f3c1e65105e: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:06:40.322620 containerd[1717]: time="2025-09-04T00:06:40.322488408Z" level=info msg="CreateContainer within sandbox \"416a407367bc802b011fbad5c4e184458d05dd272a0782bd38fe5deedaa06a84\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f60fbd981b078fb61b511e222abe6002a6f8f2b62b4f071b3b1e8f3c1e65105e\""
Sep 4 00:06:40.323111 containerd[1717]: time="2025-09-04T00:06:40.323092335Z" level=info msg="StartContainer for \"f60fbd981b078fb61b511e222abe6002a6f8f2b62b4f071b3b1e8f3c1e65105e\""
Sep 4 00:06:40.324121 containerd[1717]: time="2025-09-04T00:06:40.324099079Z" level=info msg="connecting to shim f60fbd981b078fb61b511e222abe6002a6f8f2b62b4f071b3b1e8f3c1e65105e" address="unix:///run/containerd/s/5b9f478db6f62610b545ff29094a29240bc6a00dc70fba2f7a748dfef059c0f6" protocol=ttrpc version=3
Sep 4 00:06:40.347727 systemd[1]: Started cri-containerd-f60fbd981b078fb61b511e222abe6002a6f8f2b62b4f071b3b1e8f3c1e65105e.scope - libcontainer container f60fbd981b078fb61b511e222abe6002a6f8f2b62b4f071b3b1e8f3c1e65105e.
Sep 4 00:06:40.386916 containerd[1717]: time="2025-09-04T00:06:40.386884128Z" level=info msg="StartContainer for \"f60fbd981b078fb61b511e222abe6002a6f8f2b62b4f071b3b1e8f3c1e65105e\" returns successfully"
Sep 4 00:06:40.922929 containerd[1717]: time="2025-09-04T00:06:40.922778254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745d5857c4-4llsq,Uid:1b827302-779d-4aaf-bb2d-0b145bc49c1e,Namespace:calico-apiserver,Attempt:0,}"
Sep 4 00:06:40.922929 containerd[1717]: time="2025-09-04T00:06:40.922871303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fwqf2,Uid:7305ad0a-ab78-4e36-bd2e-b494a556ee8c,Namespace:kube-system,Attempt:0,}"
Sep 4 00:06:40.924113 containerd[1717]: time="2025-09-04T00:06:40.924094074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-t4mp9,Uid:6da523f2-f37a-41ae-8a42-7d012bf2a528,Namespace:calico-system,Attempt:0,}"
Sep 4 00:06:41.109285 kubelet[3131]: I0904 00:06:41.109135 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-745d5857c4-7v6zj" podStartSLOduration=25.970059594 podStartE2EDuration="28.109118084s" podCreationTimestamp="2025-09-04 00:06:13 +0000 UTC" firstStartedPulling="2025-09-04 00:06:38.145113381 +0000 UTC m=+47.298960776" lastFinishedPulling="2025-09-04 00:06:40.284171873 +0000 UTC m=+49.438019266" observedRunningTime="2025-09-04 00:06:41.107634577 +0000 UTC m=+50.261481978" watchObservedRunningTime="2025-09-04 00:06:41.109118084 +0000 UTC m=+50.262965486"
Sep 4 00:06:41.166590
systemd-networkd[1514]: calif8202e415f1: Link UP Sep 4 00:06:41.168343 systemd-networkd[1514]: calif8202e415f1: Gained carrier Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:40.997 [INFO][4932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0 calico-apiserver-745d5857c4- calico-apiserver 1b827302-779d-4aaf-bb2d-0b145bc49c1e 804 0 2025-09-04 00:06:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:745d5857c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-n-f08c63113b calico-apiserver-745d5857c4-4llsq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif8202e415f1 [] [] }} ContainerID="8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-4llsq" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-" Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:40.997 [INFO][4932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-4llsq" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0" Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.069 [INFO][4974] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" HandleID="k8s-pod-network.8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" Workload="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0" Sep 4 00:06:41.191874 containerd[1717]: 
2025-09-04 00:06:41.069 [INFO][4974] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" HandleID="k8s-pod-network.8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" Workload="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dfd90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-n-f08c63113b", "pod":"calico-apiserver-745d5857c4-4llsq", "timestamp":"2025-09-04 00:06:41.060354444 +0000 UTC"}, Hostname:"ci-4372.1.0-n-f08c63113b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.069 [INFO][4974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.069 [INFO][4974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.070 [INFO][4974] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-f08c63113b' Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.098 [INFO][4974] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.116 [INFO][4974] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.126 [INFO][4974] ipam/ipam.go 511: Trying affinity for 192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.130 [INFO][4974] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.133 [INFO][4974] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.133 [INFO][4974] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.138 [INFO][4974] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785 Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.145 [INFO][4974] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.156 [INFO][4974] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.26.4/26] block=192.168.26.0/26 handle="k8s-pod-network.8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.156 [INFO][4974] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.4/26] handle="k8s-pod-network.8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.156 [INFO][4974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:41.191874 containerd[1717]: 2025-09-04 00:06:41.156 [INFO][4974] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.4/26] IPv6=[] ContainerID="8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" HandleID="k8s-pod-network.8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" Workload="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0" Sep 4 00:06:41.192352 containerd[1717]: 2025-09-04 00:06:41.159 [INFO][4932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-4llsq" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0", GenerateName:"calico-apiserver-745d5857c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b827302-779d-4aaf-bb2d-0b145bc49c1e", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"745d5857c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"", Pod:"calico-apiserver-745d5857c4-4llsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8202e415f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:41.192352 containerd[1717]: 2025-09-04 00:06:41.159 [INFO][4932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.4/32] ContainerID="8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-4llsq" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0" Sep 4 00:06:41.192352 containerd[1717]: 2025-09-04 00:06:41.159 [INFO][4932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8202e415f1 ContainerID="8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-4llsq" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0" Sep 4 00:06:41.192352 containerd[1717]: 2025-09-04 00:06:41.170 [INFO][4932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-4llsq" 
WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0" Sep 4 00:06:41.192352 containerd[1717]: 2025-09-04 00:06:41.171 [INFO][4932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-4llsq" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0", GenerateName:"calico-apiserver-745d5857c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b827302-779d-4aaf-bb2d-0b145bc49c1e", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745d5857c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785", Pod:"calico-apiserver-745d5857c4-4llsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8202e415f1", MAC:"c6:9a:52:13:22:0a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 4 00:06:41.192352 containerd[1717]: 2025-09-04 00:06:41.189 [INFO][4932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" Namespace="calico-apiserver" Pod="calico-apiserver-745d5857c4-4llsq" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--apiserver--745d5857c4--4llsq-eth0"
Sep 4 00:06:41.243978 containerd[1717]: time="2025-09-04T00:06:41.243663624Z" level=info msg="connecting to shim 8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785" address="unix:///run/containerd/s/c47275f3ef118a18a64acf4ef1f3da07f37d72f701b5a051d58d977dc1ff83f9" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:06:41.258659 systemd-networkd[1514]: cali8d61a8c9828: Link UP
Sep 4 00:06:41.259709 systemd-networkd[1514]: cali8d61a8c9828: Gained carrier
Sep 4 00:06:41.280747 systemd[1]: Started cri-containerd-8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785.scope - libcontainer container 8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785.
Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.004 [INFO][4941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0 coredns-7c65d6cfc9- kube-system 7305ad0a-ab78-4e36-bd2e-b494a556ee8c 802 0 2025-09-04 00:05:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-n-f08c63113b coredns-7c65d6cfc9-fwqf2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8d61a8c9828 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fwqf2" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-" Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.004 [INFO][4941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fwqf2" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0" Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.079 [INFO][4968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" HandleID="k8s-pod-network.c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" Workload="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0" Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.079 [INFO][4968] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" HandleID="k8s-pod-network.c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" 
Workload="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036c5c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-n-f08c63113b", "pod":"coredns-7c65d6cfc9-fwqf2", "timestamp":"2025-09-04 00:06:41.078991345 +0000 UTC"}, Hostname:"ci-4372.1.0-n-f08c63113b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.079 [INFO][4968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.156 [INFO][4968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.156 [INFO][4968] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-f08c63113b' Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.197 [INFO][4968] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.215 [INFO][4968] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.223 [INFO][4968] ipam/ipam.go 511: Trying affinity for 192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.225 [INFO][4968] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.227 [INFO][4968] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.284876 
containerd[1717]: 2025-09-04 00:06:41.227 [INFO][4968] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.229 [INFO][4968] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5 Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.238 [INFO][4968] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.245 [INFO][4968] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.5/26] block=192.168.26.0/26 handle="k8s-pod-network.c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.245 [INFO][4968] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.5/26] handle="k8s-pod-network.c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.245 [INFO][4968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 00:06:41.284876 containerd[1717]: 2025-09-04 00:06:41.245 [INFO][4968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.5/26] IPv6=[] ContainerID="c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" HandleID="k8s-pod-network.c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" Workload="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0" Sep 4 00:06:41.285471 containerd[1717]: 2025-09-04 00:06:41.251 [INFO][4941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fwqf2" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7305ad0a-ab78-4e36-bd2e-b494a556ee8c", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"", Pod:"coredns-7c65d6cfc9-fwqf2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali8d61a8c9828", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:41.285471 containerd[1717]: 2025-09-04 00:06:41.252 [INFO][4941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.5/32] ContainerID="c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fwqf2" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0" Sep 4 00:06:41.285471 containerd[1717]: 2025-09-04 00:06:41.253 [INFO][4941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d61a8c9828 ContainerID="c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fwqf2" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0" Sep 4 00:06:41.285471 containerd[1717]: 2025-09-04 00:06:41.260 [INFO][4941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fwqf2" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0" Sep 4 00:06:41.285471 containerd[1717]: 2025-09-04 00:06:41.260 [INFO][4941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fwqf2" 
WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7305ad0a-ab78-4e36-bd2e-b494a556ee8c", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 5, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5", Pod:"coredns-7c65d6cfc9-fwqf2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d61a8c9828", MAC:"62:ff:02:9b:51:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:41.285471 containerd[1717]: 
2025-09-04 00:06:41.280 [INFO][4941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fwqf2" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-coredns--7c65d6cfc9--fwqf2-eth0"
Sep 4 00:06:41.329206 containerd[1717]: time="2025-09-04T00:06:41.329151933Z" level=info msg="connecting to shim c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5" address="unix:///run/containerd/s/adba5ed1d38c1b82dcc4e3e12cad13bea13843d53791f46e37e2d2c91a0a4b80" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:06:41.358261 containerd[1717]: time="2025-09-04T00:06:41.358179592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745d5857c4-4llsq,Uid:1b827302-779d-4aaf-bb2d-0b145bc49c1e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785\""
Sep 4 00:06:41.366865 systemd[1]: Started cri-containerd-c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5.scope - libcontainer container c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5.
Sep 4 00:06:41.374073 systemd-networkd[1514]: cali33041836032: Link UP Sep 4 00:06:41.374898 containerd[1717]: time="2025-09-04T00:06:41.374845925Z" level=info msg="CreateContainer within sandbox \"8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 00:06:41.376574 systemd-networkd[1514]: cali33041836032: Gained carrier Sep 4 00:06:41.393963 containerd[1717]: time="2025-09-04T00:06:41.393153646Z" level=info msg="Container c8409a45c1b4537cef4a5982e27d9e07f2737ff6630f82aea952f75313b1f541: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.020 [INFO][4951] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0 goldmane-7988f88666- calico-system 6da523f2-f37a-41ae-8a42-7d012bf2a528 806 0 2025-09-04 00:06:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.1.0-n-f08c63113b goldmane-7988f88666-t4mp9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali33041836032 [] [] }} ContainerID="76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" Namespace="calico-system" Pod="goldmane-7988f88666-t4mp9" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.021 [INFO][4951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" Namespace="calico-system" Pod="goldmane-7988f88666-t4mp9" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.087 [INFO][4982] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" HandleID="k8s-pod-network.76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" Workload="ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.087 [INFO][4982] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" HandleID="k8s-pod-network.76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" Workload="ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df670), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-f08c63113b", "pod":"goldmane-7988f88666-t4mp9", "timestamp":"2025-09-04 00:06:41.08724253 +0000 UTC"}, Hostname:"ci-4372.1.0-n-f08c63113b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.087 [INFO][4982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.245 [INFO][4982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.245 [INFO][4982] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-f08c63113b' Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.290 [INFO][4982] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.317 [INFO][4982] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.324 [INFO][4982] ipam/ipam.go 511: Trying affinity for 192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.325 [INFO][4982] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.328 [INFO][4982] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.328 [INFO][4982] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.330 [INFO][4982] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613 Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.338 [INFO][4982] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.354 [INFO][4982] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.26.6/26] block=192.168.26.0/26 handle="k8s-pod-network.76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.354 [INFO][4982] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.6/26] handle="k8s-pod-network.76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.354 [INFO][4982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:41.402387 containerd[1717]: 2025-09-04 00:06:41.354 [INFO][4982] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.6/26] IPv6=[] ContainerID="76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" HandleID="k8s-pod-network.76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" Workload="ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0" Sep 4 00:06:41.404589 containerd[1717]: 2025-09-04 00:06:41.357 [INFO][4951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" Namespace="calico-system" Pod="goldmane-7988f88666-t4mp9" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6da523f2-f37a-41ae-8a42-7d012bf2a528", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"", Pod:"goldmane-7988f88666-t4mp9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33041836032", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:41.404589 containerd[1717]: 2025-09-04 00:06:41.357 [INFO][4951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.6/32] ContainerID="76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" Namespace="calico-system" Pod="goldmane-7988f88666-t4mp9" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0" Sep 4 00:06:41.404589 containerd[1717]: 2025-09-04 00:06:41.358 [INFO][4951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33041836032 ContainerID="76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" Namespace="calico-system" Pod="goldmane-7988f88666-t4mp9" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0" Sep 4 00:06:41.404589 containerd[1717]: 2025-09-04 00:06:41.378 [INFO][4951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" Namespace="calico-system" Pod="goldmane-7988f88666-t4mp9" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0" Sep 4 00:06:41.404589 containerd[1717]: 2025-09-04 00:06:41.378 [INFO][4951] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" Namespace="calico-system" Pod="goldmane-7988f88666-t4mp9" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"6da523f2-f37a-41ae-8a42-7d012bf2a528", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613", Pod:"goldmane-7988f88666-t4mp9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33041836032", MAC:"aa:a7:2a:26:ec:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:41.404589 containerd[1717]: 2025-09-04 00:06:41.397 [INFO][4951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" Namespace="calico-system" 
Pod="goldmane-7988f88666-t4mp9" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-goldmane--7988f88666--t4mp9-eth0" Sep 4 00:06:41.418269 containerd[1717]: time="2025-09-04T00:06:41.418167978Z" level=info msg="CreateContainer within sandbox \"8bd757dc9f6688321fd1054f6931dd2de394f96d8b7b07d8de7e164f3e123785\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c8409a45c1b4537cef4a5982e27d9e07f2737ff6630f82aea952f75313b1f541\"" Sep 4 00:06:41.418803 containerd[1717]: time="2025-09-04T00:06:41.418783476Z" level=info msg="StartContainer for \"c8409a45c1b4537cef4a5982e27d9e07f2737ff6630f82aea952f75313b1f541\"" Sep 4 00:06:41.420926 containerd[1717]: time="2025-09-04T00:06:41.420899220Z" level=info msg="connecting to shim c8409a45c1b4537cef4a5982e27d9e07f2737ff6630f82aea952f75313b1f541" address="unix:///run/containerd/s/c47275f3ef118a18a64acf4ef1f3da07f37d72f701b5a051d58d977dc1ff83f9" protocol=ttrpc version=3 Sep 4 00:06:41.428713 containerd[1717]: time="2025-09-04T00:06:41.428689225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fwqf2,Uid:7305ad0a-ab78-4e36-bd2e-b494a556ee8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5\"" Sep 4 00:06:41.432835 containerd[1717]: time="2025-09-04T00:06:41.432761349Z" level=info msg="CreateContainer within sandbox \"c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:06:41.442738 systemd[1]: Started cri-containerd-c8409a45c1b4537cef4a5982e27d9e07f2737ff6630f82aea952f75313b1f541.scope - libcontainer container c8409a45c1b4537cef4a5982e27d9e07f2737ff6630f82aea952f75313b1f541. 
Sep 4 00:06:41.461400 containerd[1717]: time="2025-09-04T00:06:41.461359599Z" level=info msg="connecting to shim 76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613" address="unix:///run/containerd/s/ad22d4e8b3ba31515e6ac86793b904e2e3ded6830f1bc2a833ed522de6b247ae" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:41.464312 containerd[1717]: time="2025-09-04T00:06:41.464289193Z" level=info msg="Container 31933274caa67cc06128eb0790c5eee46188958430df5f4d32bd04801f806477: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:41.484407 containerd[1717]: time="2025-09-04T00:06:41.484380350Z" level=info msg="CreateContainer within sandbox \"c89b771d75adb4b6325ae823a2b86e0066b2ee2180e4782dc6e4f17f9c94b0e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31933274caa67cc06128eb0790c5eee46188958430df5f4d32bd04801f806477\"" Sep 4 00:06:41.484769 containerd[1717]: time="2025-09-04T00:06:41.484750119Z" level=info msg="StartContainer for \"31933274caa67cc06128eb0790c5eee46188958430df5f4d32bd04801f806477\"" Sep 4 00:06:41.485762 systemd[1]: Started cri-containerd-76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613.scope - libcontainer container 76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613. Sep 4 00:06:41.487073 containerd[1717]: time="2025-09-04T00:06:41.487036214Z" level=info msg="connecting to shim 31933274caa67cc06128eb0790c5eee46188958430df5f4d32bd04801f806477" address="unix:///run/containerd/s/adba5ed1d38c1b82dcc4e3e12cad13bea13843d53791f46e37e2d2c91a0a4b80" protocol=ttrpc version=3 Sep 4 00:06:41.502594 containerd[1717]: time="2025-09-04T00:06:41.502566199Z" level=info msg="StartContainer for \"c8409a45c1b4537cef4a5982e27d9e07f2737ff6630f82aea952f75313b1f541\" returns successfully" Sep 4 00:06:41.509766 systemd[1]: Started cri-containerd-31933274caa67cc06128eb0790c5eee46188958430df5f4d32bd04801f806477.scope - libcontainer container 31933274caa67cc06128eb0790c5eee46188958430df5f4d32bd04801f806477. 
Sep 4 00:06:41.539730 containerd[1717]: time="2025-09-04T00:06:41.539670753Z" level=info msg="StartContainer for \"31933274caa67cc06128eb0790c5eee46188958430df5f4d32bd04801f806477\" returns successfully" Sep 4 00:06:41.558339 containerd[1717]: time="2025-09-04T00:06:41.558282679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-t4mp9,Uid:6da523f2-f37a-41ae-8a42-7d012bf2a528,Namespace:calico-system,Attempt:0,} returns sandbox id \"76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613\"" Sep 4 00:06:41.559753 containerd[1717]: time="2025-09-04T00:06:41.559721527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 4 00:06:41.593683 systemd-networkd[1514]: vxlan.calico: Gained IPv6LL Sep 4 00:06:41.921833 containerd[1717]: time="2025-09-04T00:06:41.921810427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-778fcdd7dc-bg992,Uid:9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:41.923405 containerd[1717]: time="2025-09-04T00:06:41.923078495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4rms,Uid:dad91c5e-f1ce-4b93-bc07-61538ab43fa7,Namespace:calico-system,Attempt:0,}" Sep 4 00:06:42.100033 kubelet[3131]: I0904 00:06:42.099780 3131 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:06:42.116987 systemd-networkd[1514]: cali3ec6ac78146: Link UP Sep 4 00:06:42.118669 systemd-networkd[1514]: cali3ec6ac78146: Gained carrier Sep 4 00:06:42.140412 kubelet[3131]: I0904 00:06:42.140354 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fwqf2" podStartSLOduration=46.140341114 podStartE2EDuration="46.140341114s" podCreationTimestamp="2025-09-04 00:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:42.124995658 +0000 UTC 
m=+51.278843082" watchObservedRunningTime="2025-09-04 00:06:42.140341114 +0000 UTC m=+51.294188504" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:41.998 [INFO][5231] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0 calico-kube-controllers-778fcdd7dc- calico-system 9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c 803 0 2025-09-04 00:06:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:778fcdd7dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.1.0-n-f08c63113b calico-kube-controllers-778fcdd7dc-bg992 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3ec6ac78146 [] [] }} ContainerID="7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" Namespace="calico-system" Pod="calico-kube-controllers-778fcdd7dc-bg992" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:41.999 [INFO][5231] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" Namespace="calico-system" Pod="calico-kube-controllers-778fcdd7dc-bg992" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.057 [INFO][5256] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" HandleID="k8s-pod-network.7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" Workload="ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0" Sep 4 
00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.058 [INFO][5256] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" HandleID="k8s-pod-network.7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" Workload="ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-f08c63113b", "pod":"calico-kube-controllers-778fcdd7dc-bg992", "timestamp":"2025-09-04 00:06:42.057005825 +0000 UTC"}, Hostname:"ci-4372.1.0-n-f08c63113b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.058 [INFO][5256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.058 [INFO][5256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.058 [INFO][5256] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-f08c63113b' Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.069 [INFO][5256] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.073 [INFO][5256] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.079 [INFO][5256] ipam/ipam.go 511: Trying affinity for 192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.081 [INFO][5256] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.083 [INFO][5256] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.083 [INFO][5256] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.091 [INFO][5256] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414 Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.100 [INFO][5256] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.112 [INFO][5256] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.26.7/26] block=192.168.26.0/26 handle="k8s-pod-network.7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.112 [INFO][5256] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.7/26] handle="k8s-pod-network.7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.112 [INFO][5256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:42.141800 containerd[1717]: 2025-09-04 00:06:42.112 [INFO][5256] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.7/26] IPv6=[] ContainerID="7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" HandleID="k8s-pod-network.7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" Workload="ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0" Sep 4 00:06:42.142963 containerd[1717]: 2025-09-04 00:06:42.114 [INFO][5231] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" Namespace="calico-system" Pod="calico-kube-controllers-778fcdd7dc-bg992" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0", GenerateName:"calico-kube-controllers-778fcdd7dc-", Namespace:"calico-system", SelfLink:"", UID:"9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"778fcdd7dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"", Pod:"calico-kube-controllers-778fcdd7dc-bg992", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ec6ac78146", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:42.142963 containerd[1717]: 2025-09-04 00:06:42.114 [INFO][5231] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.7/32] ContainerID="7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" Namespace="calico-system" Pod="calico-kube-controllers-778fcdd7dc-bg992" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0" Sep 4 00:06:42.142963 containerd[1717]: 2025-09-04 00:06:42.114 [INFO][5231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ec6ac78146 ContainerID="7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" Namespace="calico-system" Pod="calico-kube-controllers-778fcdd7dc-bg992" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0" Sep 4 00:06:42.142963 containerd[1717]: 2025-09-04 00:06:42.119 [INFO][5231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" Namespace="calico-system" 
Pod="calico-kube-controllers-778fcdd7dc-bg992" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0" Sep 4 00:06:42.142963 containerd[1717]: 2025-09-04 00:06:42.121 [INFO][5231] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" Namespace="calico-system" Pod="calico-kube-controllers-778fcdd7dc-bg992" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0", GenerateName:"calico-kube-controllers-778fcdd7dc-", Namespace:"calico-system", SelfLink:"", UID:"9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"778fcdd7dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414", Pod:"calico-kube-controllers-778fcdd7dc-bg992", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ec6ac78146", MAC:"6a:f7:23:02:97:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:42.142963 containerd[1717]: 2025-09-04 00:06:42.138 [INFO][5231] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" Namespace="calico-system" Pod="calico-kube-controllers-778fcdd7dc-bg992" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-calico--kube--controllers--778fcdd7dc--bg992-eth0" Sep 4 00:06:42.208558 systemd-networkd[1514]: cali069add1521f: Link UP Sep 4 00:06:42.210233 systemd-networkd[1514]: cali069add1521f: Gained carrier Sep 4 00:06:42.222905 kubelet[3131]: I0904 00:06:42.222867 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-745d5857c4-4llsq" podStartSLOduration=29.222853584 podStartE2EDuration="29.222853584s" podCreationTimestamp="2025-09-04 00:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:42.164104107 +0000 UTC m=+51.317951512" watchObservedRunningTime="2025-09-04 00:06:42.222853584 +0000 UTC m=+51.376701011" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.014 [INFO][5238] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0 csi-node-driver- calico-system dad91c5e-f1ce-4b93-bc07-61538ab43fa7 697 0 2025-09-04 00:06:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 
ci-4372.1.0-n-f08c63113b csi-node-driver-t4rms eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali069add1521f [] [] }} ContainerID="d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" Namespace="calico-system" Pod="csi-node-driver-t4rms" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.015 [INFO][5238] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" Namespace="calico-system" Pod="csi-node-driver-t4rms" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.068 [INFO][5261] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" HandleID="k8s-pod-network.d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" Workload="ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.068 [INFO][5261] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" HandleID="k8s-pod-network.d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" Workload="ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-f08c63113b", "pod":"csi-node-driver-t4rms", "timestamp":"2025-09-04 00:06:42.068027696 +0000 UTC"}, Hostname:"ci-4372.1.0-n-f08c63113b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 00:06:42.227744 containerd[1717]: 
2025-09-04 00:06:42.069 [INFO][5261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.112 [INFO][5261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.113 [INFO][5261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-f08c63113b' Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.170 [INFO][5261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.177 [INFO][5261] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.181 [INFO][5261] ipam/ipam.go 511: Trying affinity for 192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.184 [INFO][5261] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.186 [INFO][5261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.186 [INFO][5261] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.187 [INFO][5261] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91 Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.195 [INFO][5261] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.0/26 
handle="k8s-pod-network.d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.202 [INFO][5261] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.8/26] block=192.168.26.0/26 handle="k8s-pod-network.d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.202 [INFO][5261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.8/26] handle="k8s-pod-network.d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" host="ci-4372.1.0-n-f08c63113b" Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.202 [INFO][5261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 00:06:42.227744 containerd[1717]: 2025-09-04 00:06:42.202 [INFO][5261] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.8/26] IPv6=[] ContainerID="d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" HandleID="k8s-pod-network.d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" Workload="ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0" Sep 4 00:06:42.228643 containerd[1717]: 2025-09-04 00:06:42.205 [INFO][5238] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" Namespace="calico-system" Pod="csi-node-driver-t4rms" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dad91c5e-f1ce-4b93-bc07-61538ab43fa7", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 16, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"", Pod:"csi-node-driver-t4rms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali069add1521f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:42.228643 containerd[1717]: 2025-09-04 00:06:42.205 [INFO][5238] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.8/32] ContainerID="d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" Namespace="calico-system" Pod="csi-node-driver-t4rms" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0" Sep 4 00:06:42.228643 containerd[1717]: 2025-09-04 00:06:42.205 [INFO][5238] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali069add1521f ContainerID="d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" Namespace="calico-system" Pod="csi-node-driver-t4rms" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0" Sep 4 00:06:42.228643 containerd[1717]: 2025-09-04 00:06:42.210 [INFO][5238] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" Namespace="calico-system" Pod="csi-node-driver-t4rms" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0" Sep 4 00:06:42.228643 containerd[1717]: 2025-09-04 00:06:42.211 [INFO][5238] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" Namespace="calico-system" Pod="csi-node-driver-t4rms" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dad91c5e-f1ce-4b93-bc07-61538ab43fa7", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 0, 6, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-f08c63113b", ContainerID:"d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91", Pod:"csi-node-driver-t4rms", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali069add1521f", MAC:"ca:2d:26:2b:b8:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 00:06:42.228643 containerd[1717]: 2025-09-04 00:06:42.223 [INFO][5238] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" Namespace="calico-system" Pod="csi-node-driver-t4rms" WorkloadEndpoint="ci--4372.1.0--n--f08c63113b-k8s-csi--node--driver--t4rms-eth0" Sep 4 00:06:42.316222 containerd[1717]: time="2025-09-04T00:06:42.316174378Z" level=info msg="connecting to shim 7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414" address="unix:///run/containerd/s/cedc2534df1acb49aa1915f13042e47c4b1f6e5ba3174f6818bd12e7cd810e0d" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:42.328729 containerd[1717]: time="2025-09-04T00:06:42.328700994Z" level=info msg="connecting to shim d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91" address="unix:///run/containerd/s/f01a16567a71b56fc4a8f34a7cae2432a51af41797f0e412358ef2ab50986783" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:42.351749 systemd[1]: Started cri-containerd-7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414.scope - libcontainer container 7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414. Sep 4 00:06:42.354111 systemd[1]: Started cri-containerd-d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91.scope - libcontainer container d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91. 
Sep 4 00:06:42.385102 containerd[1717]: time="2025-09-04T00:06:42.383744576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4rms,Uid:dad91c5e-f1ce-4b93-bc07-61538ab43fa7,Namespace:calico-system,Attempt:0,} returns sandbox id \"d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91\""
Sep 4 00:06:42.398194 containerd[1717]: time="2025-09-04T00:06:42.398178539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-778fcdd7dc-bg992,Uid:9d8bf113-ad2c-477d-b7a0-ea1ea0bd862c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414\""
Sep 4 00:06:42.489971 systemd-networkd[1514]: cali33041836032: Gained IPv6LL
Sep 4 00:06:42.617698 systemd-networkd[1514]: calif8202e415f1: Gained IPv6LL
Sep 4 00:06:42.873993 systemd-networkd[1514]: cali8d61a8c9828: Gained IPv6LL
Sep 4 00:06:43.556130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2381887706.mount: Deactivated successfully.
Sep 4 00:06:43.833918 systemd-networkd[1514]: cali3ec6ac78146: Gained IPv6LL Sep 4 00:06:44.146078 containerd[1717]: time="2025-09-04T00:06:44.146050232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:44.148794 containerd[1717]: time="2025-09-04T00:06:44.148767976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 4 00:06:44.151787 containerd[1717]: time="2025-09-04T00:06:44.151738339Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:44.153934 systemd-networkd[1514]: cali069add1521f: Gained IPv6LL Sep 4 00:06:44.155993 containerd[1717]: time="2025-09-04T00:06:44.155458095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:44.155993 containerd[1717]: time="2025-09-04T00:06:44.155919530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 2.596074541s" Sep 4 00:06:44.155993 containerd[1717]: time="2025-09-04T00:06:44.155943271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 4 00:06:44.156780 containerd[1717]: time="2025-09-04T00:06:44.156763677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 4 00:06:44.157916 containerd[1717]: 
time="2025-09-04T00:06:44.157888740Z" level=info msg="CreateContainer within sandbox \"76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 4 00:06:44.174625 containerd[1717]: time="2025-09-04T00:06:44.174056803Z" level=info msg="Container 98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:44.191365 containerd[1717]: time="2025-09-04T00:06:44.191343531Z" level=info msg="CreateContainer within sandbox \"76864ff75b94458f1b591284a49e12cf766881b3de1d50fbd56d4f541a621613\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\"" Sep 4 00:06:44.193626 containerd[1717]: time="2025-09-04T00:06:44.192510265Z" level=info msg="StartContainer for \"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\"" Sep 4 00:06:44.193626 containerd[1717]: time="2025-09-04T00:06:44.193348863Z" level=info msg="connecting to shim 98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503" address="unix:///run/containerd/s/ad22d4e8b3ba31515e6ac86793b904e2e3ded6830f1bc2a833ed522de6b247ae" protocol=ttrpc version=3 Sep 4 00:06:44.212734 systemd[1]: Started cri-containerd-98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503.scope - libcontainer container 98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503. 
Sep 4 00:06:44.252808 containerd[1717]: time="2025-09-04T00:06:44.252787888Z" level=info msg="StartContainer for \"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\" returns successfully" Sep 4 00:06:45.121868 kubelet[3131]: I0904 00:06:45.121823 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-t4mp9" podStartSLOduration=27.524374499 podStartE2EDuration="30.121811135s" podCreationTimestamp="2025-09-04 00:06:15 +0000 UTC" firstStartedPulling="2025-09-04 00:06:41.559186578 +0000 UTC m=+50.713033971" lastFinishedPulling="2025-09-04 00:06:44.156623209 +0000 UTC m=+53.310470607" observedRunningTime="2025-09-04 00:06:45.120367164 +0000 UTC m=+54.274214568" watchObservedRunningTime="2025-09-04 00:06:45.121811135 +0000 UTC m=+54.275658538" Sep 4 00:06:45.176752 containerd[1717]: time="2025-09-04T00:06:45.176730071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\" id:\"26c447e798fc1801a7486336f89514c83a74ce391a2fa249736aeeb357d56d0a\" pid:5448 exit_status:1 exited_at:{seconds:1756944405 nanos:176415092}" Sep 4 00:06:45.376416 containerd[1717]: time="2025-09-04T00:06:45.376358612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:45.378744 containerd[1717]: time="2025-09-04T00:06:45.378715009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 4 00:06:45.381598 containerd[1717]: time="2025-09-04T00:06:45.381574874Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:45.385157 containerd[1717]: time="2025-09-04T00:06:45.385119847Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:45.385439 containerd[1717]: time="2025-09-04T00:06:45.385420055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.228566861s" Sep 4 00:06:45.385482 containerd[1717]: time="2025-09-04T00:06:45.385444501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 4 00:06:45.386216 containerd[1717]: time="2025-09-04T00:06:45.386158579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 4 00:06:45.387314 containerd[1717]: time="2025-09-04T00:06:45.387292545Z" level=info msg="CreateContainer within sandbox \"d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 00:06:45.407656 containerd[1717]: time="2025-09-04T00:06:45.406697010Z" level=info msg="Container 09a9c33a205c8eb3c506af5c98eb82af3bc1680776333c65ef21b2b6702d2da5: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:45.421698 containerd[1717]: time="2025-09-04T00:06:45.421671981Z" level=info msg="CreateContainer within sandbox \"d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"09a9c33a205c8eb3c506af5c98eb82af3bc1680776333c65ef21b2b6702d2da5\"" Sep 4 00:06:45.422208 containerd[1717]: time="2025-09-04T00:06:45.422185619Z" level=info msg="StartContainer for \"09a9c33a205c8eb3c506af5c98eb82af3bc1680776333c65ef21b2b6702d2da5\"" Sep 
4 00:06:45.423454 containerd[1717]: time="2025-09-04T00:06:45.423415392Z" level=info msg="connecting to shim 09a9c33a205c8eb3c506af5c98eb82af3bc1680776333c65ef21b2b6702d2da5" address="unix:///run/containerd/s/f01a16567a71b56fc4a8f34a7cae2432a51af41797f0e412358ef2ab50986783" protocol=ttrpc version=3 Sep 4 00:06:45.441727 systemd[1]: Started cri-containerd-09a9c33a205c8eb3c506af5c98eb82af3bc1680776333c65ef21b2b6702d2da5.scope - libcontainer container 09a9c33a205c8eb3c506af5c98eb82af3bc1680776333c65ef21b2b6702d2da5. Sep 4 00:06:45.469997 containerd[1717]: time="2025-09-04T00:06:45.469978508Z" level=info msg="StartContainer for \"09a9c33a205c8eb3c506af5c98eb82af3bc1680776333c65ef21b2b6702d2da5\" returns successfully" Sep 4 00:06:46.169822 containerd[1717]: time="2025-09-04T00:06:46.169779841Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\" id:\"b7d3023ce7ca7c77cd08c9d31d1c8d9fb77fb812222c539d990870c564cde62e\" pid:5504 exited_at:{seconds:1756944406 nanos:169449488}" Sep 4 00:06:48.168053 containerd[1717]: time="2025-09-04T00:06:48.168024428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:48.170556 containerd[1717]: time="2025-09-04T00:06:48.170452733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 4 00:06:48.173631 containerd[1717]: time="2025-09-04T00:06:48.173294651Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:48.176587 containerd[1717]: time="2025-09-04T00:06:48.176552857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:48.177152 containerd[1717]: time="2025-09-04T00:06:48.176860181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.790680179s" Sep 4 00:06:48.177152 containerd[1717]: time="2025-09-04T00:06:48.176885605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 4 00:06:48.177731 containerd[1717]: time="2025-09-04T00:06:48.177711723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 4 00:06:48.189825 containerd[1717]: time="2025-09-04T00:06:48.189802130Z" level=info msg="CreateContainer within sandbox \"7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 00:06:48.213233 containerd[1717]: time="2025-09-04T00:06:48.212920227Z" level=info msg="Container e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:48.229521 containerd[1717]: time="2025-09-04T00:06:48.229495895Z" level=info msg="CreateContainer within sandbox \"7140bc445b6a5cc161c34d0430b7fa5e811d199139ba0ba36cf284d43c712414\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0\"" Sep 4 00:06:48.229884 containerd[1717]: time="2025-09-04T00:06:48.229861162Z" level=info msg="StartContainer for \"e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0\"" Sep 4 00:06:48.231635 
containerd[1717]: time="2025-09-04T00:06:48.231592796Z" level=info msg="connecting to shim e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0" address="unix:///run/containerd/s/cedc2534df1acb49aa1915f13042e47c4b1f6e5ba3174f6818bd12e7cd810e0d" protocol=ttrpc version=3 Sep 4 00:06:48.254721 systemd[1]: Started cri-containerd-e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0.scope - libcontainer container e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0. Sep 4 00:06:48.298037 containerd[1717]: time="2025-09-04T00:06:48.297987047Z" level=info msg="StartContainer for \"e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0\" returns successfully" Sep 4 00:06:49.132924 kubelet[3131]: I0904 00:06:49.132787 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-778fcdd7dc-bg992" podStartSLOduration=27.354313826 podStartE2EDuration="33.132771941s" podCreationTimestamp="2025-09-04 00:06:16 +0000 UTC" firstStartedPulling="2025-09-04 00:06:42.398872673 +0000 UTC m=+51.552720070" lastFinishedPulling="2025-09-04 00:06:48.177330792 +0000 UTC m=+57.331178185" observedRunningTime="2025-09-04 00:06:49.132317378 +0000 UTC m=+58.286164780" watchObservedRunningTime="2025-09-04 00:06:49.132771941 +0000 UTC m=+58.286619347" Sep 4 00:06:49.156477 containerd[1717]: time="2025-09-04T00:06:49.156457810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0\" id:\"6ef5bff08b5676132b39c403dca3c87232b3727027d5a24f6ab0a280497712aa\" pid:5580 exited_at:{seconds:1756944409 nanos:156286908}" Sep 4 00:06:49.534769 containerd[1717]: time="2025-09-04T00:06:49.534675281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:49.537911 containerd[1717]: time="2025-09-04T00:06:49.537887127Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 4 00:06:49.540457 containerd[1717]: time="2025-09-04T00:06:49.540434312Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:49.545399 containerd[1717]: time="2025-09-04T00:06:49.545375153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:49.547360 containerd[1717]: time="2025-09-04T00:06:49.547332184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.369442926s" Sep 4 00:06:49.547427 containerd[1717]: time="2025-09-04T00:06:49.547363886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 4 00:06:49.548753 containerd[1717]: time="2025-09-04T00:06:49.548733414Z" level=info msg="CreateContainer within sandbox \"d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 00:06:49.568117 containerd[1717]: time="2025-09-04T00:06:49.568029738Z" level=info msg="Container 450e2cbc90bee2cbdeee9bd6711fde287ad1488f76c71440d23ab474c4ba700f: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:49.572097 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1764544846.mount: Deactivated successfully. Sep 4 00:06:49.588833 containerd[1717]: time="2025-09-04T00:06:49.588792264Z" level=info msg="CreateContainer within sandbox \"d976c74ccffa0921269ea02ba7905cb763405f8edb0e21b6b5e7572532aedc91\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"450e2cbc90bee2cbdeee9bd6711fde287ad1488f76c71440d23ab474c4ba700f\"" Sep 4 00:06:49.590041 containerd[1717]: time="2025-09-04T00:06:49.590018412Z" level=info msg="StartContainer for \"450e2cbc90bee2cbdeee9bd6711fde287ad1488f76c71440d23ab474c4ba700f\"" Sep 4 00:06:49.591693 containerd[1717]: time="2025-09-04T00:06:49.591668885Z" level=info msg="connecting to shim 450e2cbc90bee2cbdeee9bd6711fde287ad1488f76c71440d23ab474c4ba700f" address="unix:///run/containerd/s/f01a16567a71b56fc4a8f34a7cae2432a51af41797f0e412358ef2ab50986783" protocol=ttrpc version=3 Sep 4 00:06:49.619746 systemd[1]: Started cri-containerd-450e2cbc90bee2cbdeee9bd6711fde287ad1488f76c71440d23ab474c4ba700f.scope - libcontainer container 450e2cbc90bee2cbdeee9bd6711fde287ad1488f76c71440d23ab474c4ba700f. 
Sep 4 00:06:49.652036 containerd[1717]: time="2025-09-04T00:06:49.652016749Z" level=info msg="StartContainer for \"450e2cbc90bee2cbdeee9bd6711fde287ad1488f76c71440d23ab474c4ba700f\" returns successfully" Sep 4 00:06:50.009130 kubelet[3131]: I0904 00:06:50.009112 3131 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 00:06:50.009130 kubelet[3131]: I0904 00:06:50.009134 3131 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 00:06:50.134424 kubelet[3131]: I0904 00:06:50.134366 3131 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t4rms" podStartSLOduration=26.972814492 podStartE2EDuration="34.134350743s" podCreationTimestamp="2025-09-04 00:06:16 +0000 UTC" firstStartedPulling="2025-09-04 00:06:42.386237453 +0000 UTC m=+51.540084844" lastFinishedPulling="2025-09-04 00:06:49.547773703 +0000 UTC m=+58.701621095" observedRunningTime="2025-09-04 00:06:50.134008953 +0000 UTC m=+59.287856360" watchObservedRunningTime="2025-09-04 00:06:50.134350743 +0000 UTC m=+59.288198148" Sep 4 00:06:51.229217 containerd[1717]: time="2025-09-04T00:06:51.229190853Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb\" id:\"106dce6462c01945e27bb2f91fa973c6556edfad915d6d6235468acdfd3c52cc\" pid:5648 exited_at:{seconds:1756944411 nanos:229001944}" Sep 4 00:06:52.027911 kubelet[3131]: I0904 00:06:52.027887 3131 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 00:06:56.117541 containerd[1717]: time="2025-09-04T00:06:56.117437124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0\" 
id:\"7d8ea6fd4a45b50e5e62fca920c004ec48093b19136eacc51255590ddc9a17cd\" pid:5676 exited_at:{seconds:1756944416 nanos:117209414}" Sep 4 00:06:57.669680 containerd[1717]: time="2025-09-04T00:06:57.669641236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\" id:\"2505f87e6ca8728daa6e1b55ca1a1c7170b60b6d275b3e0d146ca989fdd47760\" pid:5699 exited_at:{seconds:1756944417 nanos:668106961}" Sep 4 00:07:08.993644 containerd[1717]: time="2025-09-04T00:07:08.993568780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\" id:\"ca630b75d166d743bc7404537bc5f3afe7246626c38a14d18554d382cc55cb24\" pid:5731 exited_at:{seconds:1756944428 nanos:993341989}" Sep 4 00:07:21.240489 containerd[1717]: time="2025-09-04T00:07:21.240400615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb\" id:\"32689804e98057b0825e4d77c906fe6e057fa55980b0be1e60c0c500b296a6bf\" pid:5761 exited_at:{seconds:1756944441 nanos:240065707}" Sep 4 00:07:26.109255 containerd[1717]: time="2025-09-04T00:07:26.109192518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0\" id:\"01c8c7c745bd5bcefedc8a8b1ccabf44dcb416d61b1231b756eb50dee69db51e\" pid:5788 exited_at:{seconds:1756944446 nanos:108991274}" Sep 4 00:07:27.657274 containerd[1717]: time="2025-09-04T00:07:27.657232489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\" id:\"319a121622ea45bdad1d652eb5603f6273575c1698d3314e7ca853bb356d3e78\" pid:5810 exited_at:{seconds:1756944447 nanos:657019991}" Sep 4 00:07:47.550500 systemd[1]: Started sshd@7-10.200.8.18:22-10.200.16.10:44408.service - OpenSSH per-connection server daemon (10.200.16.10:44408). 
Sep 4 00:07:48.186099 sshd[5829]: Accepted publickey for core from 10.200.16.10 port 44408 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:48.187033 sshd-session[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:48.190592 systemd-logind[1692]: New session 10 of user core.
Sep 4 00:07:48.193774 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 00:07:48.691055 sshd[5831]: Connection closed by 10.200.16.10 port 44408
Sep 4 00:07:48.692049 sshd-session[5829]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:48.694429 systemd-logind[1692]: Session 10 logged out. Waiting for processes to exit.
Sep 4 00:07:48.694619 systemd[1]: sshd@7-10.200.8.18:22-10.200.16.10:44408.service: Deactivated successfully.
Sep 4 00:07:48.696249 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 00:07:48.697982 systemd-logind[1692]: Removed session 10.
Sep 4 00:07:51.311950 containerd[1717]: time="2025-09-04T00:07:51.311905736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb\" id:\"43fb038efe72fdfacc974229ce0e68ac17c3bc94493d3e27fbf60f85dcd8b17f\" pid:5855 exited_at:{seconds:1756944471 nanos:311616458}"
Sep 4 00:07:53.814860 systemd[1]: Started sshd@8-10.200.8.18:22-10.200.16.10:54306.service - OpenSSH per-connection server daemon (10.200.16.10:54306).
Sep 4 00:07:54.451379 sshd[5867]: Accepted publickey for core from 10.200.16.10 port 54306 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:07:54.452181 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:07:54.456328 systemd-logind[1692]: New session 11 of user core.
Sep 4 00:07:54.462741 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 00:07:54.965614 sshd[5869]: Connection closed by 10.200.16.10 port 54306
Sep 4 00:07:54.966835 sshd-session[5867]: pam_unix(sshd:session): session closed for user core
Sep 4 00:07:54.969401 systemd[1]: sshd@8-10.200.8.18:22-10.200.16.10:54306.service: Deactivated successfully.
Sep 4 00:07:54.971367 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 00:07:54.972043 systemd-logind[1692]: Session 11 logged out. Waiting for processes to exit.
Sep 4 00:07:54.973545 systemd-logind[1692]: Removed session 11.
Sep 4 00:07:56.242104 containerd[1717]: time="2025-09-04T00:07:56.242066859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0\" id:\"c28d40f4bd039a069bc955e7c3c2b401bf55948078937c0d33ce42631fc50ab1\" pid:5892 exited_at:{seconds:1756944476 nanos:241816790}"
Sep 4 00:07:56.747272 containerd[1717]: time="2025-09-04T00:07:56.747189030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0\" id:\"54a941b2cb2489e859ece5fe6164f0142698ebdd29161cc3ead4765d584b6ad1\" pid:5913 exited_at:{seconds:1756944476 nanos:747006646}"
Sep 4 00:07:57.657301 containerd[1717]: time="2025-09-04T00:07:57.657272993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\" id:\"9ab603e2842894356dba7925790c9f1f763f1aa8574aa8fbd08333925fb2c803\" pid:5935 exited_at:{seconds:1756944477 nanos:657103043}"
Sep 4 00:08:00.078421 systemd[1]: Started sshd@9-10.200.8.18:22-10.200.16.10:39754.service - OpenSSH per-connection server daemon (10.200.16.10:39754).
Sep 4 00:08:00.716889 sshd[5953]: Accepted publickey for core from 10.200.16.10 port 39754 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:00.717755 sshd-session[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:00.721669 systemd-logind[1692]: New session 12 of user core.
Sep 4 00:08:00.725772 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 00:08:01.210889 sshd[5955]: Connection closed by 10.200.16.10 port 39754
Sep 4 00:08:01.211763 sshd-session[5953]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:01.214147 systemd-logind[1692]: Session 12 logged out. Waiting for processes to exit.
Sep 4 00:08:01.214655 systemd[1]: sshd@9-10.200.8.18:22-10.200.16.10:39754.service: Deactivated successfully.
Sep 4 00:08:01.216269 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 00:08:01.217464 systemd-logind[1692]: Removed session 12.
Sep 4 00:08:01.327127 systemd[1]: Started sshd@10-10.200.8.18:22-10.200.16.10:39758.service - OpenSSH per-connection server daemon (10.200.16.10:39758).
Sep 4 00:08:01.976381 sshd[5968]: Accepted publickey for core from 10.200.16.10 port 39758 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:01.976765 sshd-session[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:01.979943 systemd-logind[1692]: New session 13 of user core.
Sep 4 00:08:01.982761 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 00:08:02.488292 sshd[5970]: Connection closed by 10.200.16.10 port 39758
Sep 4 00:08:02.488681 sshd-session[5968]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:02.490977 systemd[1]: sshd@10-10.200.8.18:22-10.200.16.10:39758.service: Deactivated successfully.
Sep 4 00:08:02.492078 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 00:08:02.492992 systemd-logind[1692]: Session 13 logged out. Waiting for processes to exit.
Sep 4 00:08:02.494592 systemd-logind[1692]: Removed session 13.
Sep 4 00:08:02.599756 systemd[1]: Started sshd@11-10.200.8.18:22-10.200.16.10:39768.service - OpenSSH per-connection server daemon (10.200.16.10:39768).
Sep 4 00:08:03.236029 sshd[5981]: Accepted publickey for core from 10.200.16.10 port 39768 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:03.236849 sshd-session[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:03.239945 systemd-logind[1692]: New session 14 of user core.
Sep 4 00:08:03.247718 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 00:08:03.810212 sshd[5987]: Connection closed by 10.200.16.10 port 39768
Sep 4 00:08:03.811738 sshd-session[5981]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:03.813872 systemd-logind[1692]: Session 14 logged out. Waiting for processes to exit.
Sep 4 00:08:03.817183 systemd[1]: sshd@11-10.200.8.18:22-10.200.16.10:39768.service: Deactivated successfully.
Sep 4 00:08:03.819059 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 00:08:03.820964 systemd-logind[1692]: Removed session 14.
Sep 4 00:08:08.835056 systemd[1]: Started sshd@12-10.200.8.18:22-10.200.16.10:39782.service - OpenSSH per-connection server daemon (10.200.16.10:39782).
Sep 4 00:08:08.981207 containerd[1717]: time="2025-09-04T00:08:08.981102922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\" id:\"b374b948e4b42774b7ef7d6c0a25d3c8aea889397be4d202489d625f1c29eb53\" pid:6015 exited_at:{seconds:1756944488 nanos:980912051}"
Sep 4 00:08:09.469810 sshd[6001]: Accepted publickey for core from 10.200.16.10 port 39782 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:09.470848 sshd-session[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:09.474848 systemd-logind[1692]: New session 15 of user core.
Sep 4 00:08:09.478773 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 00:08:09.961530 sshd[6027]: Connection closed by 10.200.16.10 port 39782
Sep 4 00:08:09.961872 sshd-session[6001]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:09.963585 systemd[1]: sshd@12-10.200.8.18:22-10.200.16.10:39782.service: Deactivated successfully.
Sep 4 00:08:09.965974 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 00:08:09.966962 systemd-logind[1692]: Session 15 logged out. Waiting for processes to exit.
Sep 4 00:08:09.967686 systemd-logind[1692]: Removed session 15.
Sep 4 00:08:15.074010 systemd[1]: Started sshd@13-10.200.8.18:22-10.200.16.10:34654.service - OpenSSH per-connection server daemon (10.200.16.10:34654).
Sep 4 00:08:15.723438 sshd[6046]: Accepted publickey for core from 10.200.16.10 port 34654 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:15.724243 sshd-session[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:15.727729 systemd-logind[1692]: New session 16 of user core.
Sep 4 00:08:15.731732 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 00:08:16.215172 sshd[6048]: Connection closed by 10.200.16.10 port 34654
Sep 4 00:08:16.216072 sshd-session[6046]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:16.218313 systemd-logind[1692]: Session 16 logged out. Waiting for processes to exit.
Sep 4 00:08:16.218473 systemd[1]: sshd@13-10.200.8.18:22-10.200.16.10:34654.service: Deactivated successfully.
Sep 4 00:08:16.220250 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 00:08:16.221713 systemd-logind[1692]: Removed session 16.
Sep 4 00:08:21.236312 containerd[1717]: time="2025-09-04T00:08:21.236275268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb\" id:\"7f6f4240e035285760d612eb55d7c4f1f11a824c1a4674f67ce57b4797c06dfc\" pid:6086 exited_at:{seconds:1756944501 nanos:235950882}"
Sep 4 00:08:21.328244 systemd[1]: Started sshd@14-10.200.8.18:22-10.200.16.10:35494.service - OpenSSH per-connection server daemon (10.200.16.10:35494).
Sep 4 00:08:21.968336 sshd[6099]: Accepted publickey for core from 10.200.16.10 port 35494 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:21.969297 sshd-session[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:21.976804 systemd-logind[1692]: New session 17 of user core.
Sep 4 00:08:21.980744 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 00:08:22.483365 sshd[6101]: Connection closed by 10.200.16.10 port 35494
Sep 4 00:08:22.483762 sshd-session[6099]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:22.487141 systemd[1]: sshd@14-10.200.8.18:22-10.200.16.10:35494.service: Deactivated successfully.
Sep 4 00:08:22.489718 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 00:08:22.493086 systemd-logind[1692]: Session 17 logged out. Waiting for processes to exit.
Sep 4 00:08:22.494119 systemd-logind[1692]: Removed session 17.
Sep 4 00:08:22.597767 systemd[1]: Started sshd@15-10.200.8.18:22-10.200.16.10:35504.service - OpenSSH per-connection server daemon (10.200.16.10:35504).
Sep 4 00:08:23.262530 sshd[6114]: Accepted publickey for core from 10.200.16.10 port 35504 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:23.263419 sshd-session[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:23.266917 systemd-logind[1692]: New session 18 of user core.
Sep 4 00:08:23.271732 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 00:08:23.833675 sshd[6116]: Connection closed by 10.200.16.10 port 35504
Sep 4 00:08:23.836713 sshd-session[6114]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:23.839876 systemd-logind[1692]: Session 18 logged out. Waiting for processes to exit.
Sep 4 00:08:23.841871 systemd[1]: sshd@15-10.200.8.18:22-10.200.16.10:35504.service: Deactivated successfully.
Sep 4 00:08:23.844454 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 00:08:23.848395 systemd-logind[1692]: Removed session 18.
Sep 4 00:08:23.946820 systemd[1]: Started sshd@16-10.200.8.18:22-10.200.16.10:35508.service - OpenSSH per-connection server daemon (10.200.16.10:35508).
Sep 4 00:08:24.585687 sshd[6126]: Accepted publickey for core from 10.200.16.10 port 35508 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:24.586334 sshd-session[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:24.590102 systemd-logind[1692]: New session 19 of user core.
Sep 4 00:08:24.597745 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 00:08:26.218199 containerd[1717]: time="2025-09-04T00:08:26.218155217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0\" id:\"8e2093e9b586315571c4cac91c2da4226771796de43d8110a5c86b7610d5f5d7\" pid:6149 exited_at:{seconds:1756944506 nanos:217881683}"
Sep 4 00:08:26.644889 sshd[6128]: Connection closed by 10.200.16.10 port 35508
Sep 4 00:08:26.645352 sshd-session[6126]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:26.648305 systemd[1]: sshd@16-10.200.8.18:22-10.200.16.10:35508.service: Deactivated successfully.
Sep 4 00:08:26.650447 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 00:08:26.650648 systemd[1]: session-19.scope: Consumed 374ms CPU time, 78.7M memory peak.
Sep 4 00:08:26.651249 systemd-logind[1692]: Session 19 logged out. Waiting for processes to exit.
Sep 4 00:08:26.652486 systemd-logind[1692]: Removed session 19.
Sep 4 00:08:26.756490 systemd[1]: Started sshd@17-10.200.8.18:22-10.200.16.10:35520.service - OpenSSH per-connection server daemon (10.200.16.10:35520).
Sep 4 00:08:27.393188 sshd[6166]: Accepted publickey for core from 10.200.16.10 port 35520 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:27.394045 sshd-session[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:27.397977 systemd-logind[1692]: New session 20 of user core.
Sep 4 00:08:27.401743 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 00:08:27.656338 containerd[1717]: time="2025-09-04T00:08:27.656272949Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\" id:\"724f0ea5c266c6ef5cdf9474296cfdb71d3a194ba88b808145310d373b1e89d4\" pid:6182 exited_at:{seconds:1756944507 nanos:656079166}"
Sep 4 00:08:27.956805 sshd[6168]: Connection closed by 10.200.16.10 port 35520
Sep 4 00:08:27.957762 sshd-session[6166]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:27.959516 systemd[1]: sshd@17-10.200.8.18:22-10.200.16.10:35520.service: Deactivated successfully.
Sep 4 00:08:27.960992 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 00:08:27.962160 systemd-logind[1692]: Session 20 logged out. Waiting for processes to exit.
Sep 4 00:08:27.963939 systemd-logind[1692]: Removed session 20.
Sep 4 00:08:28.068039 systemd[1]: Started sshd@18-10.200.8.18:22-10.200.16.10:35528.service - OpenSSH per-connection server daemon (10.200.16.10:35528).
Sep 4 00:08:28.703398 sshd[6200]: Accepted publickey for core from 10.200.16.10 port 35528 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:28.704121 sshd-session[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:28.707636 systemd-logind[1692]: New session 21 of user core.
Sep 4 00:08:28.714733 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 00:08:29.191271 sshd[6202]: Connection closed by 10.200.16.10 port 35528
Sep 4 00:08:29.191739 sshd-session[6200]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:29.193548 systemd[1]: sshd@18-10.200.8.18:22-10.200.16.10:35528.service: Deactivated successfully.
Sep 4 00:08:29.195263 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 00:08:29.196352 systemd-logind[1692]: Session 21 logged out. Waiting for processes to exit.
Sep 4 00:08:29.197242 systemd-logind[1692]: Removed session 21.
Sep 4 00:08:34.311836 systemd[1]: Started sshd@19-10.200.8.18:22-10.200.16.10:59210.service - OpenSSH per-connection server daemon (10.200.16.10:59210).
Sep 4 00:08:34.958479 sshd[6217]: Accepted publickey for core from 10.200.16.10 port 59210 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:34.959501 sshd-session[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:34.963439 systemd-logind[1692]: New session 22 of user core.
Sep 4 00:08:34.968808 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 00:08:35.456632 sshd[6221]: Connection closed by 10.200.16.10 port 59210
Sep 4 00:08:35.457496 sshd-session[6217]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:35.459924 systemd[1]: sshd@19-10.200.8.18:22-10.200.16.10:59210.service: Deactivated successfully.
Sep 4 00:08:35.461546 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 00:08:35.464385 systemd-logind[1692]: Session 22 logged out. Waiting for processes to exit.
Sep 4 00:08:35.465574 systemd-logind[1692]: Removed session 22.
Sep 4 00:08:40.567790 systemd[1]: Started sshd@20-10.200.8.18:22-10.200.16.10:41594.service - OpenSSH per-connection server daemon (10.200.16.10:41594).
Sep 4 00:08:41.212037 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 41594 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:41.212887 sshd-session[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:41.216618 systemd-logind[1692]: New session 23 of user core.
Sep 4 00:08:41.223726 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 00:08:41.702852 sshd[6235]: Connection closed by 10.200.16.10 port 41594
Sep 4 00:08:41.703852 sshd-session[6233]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:41.706083 systemd[1]: sshd@20-10.200.8.18:22-10.200.16.10:41594.service: Deactivated successfully.
Sep 4 00:08:41.708103 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 00:08:41.708858 systemd-logind[1692]: Session 23 logged out. Waiting for processes to exit.
Sep 4 00:08:41.710065 systemd-logind[1692]: Removed session 23.
Sep 4 00:08:46.814827 systemd[1]: Started sshd@21-10.200.8.18:22-10.200.16.10:41598.service - OpenSSH per-connection server daemon (10.200.16.10:41598).
Sep 4 00:08:47.449106 sshd[6247]: Accepted publickey for core from 10.200.16.10 port 41598 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:47.449927 sshd-session[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:47.453465 systemd-logind[1692]: New session 24 of user core.
Sep 4 00:08:47.460732 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 00:08:47.938244 sshd[6249]: Connection closed by 10.200.16.10 port 41598
Sep 4 00:08:47.938576 sshd-session[6247]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:47.940338 systemd[1]: sshd@21-10.200.8.18:22-10.200.16.10:41598.service: Deactivated successfully.
Sep 4 00:08:47.942062 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 00:08:47.942886 systemd-logind[1692]: Session 24 logged out. Waiting for processes to exit.
Sep 4 00:08:47.943769 systemd-logind[1692]: Removed session 24.
Sep 4 00:08:51.231801 containerd[1717]: time="2025-09-04T00:08:51.231757563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c22ab941081cb9304b4e2afdd0764a4d2e27ec965bbb4cd7d98bbea3e06e31eb\" id:\"96c81f198edb4263370f711f2501f3c837797bad6bf490cb25a4bcb3833f5d81\" pid:6274 exited_at:{seconds:1756944531 nanos:231485521}"
Sep 4 00:08:53.059821 systemd[1]: Started sshd@22-10.200.8.18:22-10.200.16.10:55830.service - OpenSSH per-connection server daemon (10.200.16.10:55830).
Sep 4 00:08:53.708452 sshd[6287]: Accepted publickey for core from 10.200.16.10 port 55830 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:53.710085 sshd-session[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:53.717560 systemd-logind[1692]: New session 25 of user core.
Sep 4 00:08:53.723984 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 00:08:54.204515 sshd[6289]: Connection closed by 10.200.16.10 port 55830
Sep 4 00:08:54.205835 sshd-session[6287]: pam_unix(sshd:session): session closed for user core
Sep 4 00:08:54.208197 systemd[1]: sshd@22-10.200.8.18:22-10.200.16.10:55830.service: Deactivated successfully.
Sep 4 00:08:54.209952 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 00:08:54.210784 systemd-logind[1692]: Session 25 logged out. Waiting for processes to exit.
Sep 4 00:08:54.212436 systemd-logind[1692]: Removed session 25.
Sep 4 00:08:56.118736 containerd[1717]: time="2025-09-04T00:08:56.118685836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0\" id:\"df36a52b91f8c576c5b7fbf8e71fa4051fc073906577800143dae479ce4b8a9d\" pid:6312 exited_at:{seconds:1756944536 nanos:118432404}"
Sep 4 00:08:56.750910 containerd[1717]: time="2025-09-04T00:08:56.750876228Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e10a0104421b04184d816a77e5e1de4366a72f003ec3216d51d270f7ee07a3a0\" id:\"3b7a0e1f7692d83a0ebd040c1a056927c4f79411f61b330fc67b1e6626d5cf78\" pid:6335 exited_at:{seconds:1756944536 nanos:750655424}"
Sep 4 00:08:57.656935 containerd[1717]: time="2025-09-04T00:08:57.656884031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98c9ec80e544e460904fe188d7ffe5258fd4d10b91cd778e5f8a84e41604d503\" id:\"951b6dca1949d623ca72318e2d9ef67b757f5d32388ba42b4149193be3410050\" pid:6356 exited_at:{seconds:1756944537 nanos:656564231}"
Sep 4 00:08:59.317208 systemd[1]: Started sshd@23-10.200.8.18:22-10.200.16.10:55834.service - OpenSSH per-connection server daemon (10.200.16.10:55834).
Sep 4 00:08:59.950639 sshd[6368]: Accepted publickey for core from 10.200.16.10 port 55834 ssh2: RSA SHA256:ajs8UC12FXDHzSNmKcWAWo1SQQptJoO4PYz1sgPm2w4
Sep 4 00:08:59.951485 sshd-session[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:08:59.955293 systemd-logind[1692]: New session 26 of user core.
Sep 4 00:08:59.959743 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 00:09:00.438217 sshd[6370]: Connection closed by 10.200.16.10 port 55834
Sep 4 00:09:00.438768 sshd-session[6368]: pam_unix(sshd:session): session closed for user core
Sep 4 00:09:00.440907 systemd[1]: sshd@23-10.200.8.18:22-10.200.16.10:55834.service: Deactivated successfully.
Sep 4 00:09:00.442683 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 00:09:00.443880 systemd-logind[1692]: Session 26 logged out. Waiting for processes to exit.
Sep 4 00:09:00.445094 systemd-logind[1692]: Removed session 26.
Sep 4 00:09:04.500226 kubelet[3131]: E0904 00:09:04.500198 3131 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: EOF"